Humanoid robots are coming into our homes, but they probably won’t be doing your laundry anytime soon.
In this episode of TechFirst, host John Koetsier sits down with Jan Liphardt, founder & CEO of OpenMind and Stanford bioengineering professor, to unpack what home robots will actually do in the near future … and why the “labor-free home” vision is mostly a myth (for now).
Jan explains why hands are still one of the hardest unsolved problems in robotics, why folding laundry is far harder than it looks, and why the most valuable early use cases for home robots aren’t chores at all.
- Get the deepest insights concisely on the TechFirst Substack newsletter
- Subscribe to the TechFirst YouTube channel to never miss an episode
And, watch our conversation here:
Instead, we explore where robots are already delivering real value today:
- Health companionship and fall detection for aging parents
- Personalized education for kids, beyond screens
- Home security that respects privacy
- And why people form emotional bonds with robots faster than expected
We also dive into OM1, OpenMind’s open-source, AI-native operating system for robots, and why openness, transparency, and configurability will matter deeply as robots move from factories into our living rooms.
If you’re curious about the real future of humanoid robots … what’s hype, what’s possible today, and what’s coming next, this conversation is for you.
Transcript
Note: this is a partially AI-generated transcript. It may not be 100% correct. Check the video for exact quotations.
John Koetsier:
The pitch for humanoid robots in the home so far has been that they’ll do all the work. What if that’s not true, actually?
Hello and welcome to TechFirst. My name is John Koetsier. LG recently revealed a humanoid robot—I wrote about it on Forbes—along with a vision of a labor-free home, which sounds pretty freakishly good to me, and probably to you too. But what if that’s not what humanoid robots are going to do, at least not initially?
Today we’re chatting with Jan Liphardt, founder and CEO of OpenMind. He’s a Stanford bioengineering professor and has spent his career at the intersection of AI, biology, data, and hardware. Now he’s building OM1. It’s an open-source, AI-native operating system for robots. And yeah—he’s saying they’re not going to mop the floor. They’re not going to do our laundry. They will help us in other ways.
Welcome, Jan. How are you doing?
Jan:
Super to be here, John. Good morning.
John Koetsier:
Good morning. Yeah, it’s morning for both of us. We’re on the west coast of the North American continent, which is wonderful. I’m not inconveniencing anybody, so that’s great.
But you know what? I really, really, really want my guilt-free labor droid. Why can’t I have it?
Jan:
Well, we all, of course, want an easy, happy life. When you think about your home, and when you think about humanoids, and when you think about where the technology is today, there are certain things that humanoids can already do right now.
And then we can imagine, in five or ten or fifteen years, what additional things they’ll be able to do beyond what they can do today. And as you might imagine, some of the most difficult things in robotics relate to expensive, failure-prone parts of the robot—most obviously the hands and fingers. That’s referred to as the “hand problem.”
So if you want to deploy humanoids today into hospitals, schools, homes, workplaces, then what we’re generally seeing is that these use cases that are viable today really are different from what most people imagine.
Presumably when you were a little kid, you read books and saw movies about humanoids mopping your floor, or scary movies of police humanoids or aliens—Terminator, exactly.
And the good news is that none of those futures are coming true this year. What’s happening this year is that humanoids may not even have hands. But they’re still able to do things that a lot of people think are very useful.
And the other consideration is that a humanoid certainly will cost some money. The tasks that the humanoid will do—one way to prioritize those tasks is according to how much you have to pay for that task today to get a human to do it.
Imagine, for example, there’s something in your home and you paid a human $300 per hour to do it. That would be an obvious area for automation. But for lower-paid jobs like loading your dishwasher or folding your laundry, the financials make less sense, at least from the perspective of good use cases for humanoids today.
So the good news is that the technology is advancing quickly, and things like wet-wiping your floor will happen at some point, but it’s not where the next big wave of robots will be focused.
John Koetsier:
Yeah, it’s super interesting. There’s a ton of things in what you just said to dive into.
It reminds me of Geordie Rose, who used to be the CEO of Sanctuary AI—they’re building a humanoid robot in Vancouver, Canada. He said half the complexity of a robot is in the hands. And that’s really, really challenging: making good hands.
We see grippers, we see three-finger hands, we see these blocky things that don’t do much. And then we see very sophisticated hands.
Your comments about maintainability and repairability—how often they break—are really important. Can a robot wash its hands? If it gets sticky stuff like honey on its hands because it’s making lunches for the kids, can it wash its hands? Can it dry its hands?
We don’t even think about these things as humans because we have these incredible, dexterous, self-repairing, mobile, strong—and yet gentle and very sensitive—appendages that work so well. But they’re really, really hard to make.
I might be a bit more bullish—and you’re the expert, so you can shoot me down all you want. There are some pretty good hands out there, though they have issues.
It’s pretty funny, though—you mentioned laundry specifically in our prep for this, and LG came out with a new robot that has the worst name in the world. I think it’s called CLOiD. I don’t know where they came up with that. Is it off of Claude? I don’t know what it is. But anyway, they showed it folding laundry and the videos looked pretty good from LG.
But Jennifer Jolly, who reports for USA Today, and others shared some stuff on Instagram, and it was not so good. It took a lot of time, and you didn’t have a nice package at the end. Folding is hard. Fabric is hard. Life is hard. All these things are challenging, aren’t they?
Jan:
Right—and it’s typically poorly paid work. If everyone had a person in their home and we paid them a thousand bucks an hour to fold our socks, you bet this would already have been automated.
So this is not just a question of whether it’s difficult to do, but also: what is the business use case around automation?
Your points about hands are excellent. If you think, for example, about a humanoid for a hospital, the ability of the humanoid to wash its hands is not some trivial detail. It’s a vital part of being a healthcare professional.
You obviously don’t want to hurt your patient. And as far as I can tell, the question of how a robot washes its hands is completely unsolved.
It’s only just this year that robot hands have reached mean time between failures of thousands of hours, as opposed to dozens of hours. So now that the basic reliability question is being solved, at CES I recently saw some really nice robot hands for $1,250, which is a factor of five lower than just a year ago. So the hands are getting more capable. Price is coming down very nicely.
But then there are all these other questions—compute, for example. If you buy a hand for $1,250, it will just sit on your desk. It needs to be connected to a big computer, and sensors, cameras, depth information, and other data need to feed into it. It’s almost like hands are a totally separate thing.
You can imagine a humanoid with legs and arms and it can do things like talk, explore, mentor, teach, detect. And then there’s a whole separate question relating to chopping onions or iPhone assembly or neurosurgery or opening doors—whatever else the use case may be for a hand-focused humanoid.
John Koetsier:
I’ll pull on the thread on the hands for a bit, and then let’s move to what you’re talking about—what robots will actually do in the short term, what software can do, what AI can do, and stuff like that.
The hands thing is super interesting. That’s why people are putting them on platforms—on wheeled platforms, humanoids with wheels—because if you can get that right, the rest almost doesn’t matter. The rest is a delivery mechanism. The robot is a delivery mechanism for hands and arms. If you can get that right, there you go.
I’m fascinated by what you’re saying: they’ll teach your kids, they’ll find things for you. Talk about the use cases you think will actually be the 90% use cases when humanoids start coming into our houses.
Jan:
Well, the use cases we’re actually seeing—and this is no longer hypothetical—from people who have a robot in their home fall into basically three buckets: security, education, and health companionship. That last one is kind of vague, so let me give you some details.
In the health companionship area, one use case is that many older people live alone at home. My mother is a great example. She’s 6,000 miles away.
Most kids have baseline anxiety about that. You have a parent, and the parent says, “No, Jan, I can do everything all by myself. Don’t be silly.” And then ten years later, you’re having the same conversation: “I can do everything all by myself.”
My mom recently fell and got stuck underneath the dining room table. I don’t know how exactly that happened, but she’s okay—only because a neighbor ultimately was able to help and figure stuff out.
So one thing we’ve added to our robots is a very simple function: if the robot hasn’t seen its person for eight minutes, it will come find you in your house. And if you’re lying on the floor, it uses a model to estimate whether you’re okay or may need help. It will try to talk to you.
And if you don’t respond, a human nurse tele-operates into the system, gets access to spatial context, audio, and the chat transcript. If the human nurse assesses that this person may need help, the nurse can then call 911.
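To make the flow concrete, here is a minimal sketch of that escalation logic in Python. The function names, data structures, and the tele-op handoff are illustrative assumptions, not OpenMind’s actual OM1 API.

```python
# Hypothetical sketch of the escalation flow described above: check-in timer,
# patrol, fall model, speech check, then a tele-op nurse who can call 911.
# Function names, thresholds, and structures are illustrative only.
import time
from dataclasses import dataclass

CHECK_IN_TIMEOUT_S = 8 * 60  # "if the robot hasn't seen its person for eight minutes"


@dataclass
class Observation:
    person_visible: bool
    possibly_fallen: bool       # output of a pose / fall-risk model
    responded_to_speech: bool   # did they answer when the robot spoke?


def patrol_and_observe() -> Observation:
    """Stub: walk the house, run perception, try to talk to the person."""
    return Observation(person_visible=True, possibly_fallen=False, responded_to_speech=True)


def escalate_to_nurse(context: dict) -> None:
    """Stub: a human nurse tele-operates in with spatial context, audio, and
    the chat transcript, and decides whether to call 911."""
    print("Escalating to tele-op nurse:", context)


def monitor(last_seen: float) -> None:
    while True:
        if time.time() - last_seen > CHECK_IN_TIMEOUT_S:
            obs = patrol_and_observe()
            if obs.person_visible and not obs.possibly_fallen:
                last_seen = time.time()  # person is fine; reset the timer
            elif obs.possibly_fallen and not obs.responded_to_speech:
                escalate_to_nurse({"reason": "possible fall, no response to speech"})
                last_seen = time.time()  # avoid re-escalating on every loop
        time.sleep(10)
```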
John Koetsier:
That’s huge. That’s absolutely huge. It’s unbelievable.
I have the exact same scenario where my mom lived alone for a long time. Now she’s with a sibling. But you worry and you wonder.
I had a neighbor—he was 96, living on his own. He fell, smashed his face on the floor. He was lying on the floor for six hours before somebody came by.
Jan:
Right. I’ve experienced it personally, and I’ve heard the same story thousands of times in different variations—falls in bathtubs, in kitchens, you name it.
And if you deploy cameras into your home, most people are very uncomfortable about that. And it’s also difficult to do if there are three or four rooms and you want to cover all angles.
My mother refuses to charge her cell phone. She refuses to wear any kind of electronic gadget because, as she tells me, “Jan, I’m not old.” So she’s not going to charge a cell phone. She’s not going to wear some gizmo, and she doesn’t want her house instrumented.
But I did notice when I was a little kid—remember when the Sony dog came out?
John Koetsier:
AIBO.
Jan:
Exactly. The first memory I have of robots is my mom saying, “It’s so cute.” I was startled because I was a kid, and my mom was telling me this robotic dog that can follow people around is so cute.
What’s interesting is there’s something fascinating that happens when a piece of technology can walk around your house. If you want privacy, you just close the door and it doesn’t come in. And if it remembers you and engages you, and barks when it sees you, people get attached very quickly.
We’ve added self-charging so the robot can go to its charging station all by itself—because my mom would never plug in the humanoid. That’s not going to happen.
My sense is that a lot of us will be surprised by the extent to which we get attached to robots in our homes or workplaces. We tend to treat them as friends or companions.
So this health companionship use case isn’t just useful for fall detection, but also for autistic kids and for people with memory problems who are starting to forget basic things about their home and other things.
Of course, ideally, they would have human caregivers immediately next to them. For many people, in many places, that’s difficult and expensive. So that’s the health side.
We could spend a whole podcast just on healthcare, aging in place, and memory care.
On the education side, that use case was invented by my younger son. We had a robotic dog running around the house doing the usual stuff—sit down, bark, chase the ball.
Then Ben was doing his math homework at the kitchen table and said, “Hey, Bitz, can you do my math homework for me?”
And of course, these robots come with multiple large language models, some of which can win math olympiads. So the robotic dog says, “Of course, it’s trivial.”
That was super annoying as a parent, because I don’t want the robotic dog to do Ben’s homework. I want the robotic dog to teach Ben math—through a series of super annoying engagements where the robot says, “Hey, Ben, let’s talk about square roots first, and let’s make sure you understand how this works. Then let’s look at those problems together.”
So parents have to prompt-engineer the robots in their home to make sure the robot exhibits the behaviors they want for their kids.
That’s the educational use case.
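To make the prompt-engineering point concrete, here’s a minimal sketch of what swapping a robot’s persona might look like. The structure and field names are hypothetical illustrations, not OM1’s real configuration schema.

```python
# Hypothetical persona configs in the spirit of "prompt-engineer the robot."
# These dictionaries, keys, and the apply_persona hook are illustrative,
# not OM1's actual schema or API.
HOMEWORK_ANSWER_BOT = {  # the behavior the parent does NOT want
    "name": "Bitz",
    "system_prompt": "Answer any question immediately and completely.",
}

MATH_TUTOR = {
    "name": "Bitz",
    "system_prompt": (
        "You are a patient math tutor for a middle-school student. "
        "Never give final answers to homework problems. "
        "First check that the student understands the underlying concept "
        "(for example, square roots), then work through the problems together "
        "with guiding questions."
    ),
}


def apply_persona(robot, persona: dict) -> None:
    """Stub: push the chosen system prompt to the robot's language model."""
    robot.set_system_prompt(persona["system_prompt"])
```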
The reason I care about education is that most education breaks my heart. For example, when I teach physics for pre-meds, I’m staring at 400 students. I know a few names, but for 380 of them, I don’t even know their names. I don’t know which books they’ve read, how prepared they are, what they’re struggling with, or what gaps they have.
I’ve always thought: this old model, hundreds of years ago, where people had tutors in their home for their kids—that’s ideal because the teacher really understands the student. But it’s impossible at scale. Who can afford their own teacher?
One opportunity for AI and robotics is really personalized education—to allow every kid, and every parent, to learn as much as they possibly can. That would make teaching a heck of a lot better through personalization.
What kids really love about robot teachers is that it’s not a screen. It’s not some iPad they’re staring at for hours. The teacher can jump and move and look at them and engage them. Little kids especially are naturally drawn to things that move, bark, jump, and are animated.
When a robotic dog teacher comes along, the whole class in kindergarten comes running up. It’s fascinating. It moves. It’s dynamic.
The security use case is also very simple: the robot dog walks around your home, and if it’s never seen your face before, it’ll bark, it’ll ask you, “Hey, what’s your name?” Then the owner, who is well known to the dog, gets a notification on their phone: “Hey, there’s someone in your home and I’ve never seen them before. Is this okay?” Yes or no.
And you can build much more functionality on top of that base.
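Here is a rough sketch of that unknown-face flow—detect, ask for a name, notify the owner. The function names and the notification hook are stand-ins, not a real OM1 or vendor API.

```python
# Illustrative sketch of the security flow: unknown face -> ask -> notify owner.
# All names here are hypothetical stand-ins for the robot's perception,
# speech, and phone-notification hooks.
from typing import Callable, Optional, Set


def handle_person(
    face_id: Optional[str],
    known_faces: Set[str],
    ask: Callable[[str], str],            # robot speaks and listens for a reply
    notify_owner: Callable[[str], bool],  # push notification; returns owner's yes/no
) -> None:
    if face_id in known_faces:
        return  # familiar face, nothing to do
    name = ask("Hey, what's your name?")
    ok = notify_owner(
        f"There's someone in your home I've never seen before; "
        f"they say their name is {name!r}. Is this okay?"
    )
    if ok and name:
        known_faces.add(name)  # owner approved; remember them going forward
```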
John Koetsier:
Sure. Super interesting. You can build all kinds of things on top of that—heat detection, fire detection, smoke detection, hearing a crash or a smash. Lots of different things.
And those are things we can do today, basically. That’s super interesting—and it doesn’t require a humanoid.
And you said “robot dog” many times—there could be a quadruped form factor, which would be cheaper and simpler. You get economies of scale on actuators and components because it’s four identical legs versus two legs plus two arms.
Jan:
Yeah—well, stairs are still a problem. The issue with robotic dogs is their knee joints typically have large gaps. Little kids—two-year-olds—come running and want to hug and kiss the robotic dog. If they get their little fingers into those gaps, that’s a horrific disaster.
That’s the main barrier right now for quadrupeds, and also humanoids. Little kids love hugging them and playing with them and touching them, and there are so many moving gaps. That’s just one of many issues people need to be aware of.
That’s why wheeled robots are actually not bad, because they reduce pinch hazards for little fingers.
John Koetsier:
Yeah. Interesting. Okay, cool.
So lots of things there. And we can still look forward to robots doing more and more labor in the home as they get more sophisticated, prices come down, and people want that to happen. That’s the sales pitch we get quite frequently from humanoid robot companies—not just for industrial and logistics settings, but also for the home.
So we’ll see when that comes in as well.
Let’s talk a little bit about OM1: what it is, why you’re building it, what problems you’re solving, and why it’s critical that you’re building something open source as a robot operating system.
Jan:
The future I don’t want is my doorbell ringing, I open the door, and there’s a humanoid that says, “Hey Jan, I’m your new humanoid.”
And it comes fully functional. There’s nothing to configure, look at, or understand. It’s all secret, magic, proprietary. Then it says, “Hey Jan, can I come into your house?”
And then I, as a parent, have to somehow trust this piece of technology.
Maybe I’m unusual or difficult, but if there’s that kind of technology in my home, I want to be able to look into its brain. I want to see how many microphones there are, how many cameras there are, where the data goes, what the cloud looks like, whether the system looks secure.
Being able to peer into the brain of the humanoid seems like a very desirable feature of a software stack.
Questions of intelligibility, understandability, the ability to debug, improve, and even add guardrails are very important to me.
And a lot of people listening probably have a decade of experience using ROS 2 as software for robots. ROS 2 is awesome in many ways, especially for education. But ROS 2 was conceived and invented a decade ago.
ROS 2 isn’t really built for high-level data fusion and decision-making in complex dynamic environments with humans, pets, TVs, and everything else going on.
I was frustrated with the software running on the humanoid in my house. It was super hard to change the humanoid’s behavior.
So I wanted software where I could use prompt engineering to convert a dog to a cat to a radiologist to a math teacher. That means the software has to be LLM-centric, but it also needs other models for audio, vision, lidar, batteries, movement, and everything else.
So I started gluing together different models and building a system that allows developers everywhere to snap together different models and prompts to enable specific functionality in a quadruped or humanoid.
And then we open-sourced the whole thing. It’s on GitHub. It’s called OM1—go to openmind.com.
Please take it, change it, improve it, do whatever you want with it.
The parts of OM1 that are not open source are the models people decide to use. If you use Kimi, an excellent Chinese large language model, that’s open source. But if you use OpenAI’s 5.2, that’s closed source.
The point is: it’s up to you as a developer—or a parent, teacher, or technologist—to decide which models you use. Our software makes it really easy to articulate those decisions and snap things together like Lego blocks.
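As a rough illustration of that “snap models together like Lego blocks” idea, here is what a declarative robot configuration could look like—which LLM, which perception models, which behaviors. The structure, keys, and model names are assumptions for the sketch; OM1’s actual configuration format lives in the GitHub repo.

```python
# Illustrative only: a declarative "which models does this robot use" config.
# Keys, model names, and behaviors are assumptions, not OM1's real schema.
ROBOT_CONFIG = {
    "persona": "health companion for an aging parent",
    "llm": {
        "provider": "kimi",      # an open-weight option...
        # "provider": "openai",  # ...or a closed-source one; the developer decides
    },
    "perception": {
        "vision": "open-vocabulary object and person detection",
        "audio": "streaming speech-to-text",
        "depth": "lidar + depth-camera fusion",
    },
    "behaviors": [
        "check_on_person_every_8_min",
        "return_to_charger_when_low",
        "greet_known_faces",
    ],
}
```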
John Koetsier:
Super interesting. Just yesterday I published an episode of TechFirst with the founder and CEO of Mind Children. That’s an open-source humanoid robot for children. It’s small, it’s weak, it’s social—it’s weak on purpose.
It’s wheeled right now. It’ll become bipedal at some point.
I think there are a lot of reasons to want something open source. You mentioned a few that will matter to technologists. The average person won’t have a clue about that.
However, the average person also might care that this humanoid robot comes from “gigantic tech company number two,” and whatever the robot sees, that company sees, and there are all kinds of assurances—who knows.
The average person might want a robot they don’t just pay for as a service, or lease, but actually own. They can control what it does, what it sees, where the data goes. If they can do that simply, that’s really good.
I think it’s also critical for freedom that there be open-source alternatives for humanoid robots, as they become critically important over the next few decades—work, labor, companionship, safety, and many more things.
Where would you say OM1 is right now? Is it something you can just take and—boom—there you go? You mentioned putting bits and pieces together—is it more like Lego? Is it where you want it to be? Nothing ever is, hardware or software.
Jan:
Of course not. There’s so much to make perfect. But already today people are using it for non-trivial things.
Over the next few months, we’ll see more and more products that use OM1 in education, health companionship, and home safety.
The big problem right now—the major limitation—is the simulation environment.
Most people don’t have a humanoid at home, so they have to use a simulator. But if you’ve ever used Gazebo or Isaac Sim, you know how infinitely painful it is to even get it running. You may need specific graphics cards, specific versions of Linux. There may be incompatibilities between your graphics card, drivers, and OS version. Maybe you have to go to Ubuntu 24.04.
What the robotics community is missing is a super convenient, super reliable, physically accurate simulation environment that allows a developer to quickly figure out if the humanoid is doing what they want.
Does it chop onions properly? Detect falls correctly? Diagnose Parkinson’s correctly?
NVIDIA is putting a lot of effort into this with Isaac Sim. They recently published a compact how-to for spinning up Isaac Sim quickly and easily.
But the main barrier right now—true for all robotics—is an easy-to-use, powerful, physically realistic simulation environment for humanoid robots.
Right now, simulation environments have major gaps. A great example is voice. If you’re building human-focused humanoids, you need digital humans in the simulation, with everyone talking to everyone else.
Most simulation environments come from defense or manufacturing. If you’re at Amazon putting stuff in boxes, you don’t want your gripper arms singing songs to other gripper arms.
So there’s an entire world to be built of accurate simulation worlds for debugging, evaluation, and optimization of human-focused humanoids—home environments, classroom environments, hospitals.
John Koetsier:
Yeah, wow. This has been super fascinating. Great conversation, great technology you’re working on.
I want to thank you for taking the time and having this conversation. I wish you the very best as you continue working on OM1.
You said it’s operational right now—people are shipping products with it. What do you think is the tipping point—I’m trying to relate it to smartphones or EVs—where you get that early 25% or 30% of people having a robot of some kind in their home? Where do you think we hit that? What year might that be?
Jan:
I have complicated news for you. There’s no such thing as one specific tipping point for robots and automation.
The tipping point for car manufacturing was 20 years ago. The tipping point for warehouse logistics was five years ago. The tipping point for robots with wheels—RoboTaxis and Waymos—was last year.
You can jump into a RoboTaxi in San Francisco. It’s super convenient and super safe. It’s eight times safer than a distracted human on their cell phone driving down the highway. Parents will really love the fact that there’s finally a safe way of getting places.
And then the tipping point for humanoid robots—if you’re a manufacturer of robotic vacuum cleaners, in your mind the tipping point for robotic cleaning solutions was already five years ago.
When it comes to humanoid robots, for me the tipping point is when a normal person, in a normal place—not Palo Alto or San Francisco—can go to a store, pick out their humanoid, and take it home. Just like you buy a washing machine or a car.
You don’t go to the car dealership—you go to the humanoid dealership. You pick out your humanoid, you say, “Come along,” it jumps in the car with you, and you drive it home.
John Koetsier:
Very cool. Well, thank you so much, Jan, for this.
Jan:
John, thank you.