Check out some of the most impressive robot hands I’ve ever seen …

robot hands

Let’s be honest: most robot hands suck. They’re often just claws, or grippers, or stiff plastic mannequin hands that are all show and no go. But great humanoid robots — and many other kinds of robots — need great hands to work like we do, to help around our homes, and to work effectively in our factories and warehouses.

The problem?

“There are literally zero robot hands deployed right now doing routine work,” one of my guests just told me. And: “the best hands are hundreds of thousands of dollars, and they break all the time …”

Not impressive. Not great for a good, useful robot assistant, or worker. So in this episode of TechFirst, I talk with Kyber Labs co-founders Tyler Habowski (ex-SpaceX) and Yonatan Robbins about why dexterity — not AI — is the true bottleneck in robotics.


This episode of TechFirst is sponsored by KindBody Fitness: AI-powered fitness for all the health and none of the gym bro nonsense. Check out KindBody Fitness today.


Watch our conversation here. Before the interview, you’ll see an exclusive demo of their next-generation robotic hand in action, showing just how far manipulation technology has come.

We dig into:

  • Why humans rely on force, not precision, to manipulate objects
  • The surprising flaw in most robotic hands today
  • How Kyber’s “torque-transparent” design works without expensive sensors
  • Why hardware, not software, is still the limiting factor
  • A practical path to real-world automation (without sci-fi hype)

This isn’t about futuristic humanoids doing everything. It’s about solving real problems today—from lab automation to manufacturing—by building hands that actually work.

Transcript: Some of the most impressive robot hands I’ve ever seen

John Koetsier: Welcome to a sneak peek at one of the most impressive hands I’ve ever seen for humanoid robots. Hello and welcome to TechFirst. My name is John Koetsier.

We all know building great hands is probably one of the greatest challenges in robotics. Geordie Rose, who was the founder of Sanctuary AI, a humanoid robotics company in Vancouver, Canada, once told me that half the challenge of humanoid robots is in the hands.

Today we’re chatting with Kyber Labs co-founders Tyler Habowski and Yonatan Robbins. Tyler brings a background from SpaceX and early-stage robotics manufacturing. Yonatan comes from industrial design, mechanical engineering, and medical devices, and together they’ve founded Kyber Labs, which has just released a very impressive hand.

We’re going to see it in just a moment, then we’re going to chat with them. We’re going to talk about why force control matters more than precision, why hands are really one of the biggest bottlenecks in robotics, and why the smartest path to general-purpose robots might be a lot more practical than the hype suggests.

Most importantly, we’re going to get that sneak peek at their brand-new hands, which are freaking amazing. I’m going to share a four-minute video they shared with me showing their hands in operation. It’s super impressive.

This episode is brought to you by Kind Body Fitness, a million workouts in the palm of your hand. Check it out at Kind Body Fitness.

Okay, here we go.

I talked to Geordie Rose, the former CEO of Sanctuary AI, which is building a humanoid robot. This was probably a year and a half ago, and he said half the complexity of a robot, half the challenge of a robot, is in the hands. Do you guys agree?

Tyler Habowski: I would put all of the complexity in the hands, split evenly. One half is in the right hand, and the other half is in the left hand.

Yonatan Robbins: So we’re focusing on the manipulation layer, on how to actually move things around. We think the value comes from the hands and the way they move. The rest doesn’t really matter, as long as you have six degrees of freedom to move the hand around to do whatever it needs to do. It can start stationary; later it can be on wheels, conveyor belts, or legs. It doesn’t matter, as long as it’s operating at human height.

John Koetsier: We could have that conversation on a different podcast, but hands are really complicated, right? I mean, we look at our human hands, so many degrees of freedom, turn, twist, bend up, down, fingers out, and all that stuff. And guess what? They renew themselves when they break down. Robotic hands, not so much.

Tyler Habowski: Yeah, it’s true. It’s fascinating, though. We do so many precision tasks, but I want to point out a really interesting sort of thing that you may not have thought about. We’re actually not very precise with our hands. If I tell you to put your fingers 23.4 millimeters apart, you’re like, actually, I have no idea.

But if I tell you to apply just enough force to pick up a potato chip and not break it and manipulate it, you can do that all day long. And so it points to an interesting fact that actually humans are very force-driven. We control and think about forces much more than we think about position.

We may have no idea how many millimeters apart our fingers are, but we can know just enough force to apply it delicately and not even crush a blueberry. So it’s a really critical difference in how robots traditionally have been designed to interact with the world and how humans interact with the world. And I feel like this is really missing in the conversation of hands.

Especially up until very recently, people were mimicking the kinematics of hands, like here’s the degree of freedom, here’s how it moves, like you can move like this, but not necessarily the actuation modality. So we really sought to bridge the gap to build a kinematic hand, but also an actuated hand that mimics that sort of interaction capability. So it’s a different perspective.

That is, a lot of the hands you see out there are perfectly rigid; even when they’re powered off, they’re totally rigid. That’s very different from us. If your hand is limp, it can just flop around, and that compliance is really critical to how we interact with the world. You don’t think about every single finger joint angle as you turn a doorknob or something. You just kind of grab it and let go.

John Koetsier: Yeah.

Tyler Habowski: Whereas a lot of robots have very rigid joints and have to think about it.

John Koetsier: That’s really fascinating because we have an innate sense of where our limbs or fingers are, right? That’s kind of it. And there’s also this force feedback as they touch and reach something, right? So those two factors help us be, maybe we’re not super precise about where we go, but when we get there, we’re pretty good at grasping what we want.

Yonatan Robbins: I’ll give you another example. If I’m trying to hold this and I’m 50% off on the location, it’ll probably fall. But if I’m 50% off on the force, I’m still holding it, and I can feel that I need to apply more force before I lose it.

John Koetsier: So that speaks to sensors in hands, I guess.

Tyler Habowski: Well, it speaks to a certain kind of modality, right? So the way that we were able to kind of, we took a very different approach to this. We were actually looking at, at a high level, how do you achieve that kind of performance?

It’s really challenging. You have 20 degrees of freedom in the hand that we’re making, and it’s really hard to pack in 20 different force sensors to get all the torques in all places of the hand. Tactile sensors are expensive, too. We do have tactile sensors, but we want that sort of whole-hand perception of force.

So how we did it is actually pretty different. We actually have a really torque-transparent design for the fingers. So mechanically they can just move back and forth. And so we know that if there’s no current in the motors that are driving the hand, then we know that there’s no force being applied externally. And so we can basically forward-calculate the force instead of back-calculating it from expensive sensors.

And so that’s how we’re able to do things like put a feather in the path of a finger, and without a tactile sensor, without force-torque sensors or anything like that, we can detect the feather just by measuring the impedance in the motor: how it’s moving versus how we expected it to move.

And so we have a video of that online, but it’s really cool to feel when the hand conforms to your hand really gently. It’s a very different feeling than a very rigid robot arm. It feels like a soft robot touching your hand very gently. This is a very different way of thinking about the problem. We didn’t need a sensor. We just forward-calculated explicitly.

Now, this doesn’t give you tenth-of-a-gram precision, but it gets you a really good relative measure of how things are moving, and it’s kind of the critical signal, enough to be able to do something useful.
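The forward calculation Tyler describes can be sketched roughly as follows. This is a hypothetical illustration, not Kyber's controller: the torque constant, friction value, and threshold are made-up numbers standing in for a real calibration.

```python
# Hypothetical sketch of contact detection from motor current alone.
# KT (torque constant) and FRICTION are illustrative, invented values.

KT = 0.05        # N·m per amp, motor torque constant (illustrative)
FRICTION = 0.002 # N·m, expected friction torque during free motion (illustrative)

def external_torque(current_amps: float, accel_torque: float) -> float:
    """Forward-calculate the torque applied by the environment.

    With a torque-transparent (gearless) joint, any motor torque not
    explained by the joint's own dynamics must be an external load.
    """
    motor_torque = KT * current_amps
    return motor_torque - accel_torque - FRICTION

def in_contact(current_amps: float, accel_torque: float,
               threshold: float = 0.001) -> bool:
    """Flag contact when the unexplained torque residual exceeds a small
    threshold -- enough, in principle, to notice a very light obstacle."""
    return abs(external_torque(current_amps, accel_torque)) > threshold
```

The point of the sketch is the direction of inference: instead of back-calculating force from dedicated sensors, the unexplained residual in the motor current is itself the force signal.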

John Koetsier: Is that related to back-drivability?

Tyler Habowski: Yeah, so there are two different things, and I want to be really clear about this. We optimize for kind of three things, which were back-drivability, torque transparency, and cost.

Back-drivability and torque transparency are two different things. So there are some hands that have a tendon on one side and then a spring on the other side. And so they’re back-drivable, kind of like you can move the finger a little bit, but they don’t have any torque transparency.

And so we wanted to do both. For torque transparency, it’s a bit of a different thing. A lot of hands use really rigid servos, and so they have a small motor with a 300-to-1 gearbox ratio, and you can’t feel anything on the other side of a 300-to-1 gearbox.

And so what we want to do instead is use no gearbox. There are no gears in this hand, so you can feel all of those motions directly in the motors themselves.
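To see why the gear ratio matters for feel, a quick back-of-the-envelope calculation (the fingertip torque here is an invented example value):

```python
def torque_felt_at_motor(external_torque_nm: float, gear_ratio: float) -> float:
    """Torque an external push reflects back to the motor shaft.

    Through an N-to-1 gearbox, output-side torque is divided by N
    (ignoring gear friction, which makes real sensing even harder).
    """
    return external_torque_nm / gear_ratio

# An illustrative 0.3 N·m push on a fingertip:
# - through a 300:1 gearbox: 0.001 N·m at the motor, easily lost in friction
# - direct drive (ratio 1):  the full 0.3 N·m, visible in the motor current
```

This is the arithmetic behind "you can't feel anything on the other side of a 300-to-1 gearbox": the external signal shrinks by the ratio while the motor's own friction does not.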

Yonatan Robbins: Even with a six-to-one gearbox you can still feel some things, but a gearbox is also going to break sooner.

John Koetsier: What’s harder when you’re talking hands, manipulation, grasping, picking up, and operating in the real world? Is it the hardware or is it the AI?

Tyler Habowski: Yeah, so I think this is a fascinating question, and I think the answer is it’s hardware first and then it’s a software problem. So the answer is both. The answer is both, but it’s important to recognize the staging.

When I first wanted to build this company, I wanted to build the software. I was like, okay, great, let’s build some cool software. I’m going to get some software engineers together and we’re going to do this. Then I saw that the best hands are hundreds of thousands of dollars, and they break all the time. So I was like, okay, that’s not going to work; we’re going to need to build our own hardware.

And so I view it as a hardware-first problem. And we’re still, I think, in the hardware phase, like transitioning into software. And so that’s how we view the problem.

Also, I feel like there’s a lot of variability in what you can do. You can do a lot with a hand that’s not so intelligent. And this is a lot of these cases that we found in the beginning. You can do a lot of interesting things that do not require this super long time-horizon, pick-up-your-kids’-toys-on-the-playground kind of general-purpose mentality. You can do a lot with more rigid things.

John Koetsier: I can totally see that there’s a virtuous cycle there, right? I mean, the better the hand you have, the more you can apply intelligence to what you’re doing. We’ve seen a major consumer company, I think it was LG, I could be wrong, release Cloi, the worst name possible, at CES. It was a robot for the home, and they were showing it folding clothes, and it had pretty much claws, right? Like lobster claws. So it can fold clothes, but not quickly.

Tyler Habowski: Yeah.

John Koetsier: And not well, right? Yeah, yeah. And so if you have a better hand, you can do more with intelligence as well, correct?

Tyler Habowski: Yeah. The fun thing about folding clothes is that it’s a great first problem because you can’t break anything. Nobody’s going to complain if you drop a shirt on the floor. It’s not like dishes, where if you break one in ten plates, you’re going to be out of dishes in ten dishwasher cycles.

So clothes are a great first example, and I’ve talked to many companies who are working on this problem, specifically companies that release products that are like, you know, they will just sit there and fold their clothes. It does take longer, but in the home you have the nice thing of, okay, I can do it while you’re gone or while you’re doing other stuff. And so speed is not as critical as it is in industries. So that’s kind of the one tradeoff you do get.

But yeah, it’s a pretty challenging thing. For clothes specifically, for deformable bodies, it’s actually a deceptively viable problem, but I’m not surprised that it’s hard.

John Koetsier: I find it hard. I suck at folding clothes. I don’t expect a robot to be awesome at it.

Tyler Habowski: That’s true. I actually think it’s one of the easier tasks for robots in the beginning because the training is really available. You can’t ever break anything, so the workspace is really free.

And it’s actually a pretty algorithmic thing. You need to do certain folds for different kinds of sizes. You can kind of bend a lot of the different stuff and actually do it. So it’s actually deceptive. It’s a really impressive outcome, but it’s one of the more viable things in the game. Not saying it’s easy by any means, but it is one of the more viable things, which is why we’ve seen more success in it.

Yonatan Robbins: So we have all of the force sensors in the motors, or rather, they are the motors themselves. So it’s much more durable. And as they say, the best part is no part; it’s much more cost-effective as well, because we have no force-sensing parts in the fingers. Everything comes through the motors themselves.

John Koetsier: That’s a very SpaceX thing, right? The best part is no part. That sounds like one of many SpaceX quotes for you.

Wow, that’s pretty cool. You could rattle them off all day, exactly. So I was going to ask: how much of manipulation is touch versus vision? You’ve kind of answered some of that already, because you’re feeling in the motors what’s going on; you’re feeling when you’re touching something. But obviously vision has to play a role to some degree as well.

Tyler Habowski: Yeah. So vision helps you localize objects. It doesn’t give you the force feedback. It doesn’t give you the sense of whether objects are held well or moving the way you expect them to. But it does help you localize the objects and understand scenes generally.

So I kind of view vision as a preface. It helps you localize: okay, here’s the cap for this. Then we can go in with our fingers and collapse them in toward the center where we expect it to be. And if you’re trying to unthread a cap or something, you can localize it first, and then with the fingers you can measure the actual diameter.

And so they kind of work on a funnel basis, right? There’s vision to get you close, and then as you get closer, it’s fingers and the force feedback, and then as you get really close, there’s tactile feedback to get the full picture.
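The vision-to-force-to-tactile funnel could be sketched as a simple stage selector. The stage names and the 2 cm handoff distance below are invented for illustration; a real system would tune these per task.

```python
# A sketch of the sensing "funnel" described above: coarse vision guides
# the approach, joint-level force feedback takes over near the object,
# and tactile sensing confirms the final grasp.

from enum import Enum, auto

class Stage(Enum):
    VISION = auto()   # far away: localize the object in the camera frame
    FORCE = auto()    # close: feel geometry through the finger motors
    TACTILE = auto()  # in contact: fine-grained fingertip sensing

def sensing_stage(distance_m: float, in_contact: bool) -> Stage:
    """Pick which sensing modality dominates at each point of the approach."""
    if in_contact:
        return Stage.TACTILE
    if distance_m < 0.02:  # within ~2 cm, fingers start feeling (illustrative)
        return Stage.FORCE
    return Stage.VISION
```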

John Koetsier: That’s pretty much how humans work, right? I mean, we reach out. Once we touch something, we don’t have to look at it anymore. We can feel it and know how to manipulate it.

One thing, of course, that vision and good world models do is give you a sense of what this thing will be like to pick up. How heavy will it be? How rigid is it? So if, let’s say, it’s a cardboard box, well, I may not be able to pick it up on the sides. It doesn’t have a lid on right now. It may just bend in, right? I may have to go under. So it depends on your intelligence there as well, I guess.

Tyler Habowski: Yeah. It depends on what properties you set for the object. We’re really good at doing that as humans, just sort of intuitively, without thinking about it. But yeah, you only realize how good you are when you pick up a cardboard box and it’s suddenly really, really heavy, or really light when you expected it to be heavy, and you realize, whoa, I did not do that right.

You realize in that moment how much your brain was thinking and predicting what it would do. It’s kind of funny to see those gaps in our understanding, and it really illustrates how much we actually think about this problem subconsciously.

John Koetsier: Maybe let’s talk about what you’ve built so far and what you’re going to be releasing soon. How capable is it? Give me the specs, give me the details, what you can release right now about what this hand is and what it’ll do.

Yonatan Robbins: We’re not building a Terminator. We’re building human-like hands.

Tyler Habowski: Yeah.

Yonatan Robbins: If we have more force, it means that we can go smaller. It means that we’re applying too much, we’re using too much electricity, we’re using too much space. We want to be human-like, not more than that. We don’t need more than that. We’re going to do tasks that people are doing in the factories, not more than that. We really don’t need it.

John Koetsier: Mm-hmm.

Tyler Habowski: Yeah. And so for this current demo, we’ve put out a few videos. We’ve actually gotten a lot of interest inbound from those videos. It’s gotten some really viral traction for a variety of reasons.

And so we got a lot of interesting people. There’s been like 150 people now on our waitlist that have kind of signed up and have been interested in these things. And it’s a really interesting gold mine of different use cases for hands across different places. And we’ve kind of taken that as inspiration and looked through those things, tried to understand where we can actually add the most value.

Some people want to do things like wire harnessing and really complex manipulations like sewing, and it’s like, we’ll get there. But that’s not a great first use case, I think. And so we’ve kind of been trying to pare down a few of the use cases that we think are really viable.

And so one, for instance, that we’re going to release this demo for is a healthcare company that basically does clinical lab testing. So they do a lot of the blood tests that you do, like the pathology tests. And so these are really rigidly scripted things. They tell you pretty much where you need to go, what tool you need to pick up. You need to pick up a pipette, you need to vortex-mix a source tube, uncap a lid, do tool changes, send a sample, and put the plate into a reader.

And so you don’t need high-level, long-time-horizon planning to do these kinds of problems. You just need to remember the 30 steps in the work instructions. And so this is a really good example. This company has like a thousand employees that do these things. And so it’s a lot of people that sit there at a workbench and do this all day.

And so we basically worked with them to see, okay, here’s their workbench, here’s what they do, here’s what they are able to access during their workspace, and here’s what they kind of do on a daily basis. We kind of recreated that setup in our lab here, and we were like, okay, we’ll see if we can do all of this stuff.

And so we’re going to actually show a demo that we worked to develop with this healthcare company that we’re actually doing almost everything that they do to prep a regular blood test. So it’s a really cool example of, you know, it can memorize or figure out this 30-step procedure or whatever, and actually get it done and do these tests.

And from there you can imagine recreating this sort of setup; once you can do this, you can do any variety of tasks. We’re not using much machine learning. The way we’re approaching this problem is very different, and this is a demonstration of that approach.

So we basically have primitives that are things like pick up the pipette, push the pipette button, go into the source tube, uncap the source tube, mix the source tube, load this into the centrifuge. And so we have a high-level agent that basically can string together these primitives in an intelligent way that can achieve a certain outcome.

And so you can give it a protocol that’s like, okay, here’s the blood tests you want to run. Here are the samples that you have. Here’s the current system state. And it will output a long list of primitives that it can run through and actions that it takes to actually run the test.

And so you can literally prompt it: okay, run this test, or run this test but with this variation, or run these separate tests at once. And so that’s what we’re doing. We’re basically able to demonstrate that we can do that without any VLAs (vision-language-action models) or VLMs (vision-language models) in the path.
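The primitives-plus-planner idea might look something like this in outline. The primitive names echo the ones Tyler mentions, but the data structures and the protocol format are invented; the real planner is described as an agent, not a lookup table like this.

```python
# A minimal sketch of primitive sequencing: named, scripted primitives
# plus a planner that strings them into a runnable action list for a
# given protocol and set of samples.

PRIMITIVES = {
    "pick_up_pipette",
    "push_pipette_button",
    "uncap_source_tube",
    "mix_source_tube",
    "load_centrifuge",
}

# A "protocol" here is just an ordered recipe referencing primitives.
PROTOCOLS = {
    "basic_blood_test": [
        "pick_up_pipette",
        "uncap_source_tube",
        "mix_source_tube",
        "load_centrifuge",
    ],
}

def plan(protocol_name: str, samples: list[str]) -> list[str]:
    """Expand a protocol into one primitive invocation per sample,
    validating that every step is a known primitive."""
    steps = PROTOCOLS[protocol_name]
    actions = []
    for sample in samples:
        for step in steps:
            if step not in PRIMITIVES:
                raise ValueError(f"unknown primitive: {step}")
            actions.append(f"{step}({sample})")
    return actions
```

The design point is the layering: the primitives encapsulate all the hard manipulation, so the layer above them only has to reason about ordering, not about fingers.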

But this is a massive opportunity we can unlock. They literally have doctors requesting more tests than they can hire people to run, so this would really help alleviate a lot of the strain on their testing labs.

John Koetsier: Nice.

Tyler Habowski: And they’re just one of 50 companies.

John Koetsier: What’s interesting about that is that you said, hey, it’s pretty precise. Move this to that point. That’s kind of a traditional industrial robot thing, right? Like I always make a weld in that exact three-coordinate spot, right? And I never have to change. I don’t have to think about it. Is it that precise?

Tyler Habowski: Not quite that much.

John Koetsier: I didn’t think so, right? It might be a slightly different position, otherwise you don’t need something super intelligent.

Tyler Habowski: Yeah. So you don’t need something super intelligent. So we have kind of an object localization system that kind of sees like, okay, here’s the tip box that has all the tips for the pipette. So you can kind of put it in there. You have a target on the pipette that can see where it is, just like a little sticker.

And so we can basically do those kinds of localizations so you can kind of move the things around. But generally it knows what the objects are and how to interact with them. Again, this is just a proof-of-concept demonstration. It’s not fully 100% thought out, but yeah, that’s what we mean when we say it knows.
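A toy version of that sticker-based localization might look like the following. The calibration numbers are invented; a real system would use a proper camera calibration and a fiducial library such as AprilTag or ArUco rather than a flat scale factor.

```python
# Map a detected marker's pixel position to workbench coordinates,
# assuming a fixed overhead camera and a flat bench. All constants
# are illustrative placeholders for a real calibration.

PIXELS_PER_METER = 2000.0   # illustrative camera scale
ORIGIN_PX = (320.0, 240.0)  # image pixel that maps to workbench (0, 0)

def pixel_to_workbench(px: float, py: float) -> tuple[float, float]:
    """Convert a detected sticker's pixel position to meters on the bench."""
    x = (px - ORIGIN_PX[0]) / PIXELS_PER_METER
    y = (py - ORIGIN_PX[1]) / PIXELS_PER_METER
    return (x, y)
```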

John Koetsier: When you say it knows, what is it? Is the it the hand?

Tyler Habowski: So basically the it is the full system. So we have a high-level planner that can string primitives together, but the it is the full system.

So the product we’re building and delivering to them is basically our robot hands on a commercial off-the-shelf robot arm, an off-the-shelf camera system, an off-the-shelf compute unit and controller, and then our software that controls it all and makes it useful.

So you basically give it a plate, like a well plate of samples, and say, okay, run this test on it. Then it will pick the samples up, start uncapping them, put some reagents in.

John Koetsier: Is it stationary? Does it move?

Tyler Habowski: It’s just stationary for now. You know.

John Koetsier: Gotcha. It doesn’t need to move around. That’s not necessarily a bad thing.

Yonatan Robbins: It’s actually a good thing to start stationary, because we’re eliminating a lot of degrees of freedom that we don’t need to pay attention to. So that’s exactly the way to start: with the minimum we actually need to move things around. And we’re starting stationary. Some tasks will be with one hand, some with two.

But we also chose this customer specifically because we can bring value immediately just using our primitives. We can move things around exactly as needed before we have to be especially smart about it.

John Koetsier: That’s super smart. You’re not boiling the ocean. You’re doing one thing that you know you can do well, and you’re the anti-humanoid robot guy, right? So, I mean, it makes sense for you too.

Tyler Habowski: Yeah. Maybe we’ll put their hands on a rail. I don’t know. There’s tons of ways to move it around in a lab that are really trivial and low cost. It’s a different way of approaching the problem. Like you said, we’re not trying to boil the ocean.

The way I say it is, we are not pitching magical general-purpose autonomy or the orb that will control all robots and do anything you could possibly ask it on day one. We’re pitching a path to get there that is much more pragmatic and realistic with what is actually going to be happening. So, delivering value now.

John Koetsier: So.

Yonatan Robbins: And as the market goes ahead, we’ll see what it needs. If the market wants legs, we’ll make legs. If it wants wheels, it will be wheels. But we’re sure we need to start stationary and eliminate the extra degrees of freedom, for cost and for complexity.

John Koetsier: Mm-hmm. Cool. Okay. So even this, even though you’re not aiming at the ultimate science-fiction thing, is a massive step forward from grippers and a lot of the hands we see on shipping humanoid robots, which are kind of like stiff mannequin fingers, right?

Tyler Habowski: Yeah.

John Koetsier: Not super impressive.

Yonatan Robbins: I’ll add one thing. We are aiming there. We’re just more realistic about the path to get there. So we start to deploy with the minimum thing we need. Eventually we’re going to get there, but we’re not promising this on day one.

John Koetsier: Yeah. Yeah, that makes sense. What kind of costs are people looking at if they want to put a super-capable hand on the robot?

Yonatan Robbins: It’s—

Tyler Habowski: A good question. So for us, we’re taking a little bit of a different approach. To be clear, we’re not selling hands directly. We’re basically selling the full systems as vertically integrated stations for people.

And that really comes from a place of, I think we need to help people utilize the value of hands and not just give people hands and hope for the best, because it’s a really hard problem. My view is that we need to centralize and productize that development.

A lot of people want to use OpenAI-style text models, but not everybody should be training their own text models and doing foundation-model research. I view this as the same thing. Computers may be vital to people’s businesses, but not everybody designs their own processor chips. It’s a similar model.

Yonatan Robbins: To give a direct cost-ish answer: you just need to be lower than what you pay a person to do the same job.

John Koetsier: Mm-hmm. Mm-hmm.

Yonatan Robbins: It comes down to that.

John Koetsier: Mm-hmm.

Tyler Habowski: Yeah.

John Koetsier: Let’s say Apptronik comes to you tomorrow or Figure comes to you tomorrow and says, whoa, your hands are freaking awesome. We want them. You do a deal.

Tyler Habowski: We’ve had many of those conversations, actually.

John Koetsier: Really?

Tyler Habowski: This has happened many times, actually. And the answer is no, we haven’t done a deal yet. Hardware is really hard to defend if you’re only doing hardware. As a company, as a business, it’s really hard to make that defensible. What’s to stop them from reverse-engineering it, changing one thing, no matter what we patent, and doing it differently?

I think we did a lot of clever things, but you could find some way around patents and still figure it out.

John Koetsier: Yeah.

Tyler Habowski: And so how do you build defensibility as an American company doing this kind of work? My view is, not the hardware. So we view the hardware as a temporary accelerator to put us in a position.

And like I said in the beginning, when I wanted to start this company, I wanted to do the software. We only had to build the hardware as a means to an end to actually be able to solve this problem. So we want to build more of a moat around the software that controls it, data and the data pipeline, and also the customer distribution network.

We want to be some of the very first robot hands doing actual dexterous manipulation in the world and deployed to do that routinely. And so that’s going to be hugely valuable customer research as well as—

Yonatan Robbins: We also don’t want to work with humanoid companies right now, partly because of the extra degrees of freedom. We want to do the minimum viable thing that we can actually use to bring the value. You don’t need to pay for legs if the robot is sitting next to a desk all day. Why would you need them?

John Koetsier: Mm-hmm.

Yonatan Robbins: And another thing: I’d like to distinguish between pick-and-place, just moving things around, and manipulating things in your hand. Manipulation does need the dexterity, all the degrees of freedom, the compliance, and the torque transparency; that helps a lot. For pick-and-place, there are plenty of hands out there that can do it.

John Koetsier: Mm-hmm. Mm-hmm. Hmm. Okay. Let’s define two things. I want to define sort of a minimum viable robotic hand. That’s hard because it depends on the job. I also want to define what is the ultimate robotic hand, and maybe that’s hard because it’s science fiction, but let’s give both of those a shot.

What do you think is your minimum viable robotic hand?

Tyler Habowski: Yeah, this is a super interesting question. So one interesting thing is that there are literally zero robot hands deployed right now doing routine work. And so when you look at what is actually necessary, it’s very hard to start from scratch. It’s very easy to iterate, like what does the laptop need to do better? It’s like, okay, maybe you make these couple improvements. But when you’re trying to deploy some of the first things, it’s truly a very good question to ask, like what is actually the minimum viable product?

I actually think a lot of people are in research, doing academic stuff, trying to deploy these crazy models, and there’s kind of a spec race for hands, where people say, oh, my tactile sensor can detect a fly landing on it. And I’m like, okay, but do you need that to package boxes at Amazon? Maybe you don’t. I don’t know.

It’s such a hard problem, to be honest, because you can compensate for so many things in so many different ways. I’ve talked to hand surgeons that have worked with people who have lost all the nerve endings in their hand and they have no tactile sense in the hand. And it’s painful in the beginning, but after a while, like six months, they can eventually retrain their brain to rely more on vision, rely more on the proprioception from their muscles and the force feedback, and they can do pretty much any task that a factory worker can do.

And so do you need tactile sensing, strictly speaking? No, you don’t. I kind of view it as similar to how autonomous cars developed. The first autonomous cars had crazy lidar. Everything was mapped to the millimeter. Everything was perfect. But in reality, a one-eyed deaf person can get a driver’s license. So you don’t need a ton of sensing. You don’t need tons of hardware to actually make that happen. You can compensate for that with intelligence.

So if you ask what are the minimum viable sensors, it’s super hard to know. Maybe you can compensate with intelligence, or maybe you can have lower intelligence and compensate with sensors and do that in the beginning. That is one of the hardest questions, I think, that we have to answer. What is the minimum viable product?

But the one tricky part is that parallel-jaw grippers, in my opinion, have not been so widely adopted and proliferated yet because they solve, as people often say, 90% of problems. But if you’re doing an assembly task where at some point you have to thread a nut, what do you do? Do you just pass it off to a human who threads the nut and then waits for the robot to do the rest of it?

Or for this clinical lab, if they have to use a pipette, what do you do? Okay, you prep the samples, but then a person still has to walk around and do all the rest. So you truly do need to handle about 100% of the workflow that people are doing right now. Otherwise, it’s not really worth it. You’re not taking the people out of the loop and automating it; you’re just making things a little bit easier for the person.

And so the minimum viable product still has to be very general and pretty good.

John Koetsier: Interesting. So from that perspective—

Yonatan Robbins: For your question before about the ultimate robot hand: our hands developed over a couple of million years, mainly for survival, to be able to do certain things. And then we designed everything in manufacturing around these capabilities, around these hands.

Eventually, a robotic hand will probably be most effective or most efficient with, and this is just my guess, two fingers and two thumbs. But to start with, because we’re integrating with machines and tools that were designed for this form factor, that’s one reason to start with human-like hands.

Another reason is to collect the data. We’re collecting data from people, and they’re using five fingers. So it’s much easier to collect the data with the same form factor as people.

John Koetsier: Yes, exactly. Get somebody using some gloves or something like that, right? Or on camera.

Tyler Habowski: Yeah, we do have a lot to say about that, but it’s a long private story.

John Koetsier: Yeah. Cool. What breakthroughs do we need to make these awesome, incredible, end-stage robotic hands, whether they’re two fingers and two thumbs or three fingers and two thumbs? What do we need? Do we need better motors? Probably. Do we need better sensors? Maybe. What do we need?

Yonatan Robbins: The main thing, from my point of view, is to deploy something that is viable enough to start doing actual tasks and get the feedback and understand what’s happening and change what needs to be changed.

Tyler Habowski: Yeah, I would actually agree, and that’s our perspective. We need to deploy and iterate, because we don’t know what we need until we actually test. This is very much a SpaceX mentality.

And if you ask what is perfect: NASA spent a very long time designing the perfect rocket from requirements from everywhere, and they made the Space Shuttle. Okay, it’s a great vehicle, but they had almost no iteration time. That was pretty much it. They built it.

Whereas SpaceX, we were like, oh yeah, we’re going to build this. Literally when I was there through the first 50 launches, no two of those rockets were the same. We iterated every single time. And it was always chaos to incorporate things because they were literally never the same. But that’s how you learn, right? That’s how you actually—

John Koetsier: I think that’s an amazing point. I don’t know if you guys have heard the story about the art teacher’s pottery class, where he asked students to make the perfect pot. Have you heard that story?

Tyler Habowski: No, I haven’t heard this. I’m very intrigued.

John Koetsier: So some students spent all their time, month-long class, trying to make that one perfect pot, and some students just went ham and made 50, 70 of them or something like that. You know the story. You know the moral of the story already, right?

The ones who made more understood it, got better at it, made better pots, and got there really, really quickly. So I think that’s a great point of view. Get something out there. See what breaks, see what sucks. Yeah. See what’s hard. Iterate, add, change, adapt. That’s machine evolution.

Tyler Habowski: Yeah, that’s very well said. And I think the thing that’s missing, really, is we need to deploy it and iterate on it to actually figure out what the requirements are.

I think perhaps the answer is that to build the ultimate hand, we need to figure out what it actually looks like before we can build it, because I bet we have the technology now. Even given a thesis, it just takes a long time to build that product, right? Our thesis was about back-drivability, torque transparency, and low cost.

John Koetsier: Mm-hmm. Mm-hmm.

Yonatan Robbins: And to be robust enough and at a good cost as well.

John Koetsier: Exactly. Robust enough, right? I mean, your hand is no good if it’s perfect but breaks after a month every time and you have to replace it.

Yonatan Robbins: All the time, or if it stops working.

John Koetsier: Yeah. Yeah. It’s—

Yonatan Robbins: It’s very important to get to a good combination of robustness and low cost.

Tyler Habowski: And to address those last two points, one of the things we’re doing is making the hands hot-swappable for the deployments. So if anything goes wrong with a hand, it’s six bolts and a power connector, and then you swap in a new one. And we’re just going to keep some hands on hand as backups.

And so there shouldn’t be any downtime even if things break. And that’s how we’ll learn. They’ll send it back to us, we’ll see what went wrong, and we’ll fix it. Yeah, it’ll be more work on our end, and we’ll probably lose some money on service contracts, but we’ll get better and better, and that’s exactly how we’ll iterate.

Yonatan Robbins: In the hardware and the software as well. Things will break.

John Koetsier: Yeah. Yeah. Cool.

Well, thanks so much, guys, for taking this time. I really appreciate it. Cool stuff you’re building. I’d love to see the videos as you release them and share them along with some of the talking that you guys are doing, and look forward to that.

Tyler Habowski: Yeah. Thanks so much, John. It was a pleasure to be here.

John Koetsier: You’re saying that things will break, but you’re still unbreakable.

Yonatan Robbins: It’ll never break, but in case it breaks, we have a plan.

John Koetsier: Love it.


Don’t miss an episode … or the insights:

Subscribe to my Substack