Want to see 100 humanoid robots learning how to work?
I visited RF1: Robot Farm 1 in Tutor Intelligence's repurposed old factory in Boston, Massachusetts. Now you can too: here's a walk-through, including chats with Tutor CEO Josh Gruenstein.
The big question: what does it actually look like when robots learn?
- Get the deepest insights concisely on the TechFirst Substack newsletter
- Subscribe to the TechFirst YouTube channel to never miss an episode
- Or listen on Spotify or Apple Podcasts
This episode of TechFirst is sponsored by Apprentice
Did you think AI was only for digital work? Nope … AI-native manufacturing is here. This month's sponsor is Apprentice, which offers the first AI Agent built from the ground up for agentic manufacturing. It connects to all your systems, monitors everything, and automates your processes … but keeps a human in the loop. Check it out at apprentice.io.
Watch our conversation here:
Transcript: Watch 100 humanoid robots learning how to work
John Koetsier:
I’m in a robot farm right now in Boston, Massachusetts, and there are about a hundred humanoids here. They’re called Sonny. This is a company called Tutor Intelligence, and they’ve got a hundred robots here that are all learning. They’re all trying stuff. They’re trying, they’re failing.
Some of these have only been here a few days. None have been here more than two months, and they're generally doing things that may or may not work: picking, packing, sorting, and all the other things robots need to do in logistics, fulfillment, and warehousing. The team is looking at all that data.
That was successful. That was not successful. This isn't working. This is working. Once all that data has been labeled, the robots will know what they need to do, and they'll be trained to do their jobs very, very well.
What’s the thinking behind the arms here? You’ve got sort of a shoulder, an elbow, a wrist, and a hand. What’s the thinking in terms of how many degrees of freedom and what you need and what you want it to build in?
Josh Gruenstein:
If you do the math, six is the minimal number of degrees of freedom you need in order to move to any position and orientation in 3D space.
So that's: you have your X, you have your Y, you have your Z, and then you have roll, pitch, and yaw, right? And that map is pretty beautiful: six numbers to six numbers, basically. So six is the minimal number of joints that you need in order to reach any pose in 3D space.
And what’s most unique about these arms is that they’re industrial grade. This is the same hardware that we’ve been deploying into the field in live production environments at kind of a superhuman productivity standard, at really high uptime with all of our customers. So we are building on the same hardware and software architecture.
And trying to basically minimize complexity as much as is humanly possible.
John Koetsier:
Cool. They have five eyes?
Josh Gruenstein:
They have four eyes. Four: one, two, three, four.
John Koetsier:
What's the main one?
Josh Gruenstein:
I don't think they really have a main one.
John Koetsier:
That’s great though. I love it.
I think it's a C minus right now.
Josh Gruenstein:
Yeah, but you know, you’ve got to start somewhere.
The learning journey for these robots is multi-step. We start with initial task demonstration. We do rollouts so we can get human supervision, human reward feedback—reinforcement learning from human feedback on these robots in order to have them learn from their experience and their mistakes, and also their success.
And then only through that whole process can we build robots that are a little bit smarter, just a little bit. And the hope is we can repeat that cycle over and over and over again while doing better science to improve the control of our robots, their ability to understand the world, and their ability to act in it.
And we can build robots that are closer and closer to human capabilities. Good news: this robot is already better at folding clothes than I am.
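The cycle Josh describes — demonstrate, roll out, collect human success/failure labels, retrain, repeat — can be sketched as a toy loop. Everything here is hypothetical (`ToyPolicy`, `learning_cycle`, the scalar "grasp offset" task), not Tutor's actual training stack; `label_fn` stands in for the human supervisor marking each attempt.

```python
import random

class ToyPolicy:
    """Toy stand-in for a robot policy: it guesses a grasp offset and
    nudges its guess toward attempts a supervisor labeled successful."""
    def __init__(self):
        self.offset = 0.0  # the policy's current best guess

    def attempt(self, target):
        # Each rollout is the current guess plus exploration noise
        return self.offset + random.uniform(-0.5, 0.5)

    def update(self, experience):
        # Learn from experience: move toward the labeled successes
        successes = [traj for traj, reward in experience if reward > 0]
        if successes:
            self.offset = sum(successes) / len(successes)

def learning_cycle(policy, target, num_rollouts, label_fn):
    """One demonstrate -> rollout -> human-feedback -> retrain iteration."""
    experience = []
    for _ in range(num_rollouts):
        traj = policy.attempt(target)          # robot tries the task
        experience.append((traj, label_fn(traj)))  # human labels the attempt
    policy.update(experience)
    return policy

random.seed(0)
target = 1.0  # the "correct" grasp offset in this toy task
label = lambda traj: 1 if abs(traj - target) < 0.6 else 0
policy = ToyPolicy()
for _ in range(20):  # repeat the cycle over and over and over again
    learning_cycle(policy, target, num_rollouts=10, label_fn=label)
# policy.offset has moved from 0.0 toward the target
```

The point of the sketch is the structure, not the learning rule: each pass through the cycle makes the policy "a little bit smarter," and the gains compound across iterations.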
Josh Gruenstein:
We're building general robot artificial intelligence. And that's really what it says on the tin. It's robots that have the same intuition about the physical world and the same ability to do physical actions as human beings.
And today we’re starting with the industrial world, where we can fill labor shortages and provide real value. But our goal is to build generally capable robot brains that can help in the home, for businesses, kind of everywhere in the world. There are going to be robots, and we want to help make that possible.
So in addition to the camera on the head, we also have cameras on the chest and then on the two hands. And that’s really useful if you’re grasping into a dark space or somewhere where the hand can’t necessarily see.
We as humans have so much intuition to be able to fish around for stuff, and we know what to expect. As a robot, that’s a lot harder. So it becomes really useful to have cameras actually on your hands.
There are certain tasks that you just can’t do with one arm. There’s also rate and throughput and productivity, where in a manufacturing environment it really matters to keep pace. And that’s pretty hard to do with a single arm, especially if humans are able to do the job with two hands.
Josh Gruenstein:
So that's another big advantage of having two arms on a robot. Fault tolerance is another objective. Most of the tasks you're seeing today really have the robot switching back and forth between its left hand and its right hand.
But our goal is to build robots that can operate at max throughput with both the left and right hand, in addition to more complex bimanual manipulation.
John Koetsier:
Can you talk about the grippers that you’ve got there in the hands and what that looks like? 3D printed right now? Are you going to put more sophisticated hands on there—digits, fingers, anything like that?
Josh Gruenstein:
Yeah, absolutely. This is kind of the simplest gripper that we could really imagine putting on these robots. It's a really common gripper design in the robotics research community: it's called a fin ray gripper. It's a bio-inspired, compliant design, and it's surprisingly versatile.
You’ll see the robots do some manipulations with them that are kind of unexpected, using that compliance to their advantage. We as humans use compliance to our advantage when we manipulate the world, so it’s useful to have.
But definitely our expectation is, as we scale robot data, as we graduate up the task curve, adding complexity, we expect to build more and more complex grippers as we saturate the capabilities of our existing hands. But as you can see, we have not yet saturated the ability of our current hands.
John Koetsier:
It's really interesting. Most robotics companies don't want to invite people to take a look at what they're doing until they've got it perfect. You're at a very early stage right here, and we're seeing robots that are just learning: What is this thing? Where can I grip? How do I do it?
Why are you letting us see it so early?
Josh Gruenstein:
That’s a really good question. I think I would maybe ask the opposite question of why wouldn’t you show people?
I think in the robotics community, there’s been a very demo-oriented approach—like, okay, the success criteria is we want to show the robot doing the thing once, get a video, and post that video or publish it in a conference or journal.
And I think that’s a culture that we’re going to have to move away from as robotics, and specifically deep learning for robots, moves away from something that exists only in the lab to something that exists in the field.
So we really want to engage with the world, and especially we want to build partnerships early with folks in the industrial world so we can deploy robots into the field. Obviously, it's not going to be these robots quite yet, but our whole strategy is building those long-term partnerships very early.
Most things that you buy come in a box, right?
And you would expect that probably there’s some fancy robot that’s packing those boxes before they get shipped to you. And the reality is actually that’s almost entirely done by hand.
And we have a massive labor shortage in the United States, and we want to reshore and bring more manufacturing and more distribution to happen in the United States. And to do that, we need robots to be able to perform those simplest tasks in the manufacturing and logistics environment.