Sanctuary AI humanoid general purpose robot: a deep dive with CEO Geordie Rose

Sanctuary AI humanoid robot

Now might be the golden age of humanoid general purpose robot development.

Tesla, of course, is building the Optimus robot. Figure.ai is working on one as well, plus others like Chinese company Fourier Intelligence with its GR-1, Boston Dynamics, and Agility Robotics.

So is Sanctuary AI.

Sanctuary says they’re on a mission “to create the world’s first human-like intelligence in general-purpose robots.” They recently released their sixth-generation robot, called Phoenix, and completed their first commercial deployment in March. They’ve raised over $100 million. (See my post at Forbes about the company.)

Today we’re chatting with co-founder and CEO Geordie Rose. (Geordie, by the way, is the founder and former CEO of D-Wave, the quantum computing company that sold Google a $15 million quantum computer and is still operating.)

(subscribe to my YouTube channel)

Subscribe to the TechFirst podcast:

Get TechFirst wherever podcasts are found …

 

Full transcript: Sanctuary AI humanoid general purpose robot

John Koetsier:

Is this the decade of the humanoid general purpose robot? 

Hello and welcome to TechFirst. My name is John Koetsier, of course.

Everyone seems to be working on humanoid robots right now. Tesla, of course, has Optimus, Figure is working on one as well. So is Sanctuary AI out of Vancouver, British Columbia, Canada. 

Sanctuary says they’re on a mission to create the world’s first human-like intelligence in a general purpose robot. They’ve recently released their sixth generation robot called Phoenix. They completed their first commercial deployment in March and they’ve raised over $100 million. 

Today we’re chatting with the co-founder and CEO, Geordie Rose. Welcome, Geordie.

Geordie Rose:

Yeah, thanks, John.

John Koetsier:

Tell me about Phoenix.

Geordie Rose:

Sanctuary as a company is … while it looks like it’s a robotics company from the outside, it’s actually an AI company. 

So we founded the company with a mission to try to create a type of AI: the sort of intelligence that biological creatures have. All animals, people included, have to solve the same kinds of problems to make their way through the world. They have to understand the world around them to a certain extent, and be able to understand what the impact of their actions will be on themselves and their environments. And this type of embodied AI has always been the genesis of, and the thread that’s worked its way through, all of our work.

So for us, the robots are almost a secondary consideration. 

The primary consideration is that in order to build something that’s a truly intelligent software system, this immersion in the real world is not just a nice thing to have; it’s necessary. So we began building humanoid robots specifically, because we believe that the type of intelligence a creature evolves or requires is very strongly connected to how it senses the world, how it acts on the world, and what its goals are. For a person doing work, which is the specific type of activity that we’re focused on, building humanoid robots makes the problem of what to build, in terms of the physical vehicle that the AI controls, trivial, in the sense that we already know what it should look like.

So Phoenix is the sixth of what will eventually be many, many iterations on this theme of building a general-purpose machine, call it a robot, that acts as a vehicle through which sophisticated AI systems can express themselves in the world and understand the world the same way we do.

John Koetsier:

Is that kind of an “Intel inside” approach to robotics and AI? You’ll be the intelligence … will there be multiple embodiments of your AI, perhaps by other companies down the road, if you think five years, 10 years, 15 years out?

Geordie Rose:

It’s hard to say. Right now, our focus is on building a fully vertically integrated system. So we control all of the design for every aspect of the system now. That wasn’t true even one generation ago, when we depended very strongly on external providers to build different parts of the system. Now we’ve moved into a mode where we try to control as much as we can in terms of the supply chain and the design and everything.

So Phoenix is the first physical robot that we’ve built and designed from the ground up internally. Everything about that robot is custom designed and built at Sanctuary: everything from the actuators to the sensors, to the communications networks, the computers, the physical, mechanical, and electrical design. All of it is proprietary and built by us.

So we did that for a reason. In an integrated machine that has an AI-enabled software control system, things like timing and the very fine details of the integration between the software and the hardware are critically important. In a machine like a robot, it’s not easy to separate hardware and software the way you can in, say, a conventional computer, where you can write a piece of code without really worrying too much about the hardware it’s going to run on.

That’s not true in robots. There’s a very strong interplay.

Now, it won’t always be like that. At some point in the future, hopefully, there’ll be standards for how you get sensory data off a robot and how you actuate a robot, so that people can write software to that standard. But that’s not the case today. So we decided that we were going to do it all ourselves in order to accelerate everything we do.

John Koetsier:

When you started talking about biological systems, animals, humans, and how they learn to navigate through the world, do things, and anticipate what other things in their world space might do and become, the word that came to mind was neuromorphic.

Have you gone there or are you more on a traditional computing architecture?

Geordie Rose:

So I have a history in that particular field, because my first company, D-Wave, built what you would call today a neuromorphic architecture. 

It was strange in other ways in that the neurons were actually things called qubits, quantum bits. But the actual architecture was a neuromorphic design. 

I’m not a big fan, I have to say. After years of working on specialized computer architectures, the place where you want to put a specialized computer architecture is where you know you’ve already gotten the best possible gains out of your algorithmic and software infrastructure. And in a field like the one we’re in, flexibility is much more important than performance right now.

So at some point it may be true that you’d want to put special-purpose processors, processors that are custom designed for particular things, in a system like what we build, but right now that’s not the case. It’s much better to be able to work on the software side.

Many of the problems in AI-enabled robotics are computationally very demanding. And because of that, there’s a desire to get better scaling of those algorithms in these systems. Anything worse than linear in the size of the problem is terrible for a practical application. And it’s very difficult to reduce many of the problems in robotics to things that scale linearly with problem size.

So right now, for me, it’s better to think about algorithms and software, and getting gains out of doing things better, than to try to … sorry about that.

[ the lights went out in the room ]

We’ve got this strange system here. Sorry, I’m going to derail the podcast, but we just moved into a new office, sorry.

John Koetsier:

“There is still a human here. I’m still alive. I haven’t died yet.”

Geordie Rose:

Yeah, I’m going to start to move my body a bit more, be a little less robotic.

John Koetsier:

You can dance your way through the podcast. It’s all good. No worries.

Geordie Rose:

Yeah.

John Koetsier:

That makes a ton of sense. I mean, obviously, neuromorphic chip architectures are far from, let’s say, massively commercially scalable, maybe not even commercially viable, one might say.

You would know more about that than I would, but you definitely can’t boil the ocean and invent everything all at once.

Geordie Rose:

Yeah, and I think that there’s clearly a place for hardware that does specific things well, like GPUs for deep learning. There are ways you can use architectures that are not the conventional CPU architecture to do things better. But they tend to be very long-tail investments.

If you’re going to build something like a special-purpose processor, there has to be a really good reason that persists over time. For GPUs, it’s matrix multiplication. That problem will never go away. And it’s not just deep learning, and not just graphics and rendering. It’s a fundamental building block of mathematical computation.

So if you’re going to build something weird and special, like a neuromorphic chip, it has to be solving a really fundamental building block type problem. And right now, I just don’t see that.

John Koetsier:

Cool. So I want to go into your platform. I want to go into what it can do, what it can’t do, where you’re aiming, what it will be capable of in terms of motion and sensing and all that stuff.

But since we’ve sort of gone into the AI and the technology bits and your development focus first, maybe talk about Tesla, Figure, and others who might be building human-like robots … how is your development methodology different from theirs, do you think?

Geordie Rose:

Well, to be honest, I don’t know enough about how the other folks who are building systems do it to be able to comment effectively. I can say that of the companies that I know well who are either in or peripheral to the space, generally they pick a lane where they’re trying to do one particular thing better than everyone else.

There are multiple aspects to this problem, and you can become … a world leader in one of them without even touching the others. So in a humanoid robot, there are several of these types of challenge problems.

So for example, locomotion is a challenging problem. How do you get around from one place to another, especially if it’s bipedal, you know, with two legs? Then there’s the problem of energy density and packing batteries into a system that’s able to walk around without a tether. That’s a very challenging problem.

And there’s the problem of dexterous manipulation, which is once you’ve gotten the robot to where you want it to be, can it actually do the thing that you want with its hands? That’s a very challenging problem as well. 

There’s a kind of thing that wraps all these together, which is the key problem that we’re trying to solve: the … call it the cognitive control system, or maybe more poetically, the mind of the machine. It’s the software that coordinates the conversion of perception data into action.

So we do that: our brains evolved to move our bodies. That’s what that organ is for. It’s for movement.

That’s worth reflecting on, by the way. Even though we tend to think of the brain as the source of the self and language and mathematical ability and the rest of it, what it literally evolved to do is move the body in response to the environment. So when you think about software control systems for robots, that analogy is a beautiful one, because that’s literally what the software in a robot is supposed to do: it moves the robot in response to an external environment.

So the problem that kind of supersedes, or sits on top of, all of the specific things that people are working on is this one: how do you take all the sensory data from a complicated machine and turn it into actuation in order to achieve … essentially any goal that you’d want to be able to specify in language? So stepping up a level of abstraction, the problem we’re working on is: you speak to the robot, you issue it a command, and the robot has to interpret what you mean and then, in the context in which it is in the world, execute that command for you. That’s the central problem we’re trying to be the leaders in.

You can call it the original intent of the AI community: building a general problem-solving machine that can do arbitrary goal-seeking behaviors. And others tend to be working more on parts of the problem.

So I think if there’s one thing that differentiates Sanctuary from everyone else, with some potential notable exceptions, it’s the ambition of the project: we’re actually going after the holy grail of science, I suppose, not just of robotics and AI, but the central, number-one objective that underlies all science, which is how to understand the mind.

That problem is a fundamental bedrock of all human cognition. 

You know, all of our sciences and arts are built on top of our minds, the way we understand the world. So if you can build a technology that can replicate the human mind, you’ve built, in some sense, the general-purpose technology of human existence. It’s the thing that there is nothing more fundamental than.

Now I just want to make this point. I used to be a physicist. I was a theoretical physicist for a while before I got sick of academia, and the thing I studied was kind of the fundamental stuff, like quantum mechanics and general relativity, sort of foundations problems. And back then there was this unexplored dogma, what we would call materialism or physicalism these days … that the reality of the universe is that it’s a bunch of physics equations, and that we live in that reality.

But there’s another way of looking at things, and I think it’s actually more clearly true, that the only thing we have is our experience. So your first person conscious experience of the world is the only thing you know is true. And everything else is built on that. 

So if you can build a machine that has that thing, you’ve built something that’s more fundamental than all of the rest of this stuff. And you can use that machine to any effect that you would use a human mind for and ultimately many more things than just a human mind can do.
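[ To make that framing concrete: the “mind of the machine” Rose describes is, at its core, a sense-plan-act loop that converts perception data into actuation. Here’s a minimal sketch in Python. It illustrates the general pattern only, not Sanctuary’s actual control stack, and every name in it (robot.camera, robot.actuators, goal_reached, and so on) is a hypothetical placeholder. ]

```python
# A minimal sense-plan-act loop: software that moves the robot
# in response to an external environment. All attribute and method
# names on `robot` are hypothetical placeholders.
import time

def perceive(robot) -> dict:
    """Gather raw sensory data: vision, touch, joint positions."""
    return {
        "camera": robot.camera.read(),
        "touch": robot.hands.read_sensors(),
        "joints": robot.joints.positions(),
    }

def decide(observation: dict, goal: str) -> list:
    """Interpret the goal in the context of the current observation and
    produce the next low-level motor commands. In a real system this is
    where the AI lives; stubbed here to keep the sketch self-contained."""
    return []

def act(robot, commands: list) -> None:
    """Send actuation commands to the motors."""
    for command in commands:
        robot.actuators.send(command)

def control_loop(robot, goal: str, hz: float = 30.0) -> None:
    """Sense, decide, act, repeat, until the goal is reached."""
    while not robot.goal_reached(goal):
        observation = perceive(robot)
        act(robot, decide(observation, goal))
        time.sleep(1.0 / hz)
```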

John Koetsier:

Objective reality versus subjective reality. 

And of course, machines that we create embody some of our own subjective perceptions as well. Super interesting things there. 

So I still want to get into the capabilities of the thing and all that, but we’re in a very interesting space right now. You started the company in, I think, 2018. So we’re talking five years ago or something like that now.

How has the emergence of LLMs changed anything that you’re doing or impacted anything you’re doing?

Geordie Rose:

So for us, this has been an interesting journey, because I’ve been following that type of model for a long time, you know, back to the char-rnn days, which were like more than a decade ago now, when people were building these autoregressive models where, basically, using the statistics of your input data, you predict the next token.

So I was following them fairly closely for a long time. And I happen to know one of the key OpenAI folks who was driving their agenda, and in the early days we talked about this particular thing.

What struck me is how convincing he was and how much conviction he had that this was just going to work. And I didn’t agree with him. My perspective was that it was going to plateau in capability, and it was going to be kind of stupid the way that the previous systems were. It would be kind of funny if you were writing poetry or something like that, but it would never actually do anything useful. That was my perspective. And he was like, no. I think his quote was ‘success is guaranteed.’

John Koetsier:

Wow.

Geordie Rose:

And I was like, Oh man, I wish I could be as sure of anything as you are of this. 

And they put their money where their mouth is: they spent an awful lot of money making a bet that he and his colleagues were correct. And they were.

So this is important, because I’ve always viewed that approach as having a use case that’s not the one people usually think of. The use case I’m interested in, and always have been with that technology, is something called task planning.

So if I give you a goal, like, I don’t know, please make me a sandwich, then you have to decompose the goal into steps.

John Koetsier:

Hundreds of steps!

Geordie Rose:

Yeah. 

And the steps have to both lead to the objective, and they have to be kind of sequential; they build on each other. Each step has what are called pre- and post-conditions. So for example, if one of the steps is slice the tomato, there’s a precondition that you have a knife in your hand, which has a precondition that you grasped the knife, and so on, and you can work back these preconditions all the way …

John Koetsier:

That you found the knife, that you knew where the knife was, that you picked the knife up, that you opened the drawer where the knife was …

Geordie Rose:

Yeah. 

So my hope, even back in the early days when I was looking at these models, and I did a lot of experiments with them before they were ready, was that you could issue a natural-language prompt to a software system like this and it would generate a task plan for you. And the reason that’s important in robotics is that if you do it right, each element of the task plan is an executable element of something like an instruction set in a processor.

John Koetsier:

Huge for training.

Geordie Rose:

It’s something that you could actually execute in the wild.

So this would be a generalized task planner. This was a new idea back then. We worked on it quite a bit and never got it to work. So when the LLMs got better, we of course thought, well, okay, maybe they’re ready now. We did a bunch of experiments, and they’re not, but they’re on a trajectory where, if you do certain things to them, non-standard things that involve integrating logic and reasoning and not just statistics, you might be able to build a very powerful general task planner.

And that is a really big deal, because then you can essentially ask the robot to do anything at all, and it will decompose the request into steps, each one of which it knows how to do, and then you can have a general intelligence in a machine.

So my thing with the LLMs is kind of a mixed bag, because I think we’ve always looked at them in a way that’s different from most people, because we’re not all that interested in conversation.

What we’re interested in is the use of language to specify goals, which is a little bit different. And that means something specifically mathematical in our case. So I was very pleased to see how well they worked. I was very happy to see my friend vindicated because it could have all gone sideways and it would have been a huge bust, but that’s not what happened.
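[ To make the pre- and post-condition idea concrete, here’s a toy STRIPS-style task planner in Python for the sandwich example. The step library, facts, and names are invented for illustration; a real task plan would be grounded in the robot’s actual skills, and this backward-chaining loop handles only simple linear chains. ]

```python
# Toy task planner: work backward from a goal through each step's
# preconditions, then emit the steps in forward, executable order.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    preconditions: set  # facts that must hold before the step can run
    effects: set        # facts that hold after it runs (post-conditions)

# A tiny library of steps the robot "knows how to do" (illustrative)
STEPS = [
    Step("open_drawer",  set(),             {"drawer_open"}),
    Step("grasp_knife",  {"drawer_open"},   {"knife_in_hand"}),
    Step("slice_tomato", {"knife_in_hand"}, {"tomato_sliced"}),
    Step("assemble",     {"tomato_sliced"}, {"sandwich_done"}),
]

def plan(goal: str, state: set) -> list:
    """Chain preconditions backward from the goal, then reverse."""
    ordered, needed = [], [goal]
    while needed:
        fact = needed.pop()
        if fact in state:
            continue  # already true, nothing to plan
        step = next(s for s in STEPS if fact in s.effects)
        ordered.append(step)
        needed.extend(step.preconditions)
    return list(reversed(ordered))

for step in plan("sandwich_done", state=set()):
    print(step.name)  # open_drawer, grasp_knife, slice_tomato, assemble
```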

John Koetsier:

It’s amazing and interesting. And I’m not surprised that the LLMs we have right now are not sufficient to give you that kind of task workflow, because the training data they’re built on is not like that in most cases.

And like you said, you can’t fail. You can’t just be statistically accurate and end up actually cutting somebody with the knife you’re using to cut the tomato. You have to be rooted in reality there. So that’s super interesting.

(I would suggest that people who engage with your robot, if your robot is ever in domestic service of some sort, might be interested in talking to it.)

But let’s get to where we’ve been heading through all this theory and big thinking behind your robot. Talk about this Phoenix that you’ve built. You’ve talked about human-like full body mobility. 

How many joints is that? How capable is it, and how hard is that to build?

Geordie Rose:

Most of a robot like this is actually fairly straightforward. And I have to say that somewhat tongue in cheek, because obviously …

John Koetsier:

For who? Ha ha ha.

Geordie Rose:

Yes, for who? 

So if you’re the best in the world at what you do in several disciplines, it’s actually really straightforward to build a humanoid robot with one exception. Yeah, I’m talking about the physical part now. The exception is the hands.

John Koetsier:

Mm-hmm. Yeah.

Geordie Rose:

So you can build a bipedal walking robot. You can build a system that has the apparent degrees of freedom of the upper body: the shoulders look right, the arms look right, it moves its neck, and all that sort of thing. That’s fairly straightforward.

The problem is that those systems are not useful. 

So if you concern yourself only with what I’ll call utility tasks, which are things that have value aside from human interaction, which is a whole other thing, utility tasks are almost all driven through the hands, with very few exceptions.

So even if you build the best walking robot in the world, if it doesn’t have hands, it can’t actually do anything. This is a thing that I took as self-evident, but apparently it’s a revelation to some folks: if you don’t have hands, there’s not a lot that you can do.

So the reason why hands are difficult is kind of multi-threaded. You asked about the joints: our robot has about 75 degrees of freedom, which is the joints. Think of them as things that move, that are controllable from outside.

And about 44 of those are from the wrist up.

John Koetsier:

Wow.

Geordie Rose:

So more than half of the complexity of the robot is in the hands.

The hands not only have to be able to move the way that the human hand moves with high fidelity in order to do the sorts of things we do, but they have to be covered with sensors that are sophisticated and difficult to build. And those sensors need to be encased in something that’s robust enough to survive years of use.

The sensors have to be constructed so that the signals they generate don’t drift over time, which tends to be a problem with this sort of sensor. And they have a whole bunch of other constraints: mechanical, electrical, and so on. So actually building a hand that’s like ours is beyond the boundaries of science.

John Koetsier:

Wow.

Geordie Rose:

It’s not just that Sanctuary can’t do it. It’s that no one knows how to do it. 

I think of all of the hands that have ever been built, ours is clearly the best today, but there’s still a big gap between the human hand and what we can build. 

And so Phoenix is in some ways a hand delivery mechanism. 

It’s a robot that lets you put human-like hands in the places that hands need to be in order to generate value through work. So think about tool use or the sorts of things we take for granted, like picking things up and putting them in boxes or taking things out of boxes or assembling things in a manufacturing context. 

Simple things like taking batteries out of machines and putting batteries into machines are, surprisingly to many, beyond the state of the art in robotics, because when people think of robots, they tend to think of the sort of YouTube shots where robots are doing something in an automotive manufacturing plant, flipping a car around to millimeter precision. But the reason those robots look so great is that the problems they’re solving are in very structured environments, and they never change.

That’s not the way nearly all work is. Nearly all work is very morphable and changeable, and everything is not in exactly the same place every time you do it, even in manufacturing environments. So the types of systems we build push away from conventional robotics into systems that can sense the world more like we can, and then use those senses to condition action.

So as an example, if you want to pick something up, you need to know what it is and where it is. That’s not a trivial problem in the robotics world. So we’ve become the best in the world at all that stuff …

John Koetsier:

Estimate its weight, estimate its strength …

Geordie Rose:

So we’ve become the best in the world today at being able to build systems that can arbitrarily do in-hand manipulation, which includes grasping and moving things around and so on. And that’s come from our focus on the physical hands and the control systems that integrate the senses into action, specifically for in-hand manipulation.

John Koetsier:

That’s really amazing. And the spillover products here are pretty insane, because grippers in robotics and manufacturing and all the robotic industries, that’s a big deal, right? You’ve got flippers, you’ve got just graspers, you’ve got things that turn a little bit, you have some with rudimentary fingers, you’ve got things that use suction to pick something up …

Getting real hands that work is … it’s one of the holy grails as well. I was wondering about another degree of freedom because whenever we see a robot, humanoid or vaguely humanoid, and it needs to see something over there in that direction, it does the little dance and it turns its whole torso or its whole body because it doesn’t have a neck. 

Does your robot have a neck with degrees of freedom?

Geordie Rose:

Yeah, the neck has … so visual inspection is super important for grasping. So if you have a bunch of stuff in front of you on a table and you want to pick something up, being able to move your head to know where it is and what it is is super important. 

So these necks are … they’re not exactly the same mechanically as a human neck, but they do have three degrees of freedom, which allows them in practice to do the sorts of movements that the human neck does.

So think of it like a shoulder in your neck. It has the three motions: pitch, yaw, and roll. So when you watch the robot, there are certain things that the human neck can do that ours can’t, like that thing where you move your head to the side.

John Koetsier:

I can’t really do that, I have a bad neck …

Geordie Rose:

We can’t do that in the robot. But this sort of thing is totally doable. And we use it all the time.

John Koetsier:

That is a huge thing for real-world speed of a robot doing something. 

If it has to shuffle around an object to get stereoscopic vision of it, or see what’s behind it, or see if it’s connected to something before picking it up, it can take a minute to pick something up. Whereas a human can look around, see that it’s a discrete individual object, and pick it up. There you go. Super interesting.

So you talked about walking, bipedal walking. It’s a challenge, obviously. How do you feel about your progress there?

Geordie Rose:

So we have two approaches to that problem. I should just say, to put it in context, that’s one of the technologies that isn’t on our critical path, by which I mean none of the things that we’re doing with customers or in our short-term technology roadmap depend on bipedal walking. The systems right now are either static and tethered, which means they don’t move and you’ve got cables attached to them, or they’re on a four-wheel base when limited untethered movement is required.

The thing that puts the biped in a kind of weird category is that there’s a perception that a true robot of this sort needs two legs. There’s a kind of aesthetic, or maybe visceral, sense that somehow the property of having two legs is very closely associated with the property of humanness. I’ve never really believed that. You know, there are a lot of people who don’t use their legs, who use wheelchairs or otherwise, who are just as much people as anyone else.

I think what makes us people is our minds and our hands. That’s my thesis. So I think a lot of the bipedal walking stuff has been performative rather than value-creating. And that’s not to put it down too much, because it’s a really interesting problem that a lot of people have made progress on.

So the way that we’ve dealt with it in our own investment thesis is that we’ve put it in the long-term category, and we’ve made an investment in a company called Apptronik, which is a world leader in bipedal locomotion. We’ve essentially invested in them to work on that problem for us outside of Sanctuary. And we do have a program internally, but it’s mostly about developing simulation environments and running simulations of potential bipedal designs.

So it’s a software-only project right now, focused on the simulation aspects of how you would train particular bipedal robots to do different actions, like walking up and down stairs or climbing a ladder, all that kind of stuff, but in simulation. That’s how we’ve dealt with that problem.

John Koetsier:

Super interesting. You can’t do everything all at once. 

Let’s talk battery a little bit. You mentioned that your shipping systems are generally tethered and aren’t currently bipedal. That has a positive impact: you don’t have to put a big, massive, heavy battery in there.

What’s your thinking about battery and a usable lifespan, recharging or swapping, all that stuff?

Geordie Rose:

So obviously it’s not a problem with the static and tethered systems because they have power coming in from a cord. 

When you go to the wheeled systems, there are batteries in the base. Now, in that case, you can afford to load it up with as many batteries as you can, because a base can hold hundreds of pounds. So you don’t have the same kind of constraints and you can do a very good job there. You can get hours and hours of useful battery life in a base like this. 

When you go to a bipedal robot, the game changes significantly. And one of the reasons this problem is hard is that battery technology right now only allows bipeds a fairly short uptime. But I think the way to solve this problem is just to have the robot hot-swap its batteries itself. It’s a kind of interesting idea, I suppose, but it’s not complex technologically.

So imagine you have a rack of batteries, and the batteries are accessed through the core of the robot. The robot has hands, right? So you can design the thing so that it pulls one of the batteries out of its own chest, slides it into the recharging system, reaches and grabs a fresh battery from the recharging system, and inserts it into its chest.

So you can do this in a hot-swap way, where a different battery takes over while one of them is out. So I think the way this problem ultimately gets solved in untethered bipeds, at some point in the future, is that the systems have to be able to hot-swap their own batteries, so that in a fully autonomous system there would be a battery-level measurement.

And if your level goes below a certain point, you go back to the recharging station and recharge, just like a Roomba or something like that. So I think that ultimately that’s the way the problem gets solved.

Because I actually don’t think the physics of batteries are anywhere near the energy density of the human body, you know … we can eat a carrot and run a marathon. Batteries are not nearly that good. So the human body, and all biological organisms, are much more energy efficient in everything.
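[ As a sketch, the self-serve hot-swap policy Rose outlines could look like the following control logic. The method names and the 20% threshold are assumptions for illustration, not Sanctuary’s design. ]

```python
# Battery hot-swap policy: monitor charge and, below a threshold,
# return to a rack and swap packs. All names and values are illustrative.
SWAP_THRESHOLD = 0.20  # assumed: head for the rack below 20% charge

def battery_policy(robot) -> None:
    if robot.battery_level() >= SWAP_THRESHOLD:
        return  # enough charge, keep working
    robot.navigate_to("charging_rack")         # like a Roomba heading home
    depleted = robot.remove_pack_from_chest()  # the hands pull the pack out
    robot.insert_into_rack(depleted)           # depleted pack starts charging
    fresh = robot.take_charged_pack_from_rack()
    robot.insert_pack_into_chest(fresh)
    # A second onboard pack keeps the robot powered during the swap,
    # which is what makes it a "hot" swap rather than a shutdown.
```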

John Koetsier:

Just watch … we’ll develop organic, bio-organic robots, and they’ll eat something and store energy in fat cells. And … 

Geordie Rose:

Maybe someday, but not anytime soon. We’re going to need AGI. We’re going to need general intelligence to help us with that one.

John Koetsier:

Excellent. 

So it is interesting when we talk about wheeled versus bipedal, because there are so many applications for a wheeled robot that are, frankly, probably way better than for a bipedal robot. And I’m thinking about warehousing. I’m thinking about potentially manufacturing in known spaces like a factory or a warehouse. You know you have a flat, level floor. It’s always going to be that way; it can easily be maintained that way. There may be obstructions in your way, but you can go around them, and it’s still flat, and you’ll use way less energy. You’ll be much more stable, and you can carry much more energy with you as you move around.

So there’s huge potential for a lot of the things that humans do in industry and transportation logistics that you can use a wheeled robot for. If you look at, let’s say, agriculture, or at in-home uses or real-world outdoor uses, legs become more important. You’re not certain of your surface. You’re not certain whether it’s flat wherever you’re going, and you probably have to get up some stairs.

There are other approaches to that than two feet and two legs, but that’s probably one of the better ways. What do you think is your timetable to having that kind of technology?

Geordie Rose:

It’s unclear, because it depends on external factors outside of our control; like I said, we’re not actually working on it directly. It’s hard to say. I think these things are all kind of gray areas.

So, first of all, we got to get the lights working properly. 

So for example, Agility Robotics has a perfectly fine bipedal walking robot that’s capable of doing stuff in fairly general environments. Boston Dynamics, of course, has been working on legged locomotion for a long time and is very good at it. So I think it’s context dependent. As you say, certain applications actually do require legged locomotion. Some don’t.

In terms of the complexity of the underlying technology, especially if you want to use them for utility tasks, like actually use hands to perform tasks, it’s very difficult to do that in a bipedal context today. 

And I think that if I had to choose between the two, I would pick bipedal, because I think it’s actually more general than the wheeled case.

But the reality of the situation is that that problem compounds the difficulty of the others we’re already trying to solve, and the problems we’re trying to solve are already very difficult. So sometimes you just have to make decisions about where you focus your efforts, and I think that in this one, the decision to focus the effort on the thing that actually brings value to work, which is the hands, was the right one.

John Koetsier:

The mind boggles at the difficulty there with bipedal, because as you say, when you’re doing things, when you pick something up: are you picking up something out there that you’re stretching for, or are you picking up something close?

This makes a difference in where you place your feet, how much weight is on each foot, how you’re balancing with the rest of your body, and everything else, all much simpler when you have a stable, wide base. So let’s move off that. I look forward to that. I want to see that. I want to live in that reality, that I, Robot science fiction reality.

But where do you see your robots working first? You’ve got an install already with a customer, perhaps in beta, I’m not certain. But where do you see your robots being used commercially first?

Geordie Rose:

Well, the first deployments that we did and spoke about publicly were in retail environments, doing general tasks. The thesis of a general-purpose robot is that it’s a system that can do anything you ask it to do, and retail environments are very general in the sense that there’s a lot of different stuff you need to be able to do in order to run a retail operation. It encompasses logistics as a component of it, because stores need to receive and ship merchandise of multiple different kinds.

So we did our first real-world deployments of the technology in retail stores under a brand that’s well known in Canada, Canadian Tire. About 90% of Canadians are within a 15-minute drive of a Canadian Tire store. It’s one of our big cultural icons, I suppose, in Canada.

John Koetsier:

Canadian Tire and hockey, yes.

Geordie Rose:

Yeah, so the stores were both brands that Canadian Tire owns. And the thesis of what we were trying to do there was to show that we could do a lot of different things. We ended up being able to do about 40% of the entire workflow of both stores that we deployed in, everything from front-of-house stuff to back-of-house stuff like receiving, shipping, depalletization, all that kind of stuff. So we were able to do all of those things. The thing with retail that’s difficult, though, is that the wages are low.

So in order to do it in a way that’s profitable, you need to take a lot of cost out of the system. And that’s not the stage that we’re at right now. It’s very expensive to deliver this capability.

So going forward, we’re likely going to focus more on higher-value tasks, and we’ll get to the broader, more general tasks later. Right now it makes more sense to focus on things that, for one reason or another, are very difficult or expensive to do. And one of the themes that we’ve been looking at very carefully is fly-in, fly-out jobs.

So Canada is big, and in the remote north there are lots of places that are very expensive to get specialists or experts to, for a variety of different uses. So something that might be a five- or ten-minute job could cost you $100,000 to get somebody to the site to do it. So these types of remote applications are examples.

And another thing that we’re looking at very carefully is manufacturing, for a strategic reason. In North America, manufacturing has taken on a very strategic aspect. Being able to make things is really important for the United States, and they’ve gone through decades of not investing in making things. So that’s got to change.

And making things in the modern world means using advanced AI and robotics, to a certain extent, and maybe to a full extent at some point. And we want to be involved in that. So we’ve been looking primarily at those two kinds of applications: automating things in the remote north, and manufacturing, particularly assembly jobs in manufacturing contexts.

John Koetsier:

Super interesting, and super necessary as well. We would be remiss, in this whole conversation about general-purpose robotics, if we didn’t talk about the future of work, and maybe even the future of jobs and people and where it all fits.

What’s your perspective on this? How do you think a future in which we have relatively cheap, mass-produced, scalable robots – general purpose robots – that can go into the world of work and do general purpose tasks … how do you think that exists, coexists, displaces, lives side by side with people?

Geordie Rose:

So many of the engagements we have with customers are driven by the fact that they can’t hire people to do the work they need done. The common theme that’s driven our engagements with customers is that they have thousands, in some cases tens of thousands, of roles they need to fill that they can’t find people to do. And partly it’s that the economics dictate they can only pay a certain amount, and people aren’t willing to do that job for that amount of money anymore.

Sometimes these jobs have other considerations that people don’t want to take on: they’re dangerous, remote, dirty, boring.

So the world of work right now is being driven by two factors. One is that declining birth rates have really changed the nature of the discussion when it comes to the role of automation in the future. Because up till recently, there was this idea that populations would continue to grow all the time and that there wouldn’t be enough work for people. But that’s not what’s happening. What’s happening is that populations are going to decrease at some point in the not-too-distant future, not increase.

And when that happens, what happens to populations is that nearly everybody is older.

So I read a statistic recently: there are more people over the age of 70 in New York City today than there are in the entire state of Vermont. This aging of the population is the side effect of decreasing birth rates that you see first. So the first thing you see when birth rates decline is that everybody gets older. And that has immediate effects, because when you want to draw your pension down in Canada, there may not be any money left in that pension fund.

This can have catastrophic effects, like the ones emerging now in Japan and South Korea, where the birth rates are so low, and they don’t make up the difference with immigration, that there’s just nobody to do the work anymore. Civilization can collapse in that sense. So there’s this demand for more people, and there are fewer people being born. That’s not going to reverse, by the way. It’s going to accelerate as technology gets more mature and everybody gets a higher standard of living.

So the question is, how are we going to solve this problem that there aren’t any plumbers or electricians or Uber drivers or doctors? There just aren’t enough people to do all the work. I think the answer has to be technological.

We have to find a way to build a new kind of machine that can do the things I’d call the base layer of civilization: the sorts of things that we know need to get done but that nobody really wants to do. That should all be done by machines.

So where I think this is going to go is that over time, and I’m not sure exactly how long we’re talking here, but say a few hundred years, the population of the earth is going to dramatically decline. But the standard of life will skyrocket, and part of what drives that standard of life will be that most of the things that support flourishing are going to be automated. And the people who are around between now and then are all going to be employed, making a lot of money, doing the sorts of things that we’ve always done.

One of the misconceptions about AI and robotics is that it takes. That’s not right. I’ve been building robotics companies now for almost 10 years, and every single time something gets automated, it gives. Because what you’re doing is providing an opportunity for people to not have to do that shitty thing that’s now being done by a robot, and they move somewhere else in the organization, typically making more money doing something a lot more fulfilling.

Because the things you automate are the things you’d think you’d automate: all of the boring, repetitive, dirty, dull, dangerous things. And so that’s where all this technology is going to go.

There are more than 10 million unfilled roles in the United States right now. So for Sanctuary, our business model charges out at roughly, say, $100K to $150K a year per robot. Let’s say we could fill 10 million unfilled jobs today with our robots.

This is the biggest company in the world by a long shot.
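[ Quick arithmetic on that claim: 10 million robots at $100K to $150K per year works out to roughly $1 trillion to $1.5 trillion in annual revenue, well above the annual revenue of any single company today. ]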

It’s bigger than the automotive industry, one company. And we haven’t touched a single job. We haven’t taken a job from anybody. And when you fill those jobs, all those companies do better. They hire more people, and people are paid more as a result. So, I’m not an economist, but I know a few, and my view of the future is actually very rosy.

I think that AI and robotics together solve many of the existential threats that we may face as a civilization.

Just as another example: if we actually want to go to other places that aren’t Earth, like the moon or Mars, or maybe even other star systems at some point, it’s not going to be people doing those things. That’s a weird dream, you know, just not realistic, because we’re not built to live in those places. But that doesn’t mean we can’t send machines there. You know, there are a lot more robots on Mars right now than people, and that’s not going to change.

John Koetsier:

Infinity more!

Geordie Rose:

Yeah. So when we build machines that are more like us, that can think about the world the same way we do, they can do things in these places that we simply can’t. They could mine, they could build, they could prepare, they could explore and map, and all sorts of other things. And so I think that the future with advanced robots and AI is the brilliant science fiction future that things like Star Trek kind of promised us.

And not only that: I don’t think we’re going to do very well if we can’t solve the problems of building these technologies, because they actually do solve really hard problems that there’s very little other option for solving in the future.

John Koetsier:

Isaac Asimov, who of course wrote I, Robot and many other of these books, and who was a scientist himself, called that C/Fe: C for carbon, Fe for iron, essentially. And that is interesting from a very real perspective, which is your perspective: building the AI and robotics that will enable this base layer of existence will save civilization rather than doom it.

Geordie Rose:

Yeah. And I think ‘saving it’ is maybe a little hyperbolic, but I do think it can help, in the same way that, you know, cars helped us get around.

Now, imagine a world without cars. You could imagine that; a couple hundred years ago, we were in that world. It was a very different world. We weren’t able to get around as much as we do now.

In the future, we’re going to think: remember, there was a time before we had general-purpose robots? How did we do all of those jobs that the robots are doing now? Like, who cleaned all the toilets, and who made all the clothing, and all of that?

So I think that this transition we’re going through is going to be difficult, because there is always turbulence when you introduce a very powerful technology. But at the end of the day, it’s going to be looked back on as one of the crowning achievements of humanity: being able to build machines that think about the world the same way we do.

John Koetsier:

Geordie, this has been fascinating. We’ve gone deep, we’ve gone surface, we’ve gone through it all. I appreciate your time. Thank you so much.

Geordie Rose:

You’re welcome. Thanks for having me.