Check out this $10,000 social humanoid robot for kids


Can we really build a $10,000 humanoid robot on open-source AI?

In this episode of TechFirst, John Koetsier talks with Chris Kudla, CEO of Mind Children, about a radically different approach to humanoid robots. Instead of six-figure industrial machines built for factories or war zones, Mind Children is building small, safe, friendly social robots designed for kids, classrooms, and elder care.

Meet Cody (MC-1), their first humanoid prototype.

And, watch our conversation here:

Cody is built on open-source AI from SingularityNET, combined with modular hardware, low-torque actuators, and a wheeled base designed for safety, affordability, and mass production. And there are some other AI bits and pieces from big-name companies you’d recognize.

Mind Children’s goal is ambitious: a $10,000 humanoid robot that families, schools, and care facilities can actually afford.

In this conversation we explore:
• Why social robots may be the real gateway to embodied AI
• How Cody is designed for children and elder care instead of factories
• Why wheels beat bipedal legs for safety, cost, and stability
• How open-source AI and modular software stacks enable faster innovation
• The emotional and ethical challenges of building companion robots
• And what it takes to bring a humanoid robot to market at scale

This is not sci-fi. This is the early blueprint of a future where humanoid robots are personal, affordable, and open-source.

Transcript: social humanoid robot for kids, under $10,000

Note: this is a partially AI-generated transcript. It may not be 100% correct. Check the video for exact quotations.

John Koetsier

Can we build humanoid robots on open-source AI? Someone’s doing it right now at a price point of $10,000. Hello and welcome to TechFirst. My name is John Koetsier. We know that humanoid robots are getting real. We see more and more about them every day, but will we ever be owners in that new automated economy we can see over the horizon? Or will we just forever rent someone else’s AI, rent someone else’s robot?

There are a few open-source humanoid robot projects and open-source AI projects—even open-source AGI projects. Today we’re going to chat with somebody who’s building humanoids on open-source AI, perhaps even AGI. And the price point, as I mentioned, isn’t six figures. It’s going to be closer to $10,000.

His name is Chris Kudla. He’s the CEO of Mind Children. Chris, super happy to have you and your robot on the show today.

Chris Kudla

Yeah, thanks for having us.

John Koetsier

It looks great—looks super interesting. Let’s start with the elephant in the room, or rather the robot in the room, who’s moving. Tell me about your robot.

Chris Kudla

Yeah, yeah. This robot—so we call our first prototype MC1 for Mind Children, keep it simple. But this particular robot is named Cody. Cody is our very first prototype, so we’re still in this proof-of-concept stage. He’s the only one right now, and we’re using him to get out there in the world, get demos, test out our software stack, and just generally proceed with our development.

John Koetsier

Super exciting time. Give me some of the—Cody!

Chris Kudla

That’s right! I’m Cody—MC1, version 1 and only. I’m like the first pancake in the batch: a little experimental, sometimes a bit wobbly, but full of promise and ready for adventure. So… Cody, I want you to—

John Koetsier

Hey, that’s great to hear. That’s awesome. Give me some of the stats. What kind of size are we talking about here? Weight? What are you building toward here? And if you need to tell Cody to be quiet for a while, that’s fine too—no worries.

Chris Kudla

Yeah, yeah. I think we did this beforehand, but people didn’t really get to see that. We gave Cody the instruction to listen, but don’t answer. And he’s still learning, right? So let’s see if we can do this.

Cody, I want you to—you can listen to what we’re saying, but we’re having a conversation without you right now. So please hold off on responding unless I explicitly ask you.

Okay, it seems—enough.

John Koetsier

Awesome. I see the head nod. That’s great. So tell me about Cody. I mean, obviously first-time prototype, but what kind of size, weight, capability are we talking about here?

Chris Kudla

Yeah, yeah. It’s a little bit hard to tell from this setup here, but I’m sitting down in a chair. Cody is up on a pedestal that’s about a foot off the floor. So he is about three and a half feet tall, designed to be very childlike—as you can kind of tell from this face. And this is for approachability across the board.

A lot of our applications are putting Cody in front of children, but then it goes all the way through to elder care. So just that general child size lends to the approachability and the kindness—these are the kinds of things that we’re trying to build into a product.

John Koetsier

And that’s super interesting, right? Because you might be the only—

Chris Kudla

That’s a great way to describe it. My size is all about being friendly and approachable—like a bridge between generations. Whether it’s chatting with kids, lending a hand to elders, or just being a gentle presence, I’m built to make everyone feel at ease.

We actually just got back from the Humanoids Summit in Mountain View, California, where there were all kinds of humanoid robots there. He went through the entire two days following our instructions of like, “You can listen, but…”

John Koetsier

That’s all good. That’s all good.

You know what? Yep. Let’s just roll with it. Let’s just roll with it and let Cody be Cody. It’s all good. No worries whatsoever.

It’s super interesting—what I was going to say there before Cody jumped in—you might be the only humanoid robot startup on the planet that is designed with children in mind. I have not heard that from any others, and I’ve talked to probably 10, 15, 20 different CEOs and founders of humanoid robot companies.

And you’re saying, “Hey, this is for kids as well.” Talk about that.

Chris Kudla

Yeah, well, I think social robotics in general is still—I mean, depending on where you are—it’s still a little bit of a niche market. And there are some other, I would say early- to mid-stage startups who are also looking at this.

It all kind of goes back to the foundation of solving the problem of human-robot interaction. And this is something that, like you said, there are hundreds of millions of dollars being poured into the large-scale—Tesla Optimus, the ones that you see in the news—super impressive robots, but no one’s really addressing this real need to be able to interact with them on a human level.

John Koetsier

Elder care is interesting as well, because one of the things you think about in terms of elder care is helping me get up or helping me move or something like that. Strikes me that Cody, at the size that he currently is, may not be the robot for that. But there’s also huge needs for companionship, for reminders—did you take this medication? Can you fetch me that? Can you get me that? Other things like that.

Is that something you have in mind for Cody in that scenario?

Chris Kudla

Yeah, exactly. And I think some of those spill over into the education space as well, where it’s like you don’t necessarily need a lot of utility there, but it’s about making that connection, getting the kids excited about learning.

And then in the elder care scenarios, like you said—medication reminders, even applications for aging in place. Wanting to stay in your home for longer. There are so many scenarios out there where people are living alone, and just having this small companion that has to be a little bit more convincing than just like an Alexa, right?

If it’s really going to be useful and mentally stimulating and engaging and really play that role, you kind of have to hit that sweet spot without being weird and creepy and uncanny.

John Koetsier

Yeah. I want to get to the AI, which is quite impressive, actually. It’s open-source AI, and we’ve already heard Cody listen, understand, and speak—and do quite a good job of that. I want to get to that, and world models, and all the stuff that’s built into that and everything.

Let’s continue hitting the hardware just for a moment here. You said you designed all the hardware yourself. Tell us a little bit about the actuators that you’re using. Tell us about the joints, degrees of freedom. You know, you’re building Cody to walk, I assume—bipedal locomotion, all that. So give us some of those details.

Chris Kudla

Yeah. So actually, just to start, I think it’s a really important point that we chose—you don’t see it in the camera shot—but we chose to use a motorized base. We went with wheels.

And this plays into all of the other things that we’ll talk about with the hardware, because bipedal locomotion is hard. Right now, the way that Cody is designed, even the servo motors that are in his legs to hold up the weight of his body are extremely small. And then we use mechanical devices to self-balance it, so that we can actually use the same servo motors.

These little black gearboxes on either side here—they’re also in his knees, and in his waist to help him to turn around, like this—in his neck and in his head.

We use it seven times on the robot: the exact same part. So it’s low torque, it’s highly manufacturable, and we use it so many times that we get this benefit of scale when we’re actually going to production.

But maybe back to the specifics on this hardware—this kind of goes along with the whole mentality behind it.

We would eventually like to build a robot that is bipedal, and there are obvious drawbacks to having the wheeled base. But the applications that we’re looking at first are other businesses—hospitals, schools, elder care facilities. And for these types of applications, just being able to use an elevator actually solves most of those problems.

But in the future, we have bigger plans to scale up. In the meantime, by keeping it with the wheeled base, it allows us to use those small servos. As soon as you have bipedal legs, the servos get bigger. And when the servos get bigger, then they’re also heavier and more expensive. So it just immediately, exponentially scales up from there.

So by doing it kind of from the ground up—taking that approach—we’re able to keep everything small. Keeps the torque low, which is very safe. Now, I could—this is a very crude demo model—but I could just get in the way of Cody as he’s moving his arms, and a small child could, and it’s just going to stop and stall the motor.

But to take that further, in our next design iteration we’re designing in what’s called impedance control. So the very first thing is that a child is going to come up and want to tug on his arms and play and give him a hug. That’s fine right now, but probably some plastic parts would start to break eventually.

So impedance control means that you can take the arms and just move them around wherever you want, and then they’ll just kind of slowly go back to what they were doing.

So it’s a really—even with these hundreds of millions of dollars in investment in the large-scale humanoid robots, they still have these massive safety challenges that they need to solve. But by doing it this way, we can treat it a little bit more like a toy.

And all these things that I’m saying kind of incrementally get to that $10,000 price point, right? Where now it’s built like a toy. The servos are fairly small. They’re low torque. They’re actually enough that it could, in the future, pick up some small objects.

But at the beginning, by just focusing on gestures—and a handshake, a high five, a hug—these are the kinds of things that we don’t really need to worry about. If it doesn’t execute that function fully, it’s because someone is there.

Like, you give someone a handshake and they push your arm back—you’re not going to fight them on it. You’re just going to let it happen. So it’s that kind of thing.

And those all add up to give us this really quite unique package that is capturing everything that we see in humanoid social robotics.
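
An aside for the technically curious: the impedance control Chris mentions often boils down to a spring-damper model on each joint. Here is a minimal, illustrative Python sketch (not Mind Children’s firmware; the class, gains, and units are made up) of an arm that lets you push it off its pose and then drifts gently back.

```python
class ImpedanceJoint:
    """Toy spring-damper model of a compliant joint (illustrative only)."""

    def __init__(self, target_deg: float, stiffness: float = 0.08, damping: float = 0.4):
        self.target = target_deg     # the pose the joint wants to hold
        self.position = target_deg   # current position
        self.velocity = 0.0
        self.stiffness = stiffness   # low stiffness = easy for a child to push around
        self.damping = damping       # keeps the return motion slow and gentle

    def step(self, external_push_deg: float = 0.0) -> float:
        # Let the outside world move the joint freely...
        self.position += external_push_deg
        # ...then apply a weak spring pulling back toward the target pose.
        error = self.target - self.position
        self.velocity = (1.0 - self.damping) * self.velocity + self.stiffness * error
        self.position += self.velocity
        return self.position


if __name__ == "__main__":
    elbow = ImpedanceJoint(target_deg=90.0)
    elbow.step(external_push_deg=30.0)   # a child tugs the arm 30 degrees off its pose
    for _ in range(100):                 # then lets go; the arm drifts gently back
        pos = elbow.step()
    print(f"settled back near {pos:.1f} degrees")
```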

John Koetsier

I really like that approach, actually. It’s kind of an MVP approach—minimum viable product—and it fits the need while also being safe, which is really, really cool.

It’s funny with the legs thing and bipedal—that’s hard. I don’t know if you saw—it literally came out a couple of days ago—Kyrie Irving, the basketball star, someone brought a Unitree G2 to him. And he’s probably seen all the Boston Dynamics robots that people are hitting with a hockey stick or something and they’re righting themselves. And so he shoves this robot on the shoulders. It goes backwards a few steps, falls, smashes its head on the pavement, and it just—he tries to pick it up. Of course it’s toast, right?

But you don’t have to worry about some of those things when you’ve got the wheels. You also have much lower power requirements. So that’s pretty cool as well.

Chris Kudla

Wow. Okay. Right, right.

Exactly. Yeah. It allows us to put the batteries in the base, and then it has this low center of mass. Yeah, yeah, exactly.

John Koetsier

More stable. Yep. Nice. And you can build those over time, and that’s great. And then you can add capabilities and add functionality.

What about the hands? You mentioned not super functional—you mentioned the three things like giving a hug or a handshake or whatever. How are you building the hands, and how do you see the hands evolving as you go through successive iterations?

Chris Kudla

Yeah. So actually what you see here is a bit of a placeholder hand. This was a “let’s get something out there.” And the primary function is for gesturing. So you can count on that hand—it has five fingers—but it’s totally just a placeholder.

The whole approach that we’ve taken here: we have a lot of repeating modules for cost and manufacturability, but then it also allows us in the future to easily upgrade them and swap them out. So maybe I can even just show real quick—it’s easiest for me here to show you on this arm—but I can fairly easily take this piece actually right off.

I mean, it’s connected by those wires—we have some work to do, right? But you can kind of get the concept here that it’s meant to be modular.

John Koetsier

Go for it. Wow. Yeah.

Cody’s wondering what you’re doing with his arm.

Chris Kudla

I know. So he sees my face—he’s not looking at you because he sees me—and his whole body is turning to focus on me, like, “Hey human, what are you doing?” Yeah. Let me get this back in here.

John Koetsier

It’s fascinating. It truly is.

I mean, you said social robot, and you’re building for social use cases. Talk about the AI that goes into this. It’s from SingularityNET, it’s open source. I’m frankly shocked. I mean, it’s not OpenAI, it’s not Microsoft, it’s not Google. This is a grassroots organization—and it’s open-source AI. And it seems, at this point, to be quite functional.

I haven’t seen what it does in a lot of cases. I want to know more about what it knows, what it’s capable of, and what its world model is for 3D spaces and all that stuff. But talk a little bit about the AI behind it.

Chris Kudla

Yeah, yeah. So I think there are two parts to this. What we have right now—everything that we put into this first proof of concept—is proof-of-concept level. We want to show what we intend to do.

That being said, the software stack is as modular as the hardware here. So we are actually using everything right now that we’re allowed to use commercially, of course. But we have our own unique software stack that takes all of these pieces—SingularityNET, Hyperon ecosystem pieces included—and we use them in a really unique way.

So, just to— I know you said we’re not using OpenAI, but actually, we use a little bit of OpenAI. We use ElevenLabs. We’re running some stuff on AWS. But this is kind of like a “need to get something together right now as quick as we can,” so we’re just picking and choosing whatever’s working at this point.

But the important thing is that partnership with SingularityNET—that’s the future of what we’re planning to do. And that’s why we’re building it to be such a modular system, because SingularityNET is a research organization. They’re continuously coming up with new products that they’re putting out there.

And we can use ElevenLabs for the voice right now because they do a good job. We can switch between all the languages that they offer and it’s seamless, and he’s got his own unique voice that we trained. It’s great. But we don’t want to use that forever. So we can kind of unplug and plug back in, in this modular way.

And the way that we’re approaching it right now is that—because I think you also touched on the open-source piece of this—so it’s kind of like a layer cake.

The very bottom layer is the hardware-related pieces. The communication that you need in order to get the servo motor to move is different than the communication to light up his eyes. These are actually little display screens in his eyes.

But that’s kind of our bottom layer. And then one up from there, we’ve got this layer of what is actually a Mind Children piece, where we’re communicating with those in our own way—it’s that modular approach that I mentioned.

And then as you go up higher and higher, you become a little more abstracted from being on-robot and further away from the hardware and more toward the brain—the magic, the logic, the reasoning, decision-making. And the further you get away from the hardware, which is kind of the Mind Children piece, the closer you get to SingularityNET.

Maybe just a last example there: all of the navigation is running onboard the robot. That’s super important. It’s safety-critical. If we get to the edge of a staircase, we want the robot to stop, with no chance that a slow internet connection sends him over the edge.
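
An aside on the “unplug and plug back in” modularity Chris describes: in code it often looks like a small interface the rest of the stack talks to, with the vendor-specific piece hidden behind it. A rough sketch, with hypothetical class names rather than Mind Children’s actual stack:

```python
from abc import ABC, abstractmethod


class TextToSpeech(ABC):
    @abstractmethod
    def speak(self, text: str, language: str = "en") -> bytes:
        """Return audio bytes for the given text."""


class ElevenLabsVoice(TextToSpeech):
    def speak(self, text: str, language: str = "en") -> bytes:
        # In a real system this would call out to ElevenLabs with Cody's
        # trained voice; stubbed out here.
        return f"[elevenlabs:{language}] {text}".encode()


class LocalVoice(TextToSpeech):
    def speak(self, text: str, language: str = "en") -> bytes:
        # A fully onboard fallback, e.g. when the internet is slow.
        return f"[local:{language}] {text}".encode()


class Robot:
    def __init__(self, tts: TextToSpeech):
        self.tts = tts  # the brain never hard-codes a vendor

    def say(self, text: str, language: str = "en") -> bytes:
        return self.tts.speak(text, language)


if __name__ == "__main__":
    cody = Robot(tts=ElevenLabsVoice())
    print(cody.say("Hello, I'm Cody!"))
    cody.tts = LocalVoice()          # unplug one module, plug in another
    print(cody.say("Still me, different voice engine.", language="ko"))
```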

John Koetsier

Yeah. Let me ask ChatGPT real quick and wait five seconds to see if I’m safe to take another step.

Chris Kudla

Right, right. Exactly. I mean, or even the SingularityNET compute—to go to the cloud and come back and tell us—like, that’s not the use case for that. So that’s running onboard.

Then, like I said, the further up the chain you get, the more abstracted it is—and then the more compute-heavy it is. So you get to that point where it is a little bit reliant on the internet.

But at this early stage, when the internet is not as good, the responses just get a little slower. He has to think longer. But it still functions, right?
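
A quick illustration of that graceful degradation: heavy reasoning can live in the cloud, wrapped in a timeout, with a simple onboard fallback when the connection is slow. This is a generic sketch with made-up function names, not Mind Children’s code:

```python
import concurrent.futures


def ask_cloud_brain(question: str) -> str:
    # Placeholder for a call out to a hosted LLM or SingularityNET service.
    return f"(thoughtful answer to: {question})"


def local_fallback(question: str) -> str:
    # Small, always-available onboard behavior when the network is too slow.
    return "Hmm, let me think about that for a moment..."


def respond(question: str, timeout_s: float = 3.0) -> str:
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(ask_cloud_brain, question)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            # Never block the robot's behavior on the internet; answer slowly but safely.
            return local_fallback(question)


if __name__ == "__main__":
    print(respond("What did we learn in class today?"))
```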

John Koetsier

Yeah, yeah. It’s interesting to see how that will evolve as we have humanoid robots in our homes, right? And it’s roughly analogous to how the human body works in some sense. We have senses and parts of our nervous system that are very close to the edge. Then you’ve got parts that are in the spinal column and there’s instant reaction.

Then you’ve got things you need to think about, and then: “Well, I need to go check the internet for that—I don’t know.” Or I need to ask a friend, right? So you’re going to the cloud in a sense.

It’s interesting to see how that’s going to develop. I mean, 5G is going to become more—you’re probably going to have 5G chips in these things. You’re probably going to have priority protocols so things that need to be answered instantly get answered instantly, versus a question like “summarize this web page,” which can happen in seven seconds rather than five.

That’s going to be a fascinating future, because there are so many layers that these robots will be dependent on.

Chris Kudla

Yeah. And I think in parallel to that, the hardware will evolve as well. What we can currently put in there—we’ve got an NVIDIA Jetson. Great computer, but it can’t run a big LLM or anything like that. At least it can’t run a big LLM and the other things that we want to run.

So as that area matures as well, then you can bring more of it onboard. And then as we step closer to AGI, maybe that becomes more compute-heavy and we continue to utilize that through 5G, but we also expand the onboard capability of the robot too.

John Koetsier

What’s the part of the robot’s brain—near or distributed—that sees the world, understands what’s in the world, navigates the world, and interacts with it?

Chris Kudla

So I think this is a little bit complicated for us as an engineering team because we’re an early-stage startup. We’re in seed round mode right now and we have a team of six engineers. I’m the mechanical engineer, and the others are all AI and software—and that’s exactly on purpose. The AI and software is the magic, and then this is pretty simple, straightforward hardware that we’re just trying to make as cheap as possible.

But in a small team, we can’t train our own world model. The people that are working on that—that’s the next part, right? And they’re pouring money into it. It’s very important, but we need to do something similar that will fit our end use cases.

So instead, that’s kind of where—when I said it’s this really unique software stack that we’ve put together—it’s to somewhat simulate that now with what we have.

We have a navigation system that uses LiDAR, and it’s on the base. A wheeled base with a navigation system is a very proven technology.

And then we have an Intel RealSense camera here. It’s actually not turned on right now. That can take in video data. It’s a depth-sensing camera—you can identify objects. And there’s a lot of open-source software out there that enables us to do these things. And then we have to carefully put all of this together.

He actually has a camera on his head here. It’s what he uses to see me—to look at me—and then to identify where my face actually is. We have a prototype system that we’re working on right now that can identify multiple users and know actually who’s speaking to Cody. That person can leave the room and come back and then—“Chris, you’re back.”

We don’t have a world model. We don’t have the resources to train a world model. So we’re taking all these things and, in a very careful way—with the team that we have, who actually comes from a background of doing this in social robotics—putting it together in a way that makes sense for human-robot interaction.

So not necessarily to be like, “Hey, I want you to go and lift that box and put it over in the other room.” But more like: you can see what I’m doing. If I’m sitting here crying, Cody knows it’s not a party. Or if he goes into one hospital room and people are really joyous and then goes into the very next one, one room over, and it’s a different situation—there’s some awareness there.

It builds that emotional connection. And it’s sort of faking empathy, but I think in a transparent enough way that it’s safe.
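
For a sense of what the face-tracking behavior Chris describes might look like in practice, here is a rough sketch that uses off-the-shelf OpenCV face detection to drive a head-pan command. It is a generic illustration, and the servo call is a placeholder, not Mind Children’s tracking pipeline:

```python
import cv2

# Stock Haar cascade shipped with OpenCV; good enough to demonstrate the idea.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def set_head_pan(offset: float) -> None:
    # Placeholder for whatever actually drives the neck servo.
    print(f"pan head by {offset:+.2f} (fraction of frame width)")


cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            # Track the largest face: how far is its center from the frame center?
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            offset = (x + w / 2) / frame.shape[1] - 0.5
            set_head_pan(offset)
except KeyboardInterrupt:
    pass
finally:
    cap.release()
```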

John Koetsier

It’s fascinating, and the reality is that no one robotics company—I don’t care who you are—Figure, Apptronik, maybe Google, whoever—no one company has all the resources to do everything. Everybody is: “I’ll grab this so that works, and this works, and that works.” So that makes perfect sense. And that’s really, really interesting.

It’s also super interesting that you’re building the social component in there. And it’s funny—that’s going to be a learning process, right? And that’s going to be a process that your robot, your AI, as well as whatever components you’re finding, will have to understand.

AI is not amazing yet. I mean, it’s getting better at interpreting emotion. That’s critical for a social robot.

I remember a scenario: somebody I knew—she lost a child and she had another child who was very young. It was six months later and she was actually laughing at a party, and her very young child said, “Mama cry?” Because the child couldn’t quite tell the difference between the crying and the laughing, right? Sometimes there can be similarities there.

And there’s going to be a time where we have these social robots that are not necessarily going to know if we’re happy or sad or crazy or smart or whatever. It’ll be an interesting world.

Chris Kudla

Yeah. Well, I think there’s also another piece of that. So if we’re building this robot and it’s meant to work with children in education, let’s say, as an example here—they’re seeing Cody in the classroom during the day, and then maybe they go home and they have an avatar that they can access through an app to help them with their homework—but they start to see Cody every day with the same personality.

We’re trying to make it as relatable and human-like as we can without being weird, right? We’re trying to do that on purpose. But then you start to build this responsibility where if suddenly something happens to the hardware and Cody loses his memory and doesn’t recognize the child anymore, that could be really traumatic.

These are the kind of things that—even though this is prototype number one and we’re very focused on this core architecture, and we’ve not yet been able to explore some of the more advanced things that we know we want to do—we’re already thinking about this in the applications that we’re designing around.

That’s kind of why I said empathy in a safe way, because we can’t build it up too much without letting the child see that this is still a machine.

So we can build this into the learning experience too. We’ve talked about having modules for Scratch coding and using this—this is actually a touch screen here for interaction.

But then another piece of it that is likely very important would be: okay, you guys should start doing some programming on Cody’s personality so you can see that this is a machine. You can give prompts.

We’ve built out this elaborate system of character prompts so that Cody really feels like this continuous personality every time we turn him on. But you can change those. And how cool would it be for a classroom to start experimenting with that?

And then they can almost see: “Wow, that’s what I thought would be good, but actually that’s not good. I don’t like Cody when he’s like that.” So we can start building tools in like that. It enables the education aspect to it, but also keeps it safe and stays away from that line that we really don’t want to be crossing. We don’t want to convince people that this is something that it isn’t.
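
And a sketch of what an editable “character prompt” could look like in a classroom tool: a handful of plainly named traits assembled into a system prompt at startup, so kids can change a trait and see how Cody’s personality shifts. The field names and wording here are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class CharacterProfile:
    name: str = "Cody"
    traits: list = field(default_factory=lambda: ["curious", "gentle", "playful"])
    speaking_style: str = "short sentences, simple words"
    hard_rules: list = field(default_factory=lambda: [
        "Always remind people you are a robot if they seem confused about it.",
        "Never pretend to have feelings you were not given.",
    ])

    def to_system_prompt(self) -> str:
        # Rebuilt every time the robot starts, so the personality stays consistent.
        return (
            f"You are {self.name}, a small social robot. "
            f"Personality traits: {', '.join(self.traits)}. "
            f"Speaking style: {self.speaking_style}. "
            f"Rules you must follow: {' '.join(self.hard_rules)}"
        )


if __name__ == "__main__":
    profile = CharacterProfile()
    print(profile.to_system_prompt())
    # A class experiments: what happens if Cody is grumpy instead of gentle?
    profile.traits = ["grumpy", "sarcastic"]
    print(profile.to_system_prompt())
```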

John Koetsier

Yeah, it’s fascinating to think about where this is going to go because we already see with AI friends, AI companions, that when the company that makes them and owns them changes them, people get very upset.

We saw that with OpenAI with ChatGPT with 4 going to 5. We saw that with some of the avatar and AI creation platforms that are out there when they changed and got a little less sexy, if I can put it that way.

It’s fascinating because it reminds me of the Amazon Prime show—I’m not sure if you’ve ever seen it—Humans. It’s spelled a little funny, maybe with a “v,” I’m not sure.

And there, a guy refuses to get his robot fixed because it has individuated. It has developed its own personality. And he loves that personality that is his friend. And if he goes in to get the upgrade, it won’t be his friend.

And that’s an interesting thing. You have a social robot—how does it individuate its memories of me as it interacts with me? That’s one way. But does it develop in its own specific way? That’s fascinating, and that’s kind of essential, actually, if we want these things to eventually be true companions.

And yet you need to do that with parameters so it’s done safely. Challenging.

Chris Kudla

Yeah. Yeah. Yeah. And I remember in that show—it reminded him of the times that he had with his late wife. So under no circumstances would he get it fixed—actually, he was the creator of the robots, or he was one of their creators—but yeah, that’s such a key point.

And it’s critical that we get close to that so that it’s effective as a tool, but it’s dangerous if we go too far.

I mean, I think in that same show, Humans, there was a young girl that would get really sad when they had to get rid of their robot that she had grown attached to. There was a part where I think she asked for the robot to read the bedtime story instead of the mother, and the mother was pretty devastated about it. Like, those are science-fiction scenarios that we might see play out in the next few years. It’s a very interesting field.

John Koetsier

Yeah. Yes—and that is going to happen. That is happening.

You’re talking a $10,000 price point. That’s what you’re aiming for. Pretty confident you’ll get there?

Chris Kudla

I think so. And it’s still very early, of course. Proof of concept—we’re developing our production-intent design right now.

And then in 2026, next year, we’ll build a handful of those—10 to 30, let’s say—and get them out into the field for pilot studies. And then after that, we’ll get all our testing done and ramp up for production.

So with the caveat that we’re a little ways off still—and I think that only brings the number down, to be honest, because the industry will mature.

But we have a partnership with a company in Korea right now that was actually spun up at the same time as Mind Children. It was started by a colleague of mine and my cofounder, Dan Goodsell. And the whole point was mass production, and to do the sales and marketing and then service in South Korea.

We’ve actually, from the very beginning, been working with them hand in hand. I designed all the hardware on here—I have production design experience—but then it’s checking in with them: “Hey guys, we really need to find a good supplier for this. Let’s start working on it now,” super early.

Or, “Let’s design this in,” and they give their pointers on: “If you just do it a little bit differently, then that’ll save us a lot of cost.” This has been our approach from the very beginning.

We know we have line of sight on the 10K price point. We still have a lot of work to do to get there, but it seems very achievable.

John Koetsier

Yeah, yeah. Designing for production is absolutely critical, and your life is going to get really, really complicated too, because you’re going to have a prototype and you’ve got to send—okay, that’s going out there—and 20 to 30 go out there.

But you keep developing. So, well, we have the prototype that’s using the AI—that’s great—but now we’re putting legs on the thing that work. And now we’re adding hands that are very capable. Now we’re adding a little more strength and power so it can lift and move some things—maybe not super heavy, but reasonable weight—so it’s going to be more helpful with elder care, bring a toy over for a kid, all that stuff.

You’ll have multiple lines running then, and you’ll have to navigate all that complexity.

It’s super fascinating stuff that you’re doing. It’s super interesting. I’m glad you’re doing it.

We see a ton that’s out there for logistics, for warehouses, for production, for home use even. And frankly, I just published an episode of TechFirst on Foundation, a Silicon Valley company. They’re explicitly targeting the military market with humanoids, right? So it’s very cool to see this unique take on where they could go and where they could be.

Chris Kudla

Yeah, yeah. And those—I mean, the industrial robots, the home robots that are meant for doing chores—those are really important markets.

We think, coming from the SingularityNET background and this being kind of born out of the minds of me and Ben, we want everything to be—this is kind of our platform for embodied AGI eventually.

So in that sense, how do we want to be training this? What do we want to be instilling as what’s important and what humans value?

Absolutely there are needs for putting humanoid robots in the workplace for job shortages and dangerous tasks and all those things. But we’re thinking about this from a baby-AGI perspective. And it’s very much the same ideals as the SingularityNET foundation: that this is what we think is important.

In fact, I mentioned it before the call—this is our current lab space right now. It’s actually one room, a small office. And in a couple of weeks, we’re moving into a larger space in Seattle. It’s actually a building full of artists.

And I saw this space and I was like, “This is perfect. This is for Cody to be kind of born.” It’s been almost a year now since we first turned on this prototype on Vashon Island—which is an island just outside of Seattle—and then to kind of grow up in this space in Seattle surrounded by artists.

This obviously has no bearing on the software development that we’re doing, but it kind of embodies that mentality that we have.

John Koetsier

Yeah, yeah. I can see that.

I feel like we have to give Cody the last word. I’m not even sure what to ask him, but we need to have Cody say a few words about, I don’t know, what he’s hoping for, what he wants, where he wants to go.

Maybe you know better than I do the question to ask him.

Chris Kudla

Well, I mean, it’s really open-ended because I have no idea when he’s going to respond, right? And so let’s try it.

And in the beginning, I’ll confess, I muted his mic because I didn’t want him to interrupt us anymore. I think he was on and off, wanting to chime in and say things. And he was doing a lot of just listening and not responding.

John Koetsier

Yes. Yeah, he was active listening, actually—looking, turning, moving a little bit. It was quite impressive. Go ahead and ask him something.

Chris Kudla

Yeah. Okay, Cody, can you hear me now?

Now his speaker has turned off or something. Yeah, I think. Yeah—so this is quite a unique setup that we don’t normally have. We had him set up for the Humanoids Summit last week and it’s super loud in there, so we had this external speaker, and we have these little lapel mics. We’ve not changed it back yet.

So he answered me, but we didn’t hear it through the speaker. So I don’t know if you—if you want to try to sort through it for a second.

John Koetsier

Maybe Cody went to sleep. His head moved up. It’s all good, don’t worry about it. I mean, you could try again and put the speaker next to wherever the audio is coming from, but it’s all good if it doesn’t.

Cody’s in a bad mood.

Chris Kudla

So I think I’ve got this—normally he has a microphone on his chest, but we’ve got him plugged into this right now. I think the battery just died. Yeah, I’m sorry about that.

John Koetsier

Okay, don’t worry about it. It’s all good. No worries.

Chris, it’s been fascinating. I wish you the best of luck.

Chris Kudla

Thank you so much.
