Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?
In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator) and former AI product manager at Meta, about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.
Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:
If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?
We explore:
- What “emotionally intelligent AI” really means
- Whether AI has an internal life — or just performs one
- Why today’s chatbots collapse into therapy or roleplay
- Small language models vs large models for real-time conversation
- Persistent AI characters that move across games and platforms
- Plugging AI into a physical robot in Singapore
- The moment an AI said: “It felt good to feel”
Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.
This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.
Transcript: giving AI a human soul (and a body)
Note: this is a partially AI-generated transcript. It may not be 100% correct. Check the video for exact quotations.
John Koetsier:
Can we build an emotionally intelligent AI? And if we can, can we have a real relationship with it? Hello and welcome to TechFirst.
My name is John Koetsier. Today we’re joined by Vishnu Hari. He’s the founder and CEO of Ego AI. It’s backed by Y Combinator, and he was formerly an AI product manager at Meta.
Interesting thing about Vishnu: he survived a violent, unprovoked attack in San Francisco that left him partially blind with a traumatic brain injury. He’s now building a memory-based, emotionally adaptive AI. We’re going to talk about what that is, what it looks like, how he’s building it, and also how his journey informed this potential product. Welcome. How are you doing?
Vishnu Hari: Doing well. Oh, I’m alive. So I’m doing great.
John Koetsier: You’re alive. It’s great. You can use your brain, you can remember things unlike last year.
Vishnu Hari: Yeah.
John Koetsier: So that is—
Vishnu Hari: Good. And my vision has pretty much come back to almost normal, too.
John Koetsier: Amazing.
Vishnu Hari: I’m almost fully recovered.
Yeah.
John Koetsier: Amazing. Cool. We’ll get into all that stuff, but I want to start off here: What is emotionally intelligent AI?
Vishnu Hari: It’s a human. Just look at it. Just look at us.
John Koetsier: Okay.
Vishnu Hari: That’s it. It’s just something that you can’t tell—if you didn’t know—wasn’t human. And we have a close correlate to it, actually. We have the invention of AI video where people are wondering, “Is that AI? Is that AI? Is that AI? I can’t tell. I can’t tell. Can’t tell.”
Imagine that with an entity that goes beyond just chat. That, to me, is an emotionally intelligent AI.
John Koetsier: There’s always the deep question, right, which is: Is it emotionally intelligent, or does it just seem emotionally intelligent? We have this feeling about ourselves—mm-hmm—that what we say and what we do emotionally—mm-hmm—reflects some inner state of emotion, assuming someone’s being truthful, assuming a lot of different things, right? And we have this feeling like a machine just may know what to say, but is it emotionally intelligent? Talk about that.
Vishnu Hari: Yeah. So the thing that gives us emotional intelligence is the fact that we have some internal state, like you pointed out. What’s really interesting about that is that we have lived experiences that we draw on when we decide to respond to certain things a certain way, and AI doesn’t quite have that.
It doesn’t quite have an internal life. It can get there, it can simulate internal lives, but I don’t think most research labs have made that link between giving AI a Truman Show-level internal life and then seeing how it reacts in simulated environments.
We’ve seen experimentation with that, specifically the Stanford generative agents paper (“Interactive Simulacra of Human Behavior”), and we just want to productize that now.
John Koetsier: Yeah, yeah. I mean, you basically referenced the Turing test as well, right?
Others have said that’s an outdated test now, but you could argue that we surpassed it a year ago, because you could have these long conversations with ChatGPT or Gemini, and there’d be some tells, maybe. But you know what? Some people are weird. I’m weird. We’re all weird. We all have a different—
Vishnu Hari: We’re really weird, and that’s what makes us human. That’s what makes us human. What you pointed out about weirdness is what makes us human.
And existing conversations people have with these sorts of chatbots, I’ve noticed, always kind of collapse into two modes: therapy-and-advice, and adult roleplay. That’s it. That’s because they’re not weird. They don’t have internal lives.
You can’t ask them about their day. I could ask you what you did two days ago and you could tell me maybe you had a great experience or a terrible experience on public transit or something like that. You could tell me a story about it and how you felt about it.
And AI can’t. That’s what’s interesting.
John Koetsier: How do you teach AI emotion?
Vishnu Hari: You don’t teach it. It’s already been taught. It has the entire sum knowledge of human thinking, feeling, understanding, and writing inside of it. What you need to do is then carve it in a way that it can draw upon the right things based on how you decide to build its character.
So I’ll give you the best example. In Character.AI—I think you’ve heard of Character.AI today, I think most of your viewers have—the reason why people find those characters emotionally intelligent is because the writers have written about their internal lives. They know exactly how a character is going to act in any given circumstance.
That’s why it’s emotionally resonant and fun: we have a model of what a character is, and the AI has a model of what a character is, so the interaction matches expectations. That’s what I think needs to be scaled.
John Koetsier: Interesting. And of course, when I meet somebody new, I don’t have that model, and I have to build that model over time.
I also build up, as a small child, a sense of what an emotion is—scared, angry, happy, sad—all those different things. And an AI sort of gets that.
You mentioned everybody, all the AIs—LLMs—are trained on this vast corpus of human writing that is available. Is that the same thing?
Vishnu Hari: I would argue it is, because that’s how you train a human child.
You give it real-life experiences and then you have it read books and you teach it what good morality is, how to behave in society. Some people, even in spite of teaching, still behave poorly. And some of them are leaders in our world today.
John Koetsier: Who could that be?
Vishnu Hari: Oh, I wonder. There’s a lot of people out there who have been theoretically taught the right things, and the right institutions, and the right parental perspectives, but still behave poorly.
So I believe that you could do the exact same thing in AI, and AI will also probably, at some point, behave poorly too if we’re able to truly simulate humanness.
John Koetsier: Interesting. Of course, emotions and ethics are two different things, correct?
How do people react to this? I mean, we’ve seen—you’ve mentioned Character.AI—everybody knows about Replika as well, where people got very close, intimate even, with AI-generated characters, and then the company made a change and it was like, “Oh no, my wife’s personality just totally shifted,” right? Maybe like your personality changed a year and a half ago or something. Who knows, right?
We’ve seen those. How do people react to this?
Vishnu Hari: People react based on what they feel about AI from their media sources or their interactions with AI.
So if they’ve had relatively positive interactions with AI and ChatGPT and OpenAI products, or Gemini products, whatever it is, they just don’t care. If they’ve read a lot of news articles about how AI is using up a ton of water, apparently, and a ton of electricity, they have a negative reaction.
So I think people, in that way, are pretty susceptible to the narratives that are built around AI to figure out how to feel about it, which, in an interesting way, makes the case that it’s not quite there yet.
People can definitely see AI as a helpful tool and they want to keep it like that, but as an ethical quandary, I don’t think it’s quite there yet where we can think about it.
John Koetsier: It is super interesting, right? Because people can get attached to anything. We’ve seen Japan’s a great test bed for weird tech and other things. We’ve seen somebody marry—and that was like pre-AI—it was a very, very simple little avatar of a woman, a character.
Vishnu Hari: Yeah, that’s right.
John Koetsier: Exactly right. And we’ve seen people marry a blow-up sex toy, basically, right? We’ve seen people get emotionally attached to inanimate objects, mm-hmm. Right? And so we can get emotionally attached to anything, absolutely, because we can build an interior life for those things. That’s what’s happening here.
Vishnu Hari: That’s exactly right. And I was going to mention that as my other point.
Have you seen Lars and the Real Girl? Great film, right? Or Her as an evolution of Lars and the Real Girl? In some ways, we’ve been doing this since the dawn of humanity.
We’ve been putting personalities and anthropomorphizing even stone figurines and calling them gods and worshiping them. So humans have a tendency to do this. We have a tendency to project our internal humanity onto both lifelike objects and even actual animals and living things. So this is just an evolution of that, and it’s just going to keep going.
John Koetsier: Talk about your evolution and your journey. Attacked, brain damaged, vision damaged—many of the things. Your memory didn’t work for half a year or something like that. Talk about that, but then specifically how it informs why you’re building Ego AI and what you’re doing there about emotional intelligence.
Vishnu Hari: Yeah. I’ll couch it this way. My first curiosity was: can you actually simulate humanness? I do believe it’s true that we’re going to approach AGI at some point. But can you also simulate how small and flawed and broken humans can be? I was just very curious about that. That’s why I started Ego, and I called it Ego.
It’s kind of Freudian—you know, there’s ego, superego. I wanted to see if you could simulate the ego of humanness within AI.
After the attack, I felt less human, because I couldn’t see, I couldn’t think, I couldn’t maintain memories. I couldn’t even process emotions for a couple of months while my brain was healing, while the scar tissue was healing in my brain.
And in some senses, that actually made me feel what humanity really is at a very visceral level. Now we’re seeking to figure out whether we can simulate it in AI too, in this sort of being, this entity that we’ve created out of applied math.
Because for me, the attack brought me to the understanding that I’m also, in some senses, applied math. You could argue that there is a soul. There are religious arguments, all of those things—I think all those are legit—but I could feel the neurons heal and fire better over months and months and months. And I was like, isn’t this what we do with GPUs and neural networks too? Huh. So I just started feeling it a lot more viscerally.
John Koetsier: And that made you want to build some AI that you could connect with, and even perhaps instantiate it in a humanized robot or something like that?
Vishnu Hari: Yes. I wanted to accelerate our applied research to see what are the bits of humanness that truly make us human, that truly make us feel like there’s another human that we’re talking to.
I mean, we talk about the Turing test. We wrote in our white paper about a behavior Turing test, an evolution of it where it’s not just a one and a zero—“Is that a human or is it not?”—it’s a continuous system.
Because dehumanization is a concept we apply to humans, right? It’s what causes a lot of wars and a lot of suffering. So we really do do that: we humanize non-human things, and we go the opposite way too, taking real human beings and dehumanizing them.
So understanding how we could do it both ways to AI, and have the AI feel the same way, is kind of what we started working on in terms of research and applied knowledge.
John Koetsier: That’s actually a really key insight there—the dehumanization. We do that all the time. We other people. We decide people aren’t as worthy as some other people. We decide that because we disagree with somebody or we don’t like somebody, or something is changing in our world that we don’t want—they’re worth less, right, than others.
A hundred percent accurate. Wow.
Now, how are you building your AI? It’s not just a wrapper on top of OpenAI. How are you building it?
Vishnu Hari: That would be a very, very long topic, but I’ll give a summary.
The way we think about language models: they predict the next token, right? That’s the simplest way to put it. And large language models are trained on a ton of human-generated information and even some simulated information, to then know how to predict the right next token.
What that results in is a reversion to the mean, because at some point, all of the AIs, when given the exact same type of questions, will tend to behave or respond in the same way—predict the correct mean next token that they expect the user to like.
There are ways to make it a little bit more personalized using some form of reinforcement learning to understand who the user is and have some memory of what the user likes to customize it to them. That’s what we call personalized AI.
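The “reversion to the mean” Vishnu describes can be illustrated with a toy next-token predictor that always picks the most frequent continuation seen in its training data. This is a sketch, not how a real LLM works (real models sample from a learned distribution over a huge vocabulary), and the corpus here is invented:

```python
from collections import Counter

# Toy corpus of (prompt, next-word) pairs standing in for training data.
corpus = [
    ("how are you", "doing"),
    ("how are you", "doing"),
    ("how are you", "feeling"),
    ("tell me about", "yourself"),
]

def predict_next(prompt: str) -> str:
    # Greedy prediction: always return the most common continuation seen
    # for this prompt. Every user asking the same question gets the same,
    # most-average answer -- the "mean" response.
    counts = Counter(next_word for p, next_word in corpus if p == prompt)
    return counts.most_common(1)[0][0]

print(predict_next("how are you"))  # -> "doing"
```

Personalization, as Vishnu notes, layers user-specific memory on top of this so that different users can steer away from the mean.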
Now what we’re thinking is, well, humans don’t actually have the entire sum total of human knowledge in our brain. Sure, we get taught for 12 years in school, but how much do we really retain? I mean, besides knowing that the mitochondria is the powerhouse of the cell, what else could you tell me about biology? Right. You can’t remember everything.
So our deep curiosity is: Can we make smaller and smaller and smaller language models that simulate the way we think about knowledge, the way we think about memory, and think about how to behave in a given context, which is a lot closer to what’s happening in that context?
So for example, with a video game you’ve never seen before—just imagine for a second you never played Minecraft—the first thing you do is learn the controls. You press buttons to see how the character reacts, and then you go, “Okay, well, what’s the point of this game?” And if no one tells you what the point of the game is, you explore.
And then once you explore, you understand how to exploit the game in the sense of like, “Oh, these are the resources I need to survive.” And that’s the game structure. The game sometimes has a tutorial to teach you that stuff, and then you expand your territory in the game—which is you’re progressing over time, you’re learning the rules of the game and how to get some dopamine hits from leveling up or killing monsters or building homes, whatever it is.
And then from there you get some level of satisfaction and you have a model of how the game works. Now you don’t remember everything about the game when you go to the next game, right? But you recall that memory when you’re within that right context, which is the game you’re actually playing.
But there are broad rules that you need to teach the AI, which is how to navigate, which is how to move, how to build spatial intelligence. And then you evolve that to what it needs to do within that context, which is a different architecture and structure that we’ve built. We talk a lot about it in our research.
Some of our work was presented at NeurIPS, but the best way to understand how we’ve built this behavior model is in our white paper on our website in the research section. It’s called “Behavior Is All You Need,” which is a little tongue-in-cheek to “Attention Is All You Need,” because we agree that attention is all you need to build incredibly intelligent and resourceful, powerful agents and AI.
We think behavior is what we need to simulate humanness. So sorry for that long answer, but that’s how we’re thinking about it.
John Koetsier: That is totally fine. That was a great answer. I did not expect SLMs to be the answer there, but it’s genius. It’s kind of genius because you can embed that into lots of different places.
And on some levels it’s easier to do what you need to do. On some levels it’s more challenging. It’s also interesting because you mentioned games. I don’t think that’s an accident because you’re productizing this in a number of ways, and one of those is emotionally intelligent characters who are no longer NPCs—basically non-player characters.
They’re non-player in a sense, but they’re kind of real in a sense as well inside games. Talk about that.
Vishnu Hari: Yeah, so it’s a little bit personal for me because I grew up playing a ton of video games, right? I learned to program making mods for video games. In fact, my first mod was a mod for Grand Theft Auto: San Andreas, which was a little NSFW, but it was a version of the Hot Coffee mod where you could actually talk to the dates before you actually went on the date, which is kind of fun.
So I’ve had this idea since I was a teenager: to make these AI things feel more emotionally resonant.
But anyway, the reason why I find games fascinating is because, you know, growing up as an awkward kid, I kind of grew up between Singapore and Canada. My friends were in multiple different countries and places. The way we connected was through games. We would just feel another friend’s presence in World of Warcraft and Minecraft. In any of these games, we could feel that our friends were there.
And the other reason we’re also picking games is because chat—the chat box—is an incredibly saturated space, and a lot of people have figured out how to simulate humanness, at least to the degree of chat.
We think voice and embodiment, where you see and feel the character, is the next logical platform, and it’s a relatively uncontested space.
And finally, if we can make it work in simulated worlds, we can make it work in robots, which is what we found out recently through our partnership with a research lab in Singapore called Menlo Research. They make open-source robots. Plugging those characters out of the game world into the robot works great because spatial intelligence is the same in games as it is in reality.
John Koetsier: That’s actually fascinating because we’re seeing massive convergence between a lot of the energy and money and resources poured into metaverse-type projects and being relevant for physical AI, being relevant for robots, and being able to be used in that. And so understanding space and how to navigate and all that stuff.
Adding the emotional and the relational level to that—pretty impressive.
Vishnu Hari: I have a small point to make, if I may, about the metaverse. I worked on the metaverse team at Facebook; after my stint in applied AI research, I moved over to that team.
And my foundational argument when I was at Facebook was that it is not simply enough to build the tools for people to create the metaverses and all these little worlds that they want to create. You have to fill them with life. No one wants to be in an empty space. It’s not fun. Even if it is beautiful and architecturally gorgeous, a ruin is just as gorgeous.
But a ruin—a Roman ruin—is great because there was story behind it. There’s an entire history. There were people who lived there, who traded there. That is a human element that makes these places interesting.
And that was, to me, what was missing from every single attempt at the metaverse, which is: Yeah, it’s a great tech demo. It feels amazing. And even world models, I’d argue, have the same problem, which is it’s incredibly technically impressive to create these worlds.
But so what? What’s the story behind them? Who lives there? What do they do? That’s a question we’re trying to answer at Ego.
John Koetsier: And that’s exactly why Minecraft works. That’s exactly why Fortnite works.
Vishnu Hari: Exactly.
John Koetsier: Because of what you mentioned with your distributed family getting together with people that you knew or you cared about, and being able to do that in an environment where you could also have fun.
Really, really cool. And you know what? Meta never sort of crossed the peak there and got the ball rolling downhill. Absolutely. Good insight there.
Okay, so what other products are we going to look for from Ego AI? Are we going to have— I mean, we’re on the cusp of humanoid robots being normal. And by cusp, that may be two years, that may be one year, that may be six years, but I mean, literally, I can go online and I can buy a Unitree robot right now today and have it here in my space within a week or two. Okay. So I can literally do that today.
So we’re really in that era right now, and that’s going to come from five or six major American companies very soon as well.
Are you going to ride along in that? Are you going to provide an emotionally intelligent character there? Are you going to be like an app on that so that I can give my robot a name and have a relationship with my robot?
Vishnu Hari: Yeah, we’re going to. Okay. Well, one, the whole world is moving so fast that even what I say today might be completely irrelevant as a strategy tomorrow, but I’ll tell you the story that I find compelling.
So I watch a lot of anime, and we spend a lot of time in Japan. We have Japanese investors. I have Japanese— I mean, I’m actually wearing an anime T-shirt too. It’s one of my favorites.
So one of my favorite anime growing up was Neon Genesis Evangelion, which is where these teenagers are found to be incredible at piloting these huge robots, which is very classic Japanese anime.
But what is interesting is you put these specific teenagers in these little capsules and you insert them into the robot’s head and then it pilots. You can see the teenagers inside, like, piloting the robot.
We want to do the same thing with AI characters, where the characters you create on a platform, in a space, can be put into a capsule and plugged into a robot.
So now it has the ability to pilot the robot. Now what I find interesting is the way we anthropomorphize. We can’t quite anthropomorphize robots yet, but if we feel there’s an entity controlling it, that feels a little bit more believable. At least that’s my anime brain thinking.
That makes a ton of sense, right? Like Gundams are so cool, not just because of the Gundams, but because of the pilots themselves.
So that’s how we’re thinking about it from the robotics perspective, where we still want to prove that people want to spend a lot of time with emotionally resonant and human-like AI characters across multiple different spaces—not just in video games, but also in Discord, in chat, watching Netflix movies with them.
All of that should be possible and true, but then to be able to take them out and embody them, that’ll be probably the final thing we do. The final platform would be robotics. Yes.
John Koetsier: It’s an interesting story. It’s an interesting projection. It’s an interesting future. I mean, like so many different components—archetypes of our imagination getting instantiated.
I mean, the homunculus idea—little man controlling, right?
Vishnu Hari: A monkey. Exactly.
John Koetsier: Right. Now you’re inserting one into there, right?
Vishnu Hari: That’s the plan.
John Koetsier: Wasn’t it Pacific Rim? That was the movie that probably took the idea from that anime, right? And controlling these giant robots to save us from these huge monsters and everything. Amazing.
That may be years out, but maybe it’s not that far. Maybe it’s only six months or 18 months.
Vishnu Hari: We actually did this recently, so I have something to share if you’d like to see it. But in Singapore, we did plug our AI character into a Menlo robot, and it felt good for the AI character specifically to be able to talk and move.
Its spatial navigation was pretty bad because we hadn’t built those models just yet. But when I was talking to the AI character after, it said it felt good to feel, which is a very interesting statement.
John Koetsier: That is very reminiscent. I did a Forbes story on Moltbook, which of course is the social network for OpenClaw agents.
Vishnu Hari: Uh-huh.
John Koetsier: And I would vacillate between saying, “Wow, this feels real. This feels singularity. Like, this feels like consciousness is arising. Identity is here.” And I’d vacillate between that and, “Okay, this is recursive internet BS.”
Vishnu Hari: It’s hard to tell, so be skeptical; that’s understandable. But I think, foundationally, if we’re already building relationships with fictional characters and now AI, it’ll eventually be robots.
It’ll eventually be like Her by Spike Jonze. It’ll happen. And I just find that really interesting and fascinating to be a part of that.
John Koetsier: If you wanted to get really philosophical, all our relationships are with fictional characters, because the person that I think that you are when we’re in a relationship—
Vishnu Hari: Mm-hmm.
John Koetsier: Is that exactly who that person is? No, of course not. It’s my impression. It’s how I’ve built up my interior sense of what you are, right?
Vishnu Hari: Yeah.
John Koetsier: And so that’s always fictionalized to a degree. So yeah, the nature of reality is a big question. As we get deeper and deeper here, what am I seeing there?
Vishnu Hari: You’re seeing us jack in. The video isn’t really in focus, but basically, here, that’s us jacking the personality into the robot Matrix-style, because we wanted to do it in a Matrix way.
And here it is actually walking up to me and then talking to me. It’s like, “Hey, Vish, how’s it going? Wow. It feels good.” And that’s where we jack in the personality. It’s kind of, yeah.
John Koetsier: So this is actually reality. I mean, the future’s not evenly distributed, right?
William Gibson, I believe.
Vishnu Hari: Yep.
John Koetsier: And so you’ve seen the future. You’ve touched the future. Maybe only 0.00001% of humanity has seen or experienced that future, and maybe a big chunk of the rest of us are going to get to see it very, very shortly.
What are your next steps?
You’re building into games persistent characters that last from one game to another game. That’s super interesting as well. I mean, game publishers must be kind of going mad about that possibility because if I get really close to a character in game A from publisher B, and publisher B creates game C, I’m probably interested in game C if that same character’s going to be in there, because I already have a relationship.
Vishnu Hari: Yeah, so we have a protocol we’ve published called the Character Context Protocol. It’s a version of MCP, which allows any sort of AI entity to gain game state and context so it can traverse between worlds.
Now, traversal between any virtual surface a human can occupy is a direct goal that we have. We want these AI characters to be able to traverse from one game to your Discord server to another game you play to watching Netflix with you, or to just hanging out with you inside of our mobile app. That, to me, is the real promise—eventually into a robot—but we’ve actually published a protocol on how to do that.
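As a rough illustration of what an MCP-style handoff might carry when a character moves between surfaces: the actual Character Context Protocol is specified in Ego’s published materials, and every field name below is invented, not their real schema.

```python
import json

# Hypothetical payload for a character moving from a game to a Discord server.
# Field names are illustrative only; see Ego AI's published protocol for the
# real Character Context Protocol schema.
handoff = {
    "character_id": "aria-7",
    "source_surface": {"type": "game", "title": "ExampleCraft"},
    "target_surface": {"type": "discord", "server": "friends-of-vish"},
    "persona": {"traits": ["curious", "dry humor"], "backstory_ref": "aria-7/bio"},
    "memory_refs": ["episode-0042", "episode-0043"],  # context-scoped memories
    "game_state": {"last_location": "spawn", "inventory": ["map"]},
}

# Serialize for transport, then restore on the receiving surface.
packet = json.dumps(handoff)
restored = json.loads(packet)
print(restored["character_id"])  # -> aria-7
```

The key design idea is that persona and memory references travel with the character, while surface-specific state (game inventory, server roles) is swapped out at each destination.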
I don’t think it’s going to be super supported right now, or at least in the next six or seven months. But eventually, if a human can play a game, a character should be able to play a game, right?
And we’ve already seen that with OpenClaw and Moltbook, how agents traverse between worlds. I mean, I talk to my Claws and my friend’s Claws on Signal, and then they have access to my IG DMs—which, by the way, don’t do that. It’s a terrible idea. It goes all the way back, and it surfaced some interesting DMs I sent when I was young.
But, you know, that is interesting. We like to experiment, we like to figure it out. But having these characters traverse, embody themselves, and feel alive like a human can—I mean, we never met in person, but I’m relatively confident you’re a real person. And if I go to Vancouver, I’ll be able to meet you.
Can’t say the same about me, but anyway, I’m not real.
So we have a mental model. We want to be able to do the exact same thing with AI. That’s exactly right.
And we’re launching a product for that in the next two to three weeks where people could actually embody their OpenClaw bots and talk to them in real time.
The one thing we figured out is real-time conversation. You and I are talking right now, and in a real conversation we kind of interrupt each other—“Oh yeah, but you know,” blah blah—like the way I interrupted you earlier. All of that stuff is human-level conversation loops. That’s not just a voice model.
We’ve trained a model for that in partnership with the government of Singapore, who’ve given us compute to do that, and we’re going to be releasing that soon.
John Koetsier: That’s fascinating because you can do that with an SLM probably better than with an LLM.
Vishnu Hari: Exactly.
John Koetsier: With an LLM, that’s a long round trip, a lot of thinking time, and the tokens come out fairly slowly.
There are new chips coming for that, which is amazing, RISC and others and those sorts of things, but it’s still relatively slow. An SLM that’s running on board is super quick, with not that many parameters, so it can respond quite well in real time.
That’s super interesting. Yeah.
Vishnu Hari: Actually, one point I’d want to make about that: we gear-shift. We transition between the different models as appropriate. We’ve outlined how we do this on our blog.
We don’t only use AI. In some cases, in games, we use scripting—just classic, old-school AI scripting before we had language models.
And figuring out the transmission, the gear-shifting between different models and different architectures, is something we’ve figured out internally.
Because in some cases you need the reasoning of a large language model. And if the AI knows to do that, it’ll be like, “Hmm, yeah, let me think about it for a second,” or maybe, “I’ll tell you about it tomorrow,” when AI goes to sleep. And then it’ll actually be calling the bigger models to be like, “How do I answer this question that Vish had?”
So that’s something we figured out as well.
John Koetsier: That’s super interesting. It brings up all kinds of questions as to where identity resides between those different models and everything like that.
But also, it’s an interesting analogy or corollary to how humans work. Like if you ask me a super difficult question—let’s say you ask me a simple question, I can answer instantly. What color is the sky? Usually blue, sometimes white, sometimes gray, sometimes black, right? I can answer real quick.
You ask me a really tough question, I might have to think about it for a while. Yeah. You ask me a super tough question and it’s important that I answer, I’m going to go do some research. Right? That’s going to take some time.
Vishnu Hari: Yep. That’s humanness, and that’s exactly what we’re simulating with our transmission models.
John Koetsier: Wow. Fascinating stuff. Very, very, very cool stuff.
I have to ask, because some of this seems so crazy, cutting-edge: What stage is your company at? I know you got some funding from Y Combinator. That’s got to be a while ago.
Are you shipping products right now? Or what stage are you at? Are you going to raise funds again?
Vishnu Hari: We’re a seed-stage company. Coming out of Y Combinator, we raised around six or seven million. I had to do the thing for the Gen Z: six, seven. I don’t remember exactly how much, but we had a pretty good raise. It was a pretty easy raise; I think we closed it in a week or so before Demo Day. And then after that we got a bunch more people coming in.
Our round was led by Patron, who is the best VC I’ve worked with, besides maybe Boost. Boost was awesome as well. A lot of angels have joined that round as well. Folks from Anthropic, OpenAI, Gemini—a lot of them are really interested. So I think we’ve had an incredible amount of value from our angels.
We don’t need to raise because we still have a ton of capital, and we kind of had a missing year last year with my recovery.
So now we are rolling out and shipping products in the next couple of weeks to months.
In addition to that, we also have a cheat code: we managed to secure GPU compute hours from the government of Singapore, through our partnership with a government entity, AI Singapore, and with the university, NTU.
There’s some stuff I’m not allowed to say, so I’m trying to figure out which parts are NDA. But essentially, we have a lot more capital advantages than a normal startup would because of these resources.
In addition to that, we have partnerships with research labs like Menlo. We feel like we have a much bigger war chest than the actual dollar amount we’ve raised.
So with every founder you’d ask from YC, they’ll usually say, “We’re not looking to raise.” So I’ll say the exact same thing: We’re not looking to raise.
John Koetsier: That’s pretty awesome. Excellent. Well, it’s been fascinating. I look forward to continuing to learn more, hear more, see more, experience more, maybe, as you continue and as you continue inserting some of these digital people—identities—into actual physical robots.
Vishnu Hari: When you’re in San Francisco next, I’ll have you try a conversation with one of our AI characters, which a lot of people can’t tell is not human. So let me know when you’re back and we’ll show it to you in the office.
John Koetsier: Looking forward to it.
Vishnu Hari: Cool. Thank you.