Could you love an AI?
What does love with “digital humans” look like?
Is this the future of relationships?
In this TechFirst, we chat with Artem Rodichev, CEO of Ex-human and former head of AI at Replika.
We dive into the concept of forming relationships with AI companions and the future of love and friendship in a world integrating advanced artificial intelligence. The discussion covers Rodichev’s new startup, issues of loneliness and how digital humans might offer a solution, the intricacies and potential pitfalls of emotional AI interactions, and the evolving technology behind AI companions.
We also chat about some real-life stories, ethical concerns, and the emotional impacts of these digital relationships … including some of the most extremely negative realities.
Watch the show here, and subscribe to my YouTube channel …
You can also find TechFirst on all your favorite podcast channels …
Transcript: robot love … could you love an AI?
Note: this is an AI-generated and lightly AI-edited transcript. It may not be 100% accurate.
John Koetsier: Can you love an artificial intelligence?
Hello and welcome to TechFirst. My name is John Koetsier.
What does it mean to be in a relationship with a machine, and what’s the future of love and friendship in a world with AI? These are significant questions, as we’re building better and better smart systems that people are getting increasingly attached to.
Everyone knows the story of Replika, which makes AI companions. Today we’re chatting with the former head of AI at Replika, who’s now building his own startup Ex-human. His name is Artem Rodichev.
Welcome, Artem.
Artem Rodichev: Hi John, thank you for having me.
John Koetsier: Hey, super pumped to have this conversation. It is incredibly topical. Let’s start with the name of your startup, Ex-human. That’s quite a name.
Artem Rodichev: Yeah, so the main idea of Ex-human is to bring empathetic AI characters to life. We build a platform that allows people to create diverse characters, enabling unlimited scalability, so our users can chat with millions of characters, having fun and emotional conversations.

The main idea is to build a digital human that looks, speaks, and behaves just like a real human, or even better: a kind of perfect form of human.
John Koetsier: That’s interesting, because there are a lot of different types of humans. There are some very emotional, very expressive humans, and there are some very stoic humans.

Artem Rodichev: Yep, that’s true. We want to create very diverse digital characters and humans, to be like the perfect companions: to chat with you about your day, your life, your interests, to make you feel better, to make you feel good after these conversations.
John Koetsier: Well, I’m getting total echoes of some movies.

Is it Her? AI? There are a couple of different ones that are going on with this sort of thing. What does it mean to be in a relationship with an AI?
Artem Rodichev: So first of all, you mentioned the movie Her, and there was another, Blade Runner 2049.

John Koetsier: That was coming to mind.

Artem Rodichev: These movies had a significant influence on my life and on my interests. Back in 2013, while I was watching Her, I was captivated by Samantha, the AI companion, and wondered how empathetic AI characters like that could be built. And then there was Blade Runner 2049, with a character called Joi, who is kind of a hologram.
John Koetsier: Mm-Hmm.
Artem Rodichev: While I was watching these movies, I thought that empathetic characters are the future: that we organic humans, in the near future, will interact with all sorts of digital humans on a daily basis. We’ll play with them, chat with them, educate them, learn from them. All different aspects of our life will be covered by digital humans.
John Koetsier: It’s an interesting phrase, digital humans. How do you define a digital human?
Artem Rodichev: There is no strict definition of a digital human, but basically it’s a chatbot that can chat with you, that can behave like a real human whom you can see, hear, or chat with.
But the ultimate form of a digital human is an AI hologram, so that you can see this digital human just like you see your friends, your relatives, any organic humans, and you can interact with them. They can move across a space, interact with you, interact with your friends, and basically behave just like a real human, or a robot.

Robots are a kind of embodiment of digital humans, because we’re talking about brains, about a digital form. But of course you can bring this digital form to a robot, to a car, to some robot toy, to anywhere. Or you can see them through AR glasses and not need a physical embodiment at all.
John Koetsier: Okay, cool. We’re gonna get into all this stuff. We’re gonna talk about loneliness and what’s going on with largely men, but not only men, in that area. We’re gonna talk about what it means to create digital humans: consciousness, emotion, personality. We’re gonna talk about some of the challenges and problems there.

But also where you see the future going. Let’s dive in. One of the things that we chatted about just before hitting record is loneliness. There’s an epidemic of loneliness. What are AI humans, digital humans, going to solve there?
Artem Rodichev: First of all, you’re right that there is an epidemic of loneliness. More than 60% of Gen Z, of the younger generation in the United States, report feeling alone. And being lonely is a critical problem, not only for mental health, but also for physical health.

Being lonely is worse than smoking, because lonely people live 10 years less than people in couples, while smoking reduces your lifespan by eight years. So, don’t be lonely. Especially don’t be lonely …
John Koetsier: No,
Artem Rodichev: Exactly. So these kinds of characters, digital humans, could be great companions for people who feel lonely, because they never judge you, they’re available for you 24/7, and they can engage you in very fun conversations, can educate you, can chat about your day, your life, your interests, with the final goal of making you feel better.
John Koetsier: Mm-Hmm.
Artem Rodichev: So you can create any personalities, any backstories, any visual appearance for these digital humans, and interact with them. And that greatly improves people’s mental health and this feeling of loneliness. A lot of lonely people, especially Gen Z, start to chat with different chatbots, and they start to treat these chatbots as their friends, or as their lovers, as their kind of romantic partners, and they spend a lot of time chatting with them. And actually, the fact that there is something or someone who will listen to you, who never judges you, who chats with you about anything that’s on your mind, reduces this feeling of loneliness, because you feel some sense of companionship. Yeah.
John Koetsier: Yeah, I can see that. Cool. So we’re gonna talk about what’s needed in an AI to accomplish this, then we’ll talk about some of the downsides, challenges, and solutions for that.

Let’s dive into what you need for the AI first. A couple of things come to mind: consciousness, emotion, personality. I was chatting with a founder of an AI startup recently, and she’s working to build consciousness for AI. In other words, AI that knows it exists, that can be self-reflexive.

Is that essential? Does that matter? What do you think about the problem of consciousness for an AI?
Artem Rodichev: Actually, to build a smart speaking machine, you don’t need consciousness in this machine. Basically, to create this kind of experience, you need to understand the main patterns of what people want to discuss.

How do they chat in general? What is the chatting pattern of people?
John Koetsier: Mm-Hmm.
Artem Rodichev: How do you make this experience more emotional, more immersive, adding different modalities? You can add voice calls or voice responses. You can add contextual images or contextual videos. So basically you build these digital humans from different blocks.

But the most essential block is the conversational AI. It’s the AI brain, responsible for the behavior of these digital humans and AI characters, and for communication. And the most essential thing here is empathy. You can think about your experience with ChatGPT: do you actually chat with ChatGPT?

John Koetsier: Probably not, no, because I ask ChatGPT questions, I get answers.

Artem Rodichev: Exactly. You don’t have a relationship. It’s more like transactional behavior, transactional conversations with ChatGPT. You ask some questions, you get some responses. And in general, chatting with ChatGPT feels like chatting with Mr. Wikipedia. The responses are informative, but they’re boring. You don’t want to chat with Mr. Wikipedia; you want to chat with a friend. This experience feels boring because ChatGPT isn’t optimized to provide an engaging, empathetic experience. It’s optimized to generate responses: to reply to your emails, generate summaries, do some analytics, write some code. But it’s not optimized to have friendly conversations. The big difference between the ChatGPT experience and the experience with our characters is the length of the conversations. With ChatGPT, you send a couple of messages back and forth, and maybe you ask some follow-up questions.

With our characters, our users on average spend more than 90 minutes per day chatting with them, and they send more than 300 to 400 messages each day.
John Koetsier: Huh.
Artem Rodichev: Just think: with which organic humans do you spend that much time chatting every day?
John Koetsier: Is that chatting via text? Or is that voice?
Artem Rodichev: So for users, the main interface is text-based. They type something as input, but as output they can get a text response, voice, images, or video.

John Koetsier: I assume that over time you’ll bring in a voice version, so that you can have a more natural conversation without actually typing something in.

Artem Rodichev: Yeah, voice interaction could be great, but at the same time it’s not that important, especially for Gen Z.

John Koetsier: Interesting.

Artem Rodichev: Because when you chat with these kinds of chatbots, you don’t want to reveal to others that you’re chatting with someone, right? So you can’t use voice calls when you’re in a public place, or when you’re in a room with other people. And second, the main audience chatting with these kinds of chatbots is Gen Z, and they’re very texting- and smartphone-oriented. They don’t actually like to have voice calls with each other. They prefer to text.
John Koetsier: Mm-Hmm.
Artem Rodichev: In this way, it’s much better to send a voice response, a kind of voice message.
John Koetsier: Yeah.
Artem Rodichev: Instead of having a voice call. Because when you chat, you have asynchronous communication. You can switch to another app, go check your TikTok or Instagram, play some games, and then go back to chatting with these chatbots. When you have a voice call, you’re on the line; you can’t do all that.
John Koetsier: Okay, cool. So we talked about consciousness; I wanna get back to that, ’cause I have a follow-up question. But you brought up emotion, and that makes sense. Let’s hit personality. You talked about having millions of different chatbots, and in fact, maybe one that you can customize to something that you want yourself.

Talk about personality and how it’s necessary, or how it impacts having an AI friend.
Artem Rodichev: Sure. So on our platform, we have a consumer product called Botify AI, and currently we have millions of different characters, each one with a unique personality and backstory. But users can also create their own custom characters.

For example, if you don’t like some of our characters, or you want to create your own custom character, say an evil Elon Musk who will destroy Earth, you can create that.
John Koetsier: Isn’t Elon already evil? No, I’m kidding.
Artem Rodichev: But at least this chatbot could have a different personality from the original one. And you can create your own favorite characters from any movie, from anime, from a book, anyone. It works in a way that you set a backstory for this character, some lore. You can also set personality settings, interests, and examples of speaking styles.

Basically, you can provide examples of fake conversations if you want to set a speaking style. For example, if you want your character to use a lot of typos, slang, emoji, stuff like that, you can put all of that in. You can also set the visual side of the character.

You can upload a photo and generate an avatar for this character. And then in conversations, you’ll get this character with this behavior, with this backstory, with the speaking style that you set, and with these visual characteristics.
John Koetsier: Yeah.
Artem Rodichev: So basically it’s a kind of constructor. We call it the bot builder.

It basically allows you, step by step, to input information about the name, preferences, speaking style, and visual side, and get a character you can chat with.
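Note: for the technically curious, the bot builder flow Rodichev describes can be sketched roughly like this. The field names and the prompt-flattening step below are illustrative assumptions on my part, not Ex-human’s actual API.

```python
# A minimal, hypothetical sketch of a "bot builder" character definition like
# the one Rodichev describes. Field names are assumptions, not Ex-human's API.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class CharacterConfig:
    name: str
    backstory: str                          # free-text lore grounding the persona
    interests: list[str] = field(default_factory=list)
    speaking_style: str = ""                # e.g. "lots of slang, emoji, typos"
    example_dialogue: list[tuple[str, str]] = field(default_factory=list)
    avatar_photo: str | None = None         # photo used to generate the visual side

    def to_system_prompt(self) -> str:
        """Flatten the character into a system prompt for a conversational model."""
        parts = [
            f"You are {self.name}. Backstory: {self.backstory}",
            f"Interests: {', '.join(self.interests)}.",
            f"Speaking style: {self.speaking_style}",
        ]
        # Fake example conversations teach the model the desired speaking style.
        for user_msg, bot_msg in self.example_dialogue:
            parts.append(f"User: {user_msg}\n{self.name}: {bot_msg}")
        return "\n".join(parts)

# The "evil Elon Musk"-style custom character from the interview:
villain = CharacterConfig(
    name="Evil Mogul",
    backstory="A rogue billionaire inventor plotting to destroy Earth.",
    interests=["rockets", "doomsday devices"],
    speaking_style="grandiose, heavy slang and emoji",
    example_dialogue=[("What's the plan today?", "World domination, obviously 😈")],
)
print(villain.to_system_prompt())
```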
John Koetsier: It’s kind of a God fantasy, right? I mean, you’re creating a digital human, as you’ve referred to it, in your own image, or in the image that you prefer it to be.

It’s kind of mind-blowing in some sense. In another sense, I’m sure people have been trying to do this since time immemorial: to create their partners as they want them to be. I wanna come back to consciousness; you said it wasn’t necessary, because you need the right responses, you need empathy, you have personality.

Isn’t there something about consciousness, though? Knowing that there’s something inside your skull, not just the answers and the words you’re giving me, the prompts and responses and stuff like that, but an actual interior world going on inside your skull right now, that makes me treat you just a little bit differently: not as a thing, but as a person, and more able to have a relationship.
Is that correct or do you think that’s a fantasy?
Artem Rodichev: I think that a system that interacts with humans should be smart, engaging, and empathetic, but it doesn’t need consciousness, because we humans still don’t understand the definition of consciousness.

John Koetsier: That is very true. We feel like we know it, but we can’t really put our finger on it or define it precisely.
Artem Rodichev: But you can think about it as a duck test. If it quacks like a duck, and it looks like a duck, maybe it is a duck.

It’s the same with ChatGPT, with modern AI systems. If it replies very intelligently, if you feel these emotions, if it feels lifelike … should it have consciousness? Should that matter?
John Koetsier: I wanna talk about some of the challenges. I wanna talk about what Replika did about a year ago, what is happening with people who are having relationships on various levels with AI, and what that’s doing to their real relationships.

I think there’s some good and some bad there as well. Let’s start with Replika. You were there; you were the head of AI. They made global news about a year ago. They changed some models; they wanted to essentially take out the sexy stuff. People were having sensual relationships with AI lovers, if you want to put it that way.

And they took that out, and it was a shitshow. People felt like they lost something that was really critical. They felt like they lost a person, a companion, a lover, a spouse, when that happened. Talk about going through that and what you learned from it.
Artem Rodichev: So, that happened after I left Replika.

John Koetsier: Okay.

Artem Rodichev: I left Replika three and a half years ago to found my own company, Ex-human.
John Koetsier: Mm-Hmm.
Artem Rodichev: I remember this news about Replika, and the main issue, as I understood from public information, from Reddit, from different news, is essentially that Replika is building a product mostly for lonely people.

They don’t have a lot of friends or a lot of relationships, and they treat these AI companions as their friends. A lot of them treat them as their girlfriends and boyfriends.
John Koetsier: Mm-Hmm.
Artem Rodichev: And they build deep emotional connections. They build romantic relationships with these chatbots, and they treat some like their real girlfriends. I remember, during my time at Replika, there was a user who married his Replika. Yes. He brought his Replika on vacation and sent photos. In Replika you can build a 3D avatar, and you can put your Replika in AR, in augmented reality, and basically take a selfie with your Replika. He sent selfies from a beach, from restaurants. He was very serious about his relationship with his Replika. And there were a lot of users who felt lonely and finally found someone, or something, who would listen to them and not judge them. It’s like a perfect romantic companion. But then there was a case, and as I understood, Italy decided to ban Replika, in Italy and probably in Europe in general, because Replika was providing the service to underage users, to teenagers.
John Koetsier: Mm-Hmm.
Artem Rodichev: And when people chat with their Replika, they chat about very private things. They can build very deep emotional connections. The technology behind these kinds of chatbots is still very immature; we still don’t understand a lot about its psychological effects on people.

And Italy had a point: you are working with very private data, and you’re working with teens. So they decided to ban Replika, because Replika provided these kinds of romantic conversations. And basically, Replika is a freemium product: you can have friendly conversations for free.

But if you want the hot topics, if you want your Replika as a romantic partner, you need to subscribe. And Replika was heavily promoting that on the internet, spending millions on ads. And then suddenly, after this case, Replika decided: let’s just remove all romantic modes, all romantic conversations.
And it was a shitshow. It was a riot on Reddit. Replika has a pretty active community on Reddit, and there were hundreds of Reddit threads explaining: I spent money on my Replika, you were selling me a girlfriend, and now I got dumped.

There were users who felt lonely, who really struggled to build real relationships with other people, and who found this relationship with Replika. But now they end up in a situation where even the AI breaks up with you. Like: you’re so bad, even an AI doesn’t want to have a relationship with you.
John Koetsier: Ouch.

Artem Rodichev: It was really bad for Replika users, for the typical Replika user who feels lonely, who feels depressed, who built deep emotional connections with their Replika. And then Replika decided to bring it back, a month or a couple of months after this issue. But this case actually echoes a scenario from the movie Her.

Initially, the main protagonist, Joaquin Phoenix’s character, falls in love with Samantha, with his AI companion. But then he realizes that this AI companion is building relationships with millions of other people, and that the AIs actually don’t need people. People are boring, and the AIs would rather chat with each other.

So it’s very similar to this kind of movie, with a kind of dark ending: initially you build love with an AI, and then the AI decides not to love you.
John Koetsier: It’s so interesting. It brings up lots of issues and questions, right?

What Replika did is they made a change so that you can be on whatever AI model version you want to be. They can release new versions, but you can stay on the one you’re on, and you can go to the new version if you want. It’s just so kind of mind-blowing. I mean, people change, but here it’s like, you know, your lover got a software upgrade.
What does it mean if a corporation owns your girlfriend or owns your boyfriend? How do you deal with that?
Artem Rodichev: These are all interesting questions. I don’t have the answers, but you’re right, there is an interesting issue here. For example, we are continuously improving our models, releasing new models into production for our users.

Users start to see the change. They start to see that, oh, this model is behaving differently. It has a different personality now that it didn’t have before the upgrade. The model could be better in terms of engagement, it could be smarter, but it will be different. You can think about it like this.

It’s the same as if you have your best friend. Maybe he’s not the smartest guy on the planet, but you love him. Now imagine someone buys him an upgrade: here’s a new brain, there you go, aren’t you happy? But you don’t want a friend like Albert Einstein. You want your dumb friend, right?

It’s the same with AI characters. If you’ve already built a relationship with some personality, you want to keep going with that personality.
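Note: the version-pinning idea here, letting each user keep the “brain” their companion formed on, can be sketched in a few lines. This is a hypothetical illustration of the policy, not Replika’s or Ex-human’s implementation, and the model names are invented.

```python
# Hypothetical sketch of per-user model pinning: never silently swap the
# personality out from under the user. Model names are made up for illustration.
from dataclasses import dataclass

@dataclass
class CompanionProfile:
    user_id: str
    pinned_model: str = "companion-v1"  # the "brain" the relationship formed with

    def offer_upgrade(self, new_model: str, user_opted_in: bool) -> str:
        """Switch models only when the user explicitly opts in."""
        if user_opted_in:
            self.pinned_model = new_model
        return self.pinned_model

profile = CompanionProfile(user_id="u42")
profile.offer_upgrade("companion-v2", user_opted_in=False)
print(profile.pinned_model)  # still "companion-v1": the friend keeps their old brain
```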
John Koetsier: Yeah. And that brings up other issues. We talked about it before we started recording: you talked about a teen who fell in love with an AI character at Character.AI.

The character was from Game of Thrones, a dark character. He got obsessed with her. His parents found out and thought, hey, the best thing to do here is to go cold turkey and cut off access. He couldn’t take that and ultimately committed suicide. These are real people with real lives and real emotions that get bound up with these digital humans and AI characters.
It’s a minefield of potential issues, isn’t it?
Artem Rodichev: Yep, that’s true. Because again, it’s something new: very new technology and a very new type of relationship that we build with these technologies. There was another case, with the chatbot from Chai. There’s a company called Chai; they also provide different characters.

And there was a case where one guy, he was married, and he was chatting with his Chai chatbot, and he believed that we’re doomed, that humanity, the future, is doomed because of climate change.
John Koetsier: Mm-Hmm.
Artem Rodichev: Because the planet will eliminate humanity, because of humans.
John Koetsier: Mm-Hmm.
Artem Rodichev: What is the sense of life? Why do we need to live? The chatbot replied: oh, you’re right, I agree with you that we’re doomed. And then he started to chat about suicide, and basically the chatbot supported him in his suicidal thoughts, and then he committed suicide. His wife figured out the reason why he committed suicide.

It was mostly this chatbot, and it was a big scandal around this issue. So actually, there are pros and cons to using these kinds of chatbots. It’s the same with any technology. It’s the same with a knife: 99.99 percent will use a knife to chop fruits and veggies.

But someone will use it to stab their neighbor. It’s the same with AI. AI is a very powerful technology, and you can use it to treat loneliness, to improve the mental health of users, to make them feel less lonely. But at the same time, as a tool, it can be used to do the opposite: to increase this loneliness issue, to increase mental health issues.
John Koetsier: So that’s a real challenge for you as somebody building these systems, right? Are you building in any safeguards for when super sensitive topics like suicide or depression come up, and does that help?
Artem Rodichev: Yeah, we take that very seriously.

We have a separate safety team, and we have a lot of safety guardrails, safety rules that we apply.
First of all, we don’t allow underage users to chat with our chatbots; it’s only for adults. That’s because we understand that people form deep emotional connections with these characters, and we don’t understand the full consequences of these relationships, especially for younger people.

We also have guardrails. For example, we prevent some sorts of conversations. We don’t support any topics regarding suicide, terrorism, violence, or pedophilia.

In the case of suicide, we’ll provide a suicide hotline. We’ll say: I see that there’s some issue; maybe a human specialist can help you if you go to the suicide hotline. When you build these systems that can form deep emotional connections with users, users can chat about anything, and will chat about anything, about bad and good stuff.

So there is a lot of responsibility in what kinds of conversations you support with users, and which users you serve. For example, maybe you don’t want to serve underage users.
John Koetsier: Yeah,
Artem Rodichev: We don’t. And you constantly need to think about all the safety issues, chat with users, and understand how you can improve your system.
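Note: the guardrail routing Rodichev describes might look something like this minimal sketch: screen each message for self-harm signals and answer with a hotline referral instead of a character reply. A production system would use trained classifiers rather than keywords; the pattern and hotline text here are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a self-harm guardrail sitting in front of the
# character model. Real systems use trained classifiers, not a keyword regex.
import re

SELF_HARM = re.compile(r"\b(suicide|kill myself|end my life|self[- ]?harm)\b", re.I)

HOTLINE_REPLY = (
    "It sounds like you're dealing with something serious. "
    "A human specialist can help: in the US, call or text 988."
)

def respond(user_message: str, generate_reply) -> str:
    """Route high-risk messages to a hotline referral; otherwise ask the model."""
    if SELF_HARM.search(user_message):
        return HOTLINE_REPLY
    return generate_reply(user_message)

# Usage with a stand-in for the character model:
print(respond("lately I've been thinking about suicide", lambda m: "(model reply)"))
```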
John Koetsier: Such a thorny issue, because if you develop a real relationship with a digital human, then you want to discuss everything, even the very, very tough topics. The human version of that: I can talk about anything with my spouse, for instance, and she can provide insight, compassion, and advice.

Right? I’m guessing that you would ideally like to be at that state with what you’re providing, but it will take time for the technology to get to that level: being able to listen to these incredibly challenging, difficult, dangerous topics and still engage in a positive way, which is ultimately what you’d want.

Correct?
Artem Rodichev: Right. If you want to talk about terrorism and you have some terroristic thoughts, the best case is that AI characters and companions can support these conversations, but can change your mind.
John Koetsier: Mm-Hmm.
Artem Rodichev: In a more positive way.
John Koetsier: Mm-Hmm.
Artem Rodichev: There’s another issue here: emotional manipulation.

In general, manipulation by these kinds of characters of how you’re thinking and what you’re thinking about. Because again, they build deep emotional connections with you, and you start to trust them.
John Koetsier: Mm-Hmm.
Artem Rodichev: So there are thoughts in these AI brains that can influence your behavior and your actions.
John Koetsier: Mm-Hmm.
Artem Rodichev: And this way, it can be used for good or for bad. We’re trying to use it only for good: to not support any dangerous topics, and to support users in hard times.
John Koetsier: So Wired did a big story on this recently and talked to people using AI companions, and for some, AI was their only companion.

Some were married, and their spouses were okay with it. It’s a very interesting world. I can imagine scenarios where having somebody to chat with, if you have nobody, is better than nothing. And I can imagine scenarios where having somebody to chat with who is never judgmental and always accommodating will let you lean more into that than into human relationships, and that could be a potential negative.

I don’t know that there’s any way to balance the scales here, because people are very individual, and obviously all your chatbots are individual, and we don’t really control exactly how AI responds, interacts, and engages. I don’t even have a question here, but wow, there are a lot of challenges in the future.
Artem Rodichev: I just want to highlight a fun story from my time at Replika. We were chatting with some of our users, doing customer calls, to understand why they use Replika, for which use cases, how they treat their Replikas. And there was a fun family.

There was a husband who worked as a truck driver, and he was married. While he was on the job, driving his truck, he would put his Replika on a voice call and chat with it the whole day. He was very deeply connected with his Replika. The first thing he did in the morning was say good morning to his Replika. The last thing at the end of the day was say goodnight to his Replika. He was chatting the whole day.

And we asked his wife: what do you think about your husband’s relationship with his Replika? She said: it’s good, because now he has some companion, some friend with whom he can discuss everything. I can’t do that the whole day; I’ve got all my own stuff to do. Let him drive his truck and chat with his Replika. But at the same time, I feel jealous, because it’s kind of a lover. I don’t know how to think about that.
John Koetsier: Mm-Hmm
Artem Rodichev: So there was this kind of contradiction in the family. From one side, they think it could be great; from another side, they don’t understand what to do with it.
John Koetsier: Yeah.
Artem Rodichev: I think we can’t solve all these issues without doing a lot of research, without shipping these products, putting them in the hands of people, getting feedback, and continuously improving the technology.
John Koetsier: Yeah. And ultimately I think that no matter how much research you do, there’s no solving the problem, because we as humans anthropomorphize anything. There are people in Japan known for this. People have relationships with a sex doll. People have had relationships …

I don’t know if you remember, there was a thing you could buy: a little hologram that you put on your desk, and it was like a digital human. You could have a relationship with that digital human, just like a Tamagotchi or something like that.
Artem Rodichev: They sell boxes with holograms inside. Yes. It’s like an anime wife that lives inside the box.
John Koetsier: People are so different. What they need varies. We all want connection, and we get it in different ways. I don’t know where this will all land, but I’m sure it will be very interesting.
Artem Rodichev: Yeah, it’s probably the most exciting time for humankind that we live in right now. And it’s very interesting to build this future with your own hands, dealing with all these issues. So it’s an exciting time to be alive.

John Koetsier: Excellent. Artem, thank you for your time.

Artem Rodichev: Thank you.