Synthetic humans and AI-generated avatars: is the future of fashion fake?

We have synthetic humans earning millions as influencers, and models created entirely by computer. What’s driving this … and where’s it all going? To dig in, we’re chatting with Tyler Lastovich in this episode of TechFirst with John Koetsier.

Lastovich leads strategy at Generated Photos, and Generated Photos makes realistic faces via AI: generative adversarial networks. (Update: he has since moved on to found his own company.)

They’re growing incredibly fast, count most major gaming companies as customers, and are talking to major social media outlets as well. The market right now is synthetic models and characters, but the future market is probably as large as the world itself: avatars for all of us in an augmented reality and virtual reality world.

We chat about the growth, the implications, the ethical considerations, and much, much more.

Scroll down for the full audio, video and transcript … or check out the story on Forbes.

TechFirst podcast: is the future of fashion fake?


Watch: generated faces and AI-generated synthetic humans

(Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.)

Read: how Generated Photos uses AI to make millions of “people” who are not real

(This transcript has been lightly edited for length and clarity.) 

John Koetsier:  Are synthetic humans the future of fashion? Of influencers? Of models? Welcome to TechFirst with John Koetsier.

So, we have synthetic humans who are earning millions annually as influencers. We have models who are created by a computer. I want to dive into this whole space — what’s driving it, where’s it all going — and to do so, we’re chatting with Tyler Lastovich, who leads strategy at Generated Photos. Tyler, welcome!

Tyler Lastovich: Thanks for having me, John. 

John Koetsier: It is super great to have you. You’re remote, you’re a digital nomad, you’re just like all the rest of us right now. You’re in Wyoming. I’m in Vancouver, Canada.

I’ve been super eager to dive into this and dig into this because it’s a crazy question, right? We have fake humans, synthetic humans who are massive influencers, million-dollar-a-year influencers. We’ve got brands who are creating their own models, other things like that. You’re a player in this space in a different way.

What does Generated Photos do? 

Synthetic humans via AI

Tyler Lastovich: Yeah. So Generated Photos is a company that does synthetic media creation.

So what we do is we use machine learning to train what’s called a GAN, or generative adversarial network, on a huge training data set, and it’s human faces right now. And so we can go through and create millions and millions of people, essentially, who have never existed, who are unreal. And so we make photorealistic virtual imagery.
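The adversarial setup Lastovich describes can be sketched in miniature. This is my illustration, not Generated Photos’ code: a one-parameter-pair generator versus a logistic-regression discriminator on 1-D data. Real face GANs use deep convolutional networks and image data, but the training loop has the same shape, with the two models improving by competing against each other:

```python
# Toy GAN: generator g(z) = a*z + b tries to imitate samples from N(4, 1.5),
# while a logistic discriminator d(x) = sigmoid(w*x + c) tries to tell
# real samples from generated ones. Gradients are derived by hand.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

REAL_MEAN, REAL_STD = 4.0, 1.5   # the "training data" distribution
a, b = 1.0, 0.0                  # generator parameters
w, c = 0.1, 0.0                  # discriminator parameters
lr, batch, steps = 0.05, 64, 4000

for _ in range(steps):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    dL_dx = -(1 - d_fake) * w    # gradient of -log d(fake) w.r.t. fake sample
    a -= lr * np.mean(dL_dx * z)
    b -= lr * np.mean(dL_dx)

# E[a*z + b] = b since z is zero-mean, so b should drift toward REAL_MEAN.
print(f"generator now samples around {b:.2f} (target {REAL_MEAN})")
```

The key property, which carries over to the face-scale systems Lastovich mentions, is that neither model ever sees a "correct answer": the generator improves only because the discriminator keeps getting harder to fool.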

John Koetsier: Very interesting. Where’s the demand coming from? Why do clients want this? 

Tyler Lastovich, Founder at Lastly Studios, and former head of strategy for Generated Photos

Tyler Lastovich: Sure. We’ve seen demand all over the map, which has been the most fascinating part of the company so far. We have a number of game developers coming to us who can project our images over 3D characters, which helps solve what I call the ‘infinite content dilemma.’

So we can talk more about that later on, but that’s a very interesting space.

We have people in professional fields such as medicine and law enforcement. There’s a bunch of people in the security space who have been using our product. Think about a court case or something like that where you don’t want to show a real face; that’s very good for us. Designers, obviously, making designs, making products: not having to worry about likeness rights is important.

And just beyond that, just software companies, you know, user avatars, demo software, products where you can add in huge amounts of synthetic data to see if something’s going to work right.

John Koetsier: Yeah, absolutely. We’re seeing a huge amount of innovation here, right? Synthetic humans as models, influencers, companions even. We can explore a lot of different things here. And there’s also been an explosion of deepfake technology, which in a lot of cases is built on similar AI techniques.

Part of this is because we can now, right? AI is just getting that good. Part of it is the demand that you’re talking about right now, and it’s new for us as a culture, as a society. It’s new for us in technological terms.

What are some of the ethical issues here? 

Ethical issues of synthetic models

Tyler Lastovich: Well, yeah, that’s a good point. You’re right that it really is new, and I think it’s important to note that no one’s really gone through this before. We’re one of the earlier companies forging ahead through all of this.

I think the main concern we see raised time and time again is racial bias; that’s probably number one for us. When you’re producing imagery, having an algorithm that is well normalized is very difficult in the machine learning space. Up until now, machine learning has been a fairly biased field, you know, these systems actually work by exploiting bias. I don’t know how technical we want to get, but they descend gradients through millions and billions of permutations and combinations to get good results, and to do that they hone in on particular features of faces and likenesses.

Synthetic humans: this model does not exist.

So, what you need is absolutely huge amounts of training data, and diverse training data. I think that bias in general is the hardest ethical issue to address. We’ve seen facial recognition systems from, say, Amazon and others fail recently, and I think that’s a big risk for these digital synthetic model creations. The second concern we see a lot is people fearing that they’re going to be put out of a job.

John Koetsier: Yes.

Tyler Lastovich: That’s a very common kind of AI trope where, ‘Oh no, models will totally be replaced. It’ll all be virtual. It’ll all be fake.’ You know, and I honestly don’t believe that. We actually employ quite a few models. I mean, we shoot all our own training data. We have all licensed models, we don’t scrape any of our content, we run a full photo studio.

And so, I think that we can talk about that later too, but it’s definitely one of the main concerns.

John Koetsier: So the model union has not reached out to you and said, ‘You’re, you’re … [laughing] you’re taking our jobs’? 

Tyler Lastovich: No, not yet. I mean, we’ve had some interesting discussions with models, ongoing, and sometimes it’s a little harder to find people who really want to have their likeness shot and portrayed. We do have enough models and people coming in that it shouldn’t really look like any one person.

We haven’t really had any claims where it’s like, ‘This looks exactly like me,’ you know, ‘take it down from the website.’ Nothing like that. We offer our photos completely free of likeness rights, so, I do believe that’s pretty, pretty unique in the industry.

I think that most models today kind of have some synthetic component, whether they’re photoshopped or edited or something like that. There’s a lot of this virtual production — especially with COVID — that’s been a big thing for us coming through.

So it’s been really interesting. 

Models are already synthetic, to some degree

John Koetsier: l love that you brought up that most models that we have today are somewhat synthetic. Whether that’s Photoshop, whether that’s makeup, whether that’s prosthetics, whatever it might be, there is some element of something that’s not entirely natural in probably the vast majority of models and actors that we see these days.

You remember Google brought out that product where Google Assistant will phone a restaurant and make a reservation for you, right? And there was a lot of backlash because that synthetic voice didn’t identify itself as ‘I am a robot’ or ‘I am the Google Assistant’ … so they actually added that.

Is there a scenario where we need to do something for visual media along the same lines? Or is that just different? 

Tyler Lastovich: No, I definitely see that as an upcoming problem. I think that there’s a lot of people working right now on the verification and validation of synthetic content, especially AI-produced media. I think there’s a big kind of chasm between what you can use as normal content and what’s presented as fact and news.

And so we don’t portray or suggest that people use these as factual at all. At some point you’ll likely have to embed some sort of standardized tag or recognition system into the images, but right now nothing’s standardized; there’s no real validation or encoding loop to do that. We’d be happy to work with whoever comes through with one. We’ve been looking recently at adding different watermarks to our images, potentially … some little, subtle things.

So that’s definitely in our current backlog that we’re working on and looking at. It’s a hard balance, though. People don’t really want watermarks in the images, to be honest, since they’re stock imagery and you use them in designs. So there are pros and cons on both sides.

John Koetsier: I think embedding it in the image in an invisible way, but something that is machine-readable — even via a little bit of code in a browser or something like that, or in a camera software or photo-editing software — makes a ton of sense. Because then you’ve got some traceability, trackability of where it’s from, that it is an ethically sourced image, and all those other things like that as well.
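One concrete way to do what Koetsier suggests is least-significant-bit (LSB) steganography: flip only the lowest bit of each pixel channel, which is invisible to the eye but machine-readable. This is purely an illustrative sketch, not anything Generated Photos has published; production provenance marks need to survive compression and resizing, which raw LSB does not:

```python
# Hide a machine-readable provenance tag in the least significant bits of a
# uint8 image array, then recover it. The tag string is hypothetical.
import numpy as np

TAG = "synthetic:generated.photos"  # hypothetical provenance tag

def embed_tag(pixels: np.ndarray, tag: str) -> np.ndarray:
    """Hide `tag` (length-prefixed) in the LSBs of a uint8 image array."""
    data = len(tag).to_bytes(2, "big") + tag.encode("utf-8")
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = pixels.flatten()  # copy; original array is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_tag(pixels: np.ndarray) -> str:
    """Read the 2-byte length prefix, then decode that many bytes of LSBs."""
    lsbs = pixels.flatten() & 1
    n = int.from_bytes(np.packbits(lsbs[:16]).tobytes(), "big")
    payload = np.packbits(lsbs[16 : 16 + 8 * n]).tobytes()
    return payload.decode("utf-8")

# Demo on a fake 64x64 RGB "image": the tag round-trips, and no pixel
# channel changes by more than 1 out of 255, so the mark is invisible.
img = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_tag(img, TAG)
assert extract_tag(marked) == TAG
assert int(np.abs(marked.astype(int) - img.astype(int)).max()) <= 1
```

The detection side is exactly the "little bit of code in a browser" Koetsier describes: anything that can read pixels can check for the tag without any visible change to the image.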

It’s interesting that you touched on the question of ‘nobody has yet complained, hey, that synthetic human looks exactly like me.’ ‘Cause that was what I was wondering, you know, is there one that looks like me, right? And you’ve created almost 3 million so far, and I’m sure that’s just the tip of the iceberg. I’m sure you’re going to create millions and millions more over time as well, so that may come.

But you’ve launched something that is quite interesting. You’ve launched a privacy product. Can you talk a little bit about that and why you did it? 

Synthetic humans as deepfakes for yourself

Tyler Lastovich: Yeah, that was an interesting launch. It’s really kind of caught fire on Twitter and around the web, so it’s been really interesting.

So it’s called the Anonymizer, and it uses an additional machine learning technique: it runs facial recognition on a photo that you upload and finds a near match among our images. Depending on what we’re doing, we’ll either generate a new photo or find an existing one in our database. It basically pulls in your synthetic alternate; if you had a second face, that’s close to what it would be.
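A plausible sketch of that lookup, assuming the common embedding-based design (the actual Generated Photos implementation isn’t public): encode each face as a vector, then return the synthetic face whose embedding is most similar to the upload’s. The embeddings here are random stand-ins; a real system would produce them with a trained face-recognition model:

```python
# Nearest-match lookup over a database of face embeddings, using cosine
# similarity. Embeddings are simulated with random unit vectors.
import numpy as np

rng = np.random.default_rng(42)
EMBED_DIM = 128
N_FACES = 10_000  # pretend database of pre-embedded synthetic faces

db = rng.normal(size=(N_FACES, EMBED_DIM))
db /= np.linalg.norm(db, axis=1, keepdims=True)  # unit-normalize once

def nearest_synthetic(query: np.ndarray) -> int:
    """Index of the synthetic face whose embedding best matches `query`."""
    q = query / np.linalg.norm(query)
    return int(np.argmax(db @ q))  # cosine similarity = dot of unit vectors

# A query that is a noisy copy of face 1234 should map back to face 1234:
query = db[1234] + 0.05 * rng.normal(size=EMBED_DIM)
print(nearest_synthetic(query))
```

The "near match, not exact match" behavior Lastovich describes falls out naturally: the returned face shares the embedding neighborhood of the upload (similar coarse features) without being a copy of it.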

And so, yeah, people have been using it. I’ve seen them all over on the web already. You know, I’ve had someone follow me on LinkedIn and Twitter that has a synthetic face and I can recognize it’s one of ours, so it’s been pretty interesting. 

John Koetsier: That is really interesting. And, you know, we see that actually on social quite a bit, right? I see that quite frequently where somebody will say, ‘Hey, somebody is using my picture for their social and that’s not me, don’t friend them,’ right? So almost a similar kind of a scenario there. Now you can kind of create your own digital avatar that is sort of like you, but not exactly like you. 

Synthetic humans: this model does not exist.

Tyler Lastovich: Yeah. And I think an important part of that really is the privacy aspect.

You know, we had seen a number of journalists, like yourself, come through and ask to use our images as kind of privacy protections, where they want to be anonymous, but they kind of still want to give a little bit of a likeness/resemblance to themselves.

You don’t want to completely misrepresent who you are online, but at the same time, you don’t want to show up in facial recognition or identities or anything like that. So we’ve had a number of people come in, basically looking for that exact thing. Or, to be able to find someone … I know it’s kind of a weird case, but we had law enforcement, a number of them actually, looking to use like synthetic honeypot traps for predators. 

John Koetsier: Oh wow.

Tyler Lastovich: And so they have a specific type or something like that, that they’re trying to lure people with, and they can use our synthetic imagery, because we do offer all age ranges, including children. And, sad as it is, these predators target young, young women. We can generate a synthetic image, so you don’t actually have to use a real person. I think that’s a very valid use case for synthetic imagery that doesn’t really get talked about as much in the media.

John Koetsier: No, I mean, that’s obviously not something you want to talk about a lot, it’s a horrific area, but it is really good that in the process of doing their work, they don’t have to use pictures of real people, real children. No parent wants their child’s image used in something like that.

Tyler Lastovich: Correct, yeah. 

John Koetsier: And interesting that you mentioned law enforcement, because I know some people in law enforcement as well, and they won’t share things on social — especially not their face, because there’s risk associated with that. And risks, not just for them, but for their family.

And it’s funny as well, about, you mentioned some journalists, because I know a PR rep, Jonathan Hirshon, and he has made it his shtick, he’s internet famous for it: he has never had a picture taken of him, and there is no picture of him available online. I mean, he’s gone to conferences and taken huge precautions so that nobody takes a picture of him. Now he could create an image and use that: ‘I look something like this.’

Tyler Lastovich: Yeah, no, I mean, it’s a fascinating area, and we’ve seen a lot of use come through that. You know, it’s been a really good launch for us. 

John Koetsier: So, right now you’re doing faces, headshots. What’s coming next? Are you going to do whole body? Are you going to get into video? 

Photos first, then full video and 3D

Tyler Lastovich: Yeah, we definitely plan to get into both of those things over time. It’s really hard to train stuff, I’ll be honest. I mean, it takes a lot of GPU power and a lot of compute to really go through these things. It’s kind of at the cutting edge here, and so it does take time. We still have further refinements to go, to get really photorealistic on faces. I think when it comes down to, say, 3D characters or video game creation, faces really are what gets the bulk of the attention, because that’s what you focus on, and that’s what’s kind of poor today. Clothing represents a very tough challenge in terms of all the nuance and the creases and the very specific details— 

John Koetsier: Yes.

Tyler Lastovich: —for the systems that we’re using. So, we definitely do capture full training data objects, it’s not just people. I mean, we can use these same systems to do anything, really. There’s nothing that’s really a limitation structurally to that. I think that right now we have a nice cleaned data set that we train on that’s specifically optimized — that again, we capture in our own photo studio under controlled conditions, where the lighting’s all nice and everything like that. So we have very high resolution source files for faces. But going forward, we’ll have all sorts of stuff.

John Koetsier: I think you’re just scratching the surface of what will eventually be a target market here because, for instance, if you look at like the Facebook avatar … I’ve worked hard to make a Facebook avatar in Messenger and other things like that, that looks somewhat like me. Somewhat like me. In Oculus as well, like Oculus Quest, you can make an avatar that’s available.

And, you know, I can get sort of an oval-ish face that’s bald, right? Is that me? Well, not really. No, it doesn’t really look like me. There’s much more that I would like to put in there and I can’t do that. Also, as we get much more into augmented reality and VR, and we want to telepresent ourselves in, in some way, shape, or form, you want to be able to do that in a way that — whether it’s true to life or not — represents you as the way that you want to be seen, right?

So I think there’s a huge market here emerging, nascent right now, but coming for people to create lifelike avatars of themselves to use in situations like this where we’re having remote conversations. 

The metaverse is coming

Tyler Lastovich: Yeah, I completely agree.

I like to talk about and think about the metaverse. You know, that’s the upcoming concept of a second world, a second place. Like you just explained so well, everyone’s going to want very deep customization of their likeness and how they’re represented in that second world, that second life.

But I think just equally hard is having enough content to fill out the world. So I call it the ‘infinite content dilemma,’ where if you have an infinitely large world, you need an infinitely large amount of content to kind of seed it and to be kind of underlying the whole thing. And so right now we’re seeing really great advances in natural language processing and text, you know, GPT-3 from OpenAI and things like that can be melded really well with our characters. There’s all sorts of rigging models and stuff that can project our images, say, on top of a 3D face, and that face can then talk or things like that.

And so, kind of marrying all these technologies together will give you really, really deep interactions that have photorealism. So, you’ll be able to have Netflix videos that are procedurally generated to match your demographic or ads that are very targeted. All of that type of stuff is definitely coming down the pipeline faster than I think people realize. 

John Koetsier: Really, really fascinating. And, you know, what does Siri look like when I have smart glasses and Siri can appear in my visual frame of reference? Or what does Alexa look like? What does she look like to me? What does she look like to you? 

Tyler Lastovich: Exactly.

John Koetsier: What does the Google Assistant look like? It’s sexless and genderless and everything else. So, I mean, it could be a fire hydrant for all I know, but it’s a very interesting world that we’re moving into. I want to ask you a couple of questions, a little bit more personal, because this podcast is about tech that’s changing the world and innovators who are shaping the future.

How big do you see Generated Photos and synthetic humans getting? What percentage of models do you think will be created by a computer? And will we all have one or more avatars?

Tyler Lastovich: I think so, yeah.

At this point, it’s hard to imagine a world where we don’t have a second likeness, a second representation online. I think that what we’ll see in the next 5 or 10 years is really kind of a cross between the real world and the virtual world.

So, models probably won’t exist as singularly just real or digital. I think these will kind of meld together.

Right now, the biggest influencers that are CGI on, say, Instagram, that are making the millions that you’ve talked about, are actually real models that are captured and then redone with CGI. And so I think there’s kind of a meld between these two things that will persist for quite a while. At some point, I don’t think realism is going to go away. I think there’s still value to someone and how they look in real life. Authenticity is going to be a very big factor in that. I think just as you touched on, the fact that verification is important for digital, I think it will be almost even more important for real content.

And so, going through that, I think we will all have digital likenesses, digital representations, and some sort of digital augmentation — whether that’s digital fashion or things like that, there’s so many ways to go through this. But yeah, I really don’t see a limit that we can see from here. I think it’ll just be massive.

John Koetsier: I couldn’t agree more. And I see that when we have smart glasses and ubiquitous augmented reality. We will put on our avatars as we walk around on the street in real life, in meatspace, and people who participate in our shared consensus reality will see the image we intend to project of ourselves in their own visual frame of reference. It will be a very, very interesting world.

Tyler, this has been fascinating. You’re doing some very interesting work. I want to thank you for being on the show today. 

Tyler Lastovich: Thank you. It’s been great.

John Koetsier: Excellent. For everybody else, thank you for joining us on TechFirst. My name is John Koetsier. I appreciate you being along for the show. You’ll be able to get a full transcript of this podcast in about a week, and the story at Forbes will come out shortly thereafter. Also, the full video is always available on my YouTube channel. Thank you for joining. Until next time … this is John Koetsier with TechFirst.

Big reader, huh? Subscribe to the podcast already

Made it all the way down here? Who are you?!? 🙂

The TechFirst with John Koetsier podcast is about tech that is changing the world, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, and Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe on your podcast platform of choice: