This AI automagically creates fake faces for privacy-safe photo sharing


Can you share personal photos online … without sharing your face with the giant global database that is the internet? And, can you share photos of crowds of people, or demonstrations, without subjecting everyone in those photos to AI-driven searches and privacy violations?

Brighter AI thinks they have a solution, and in this episode of TechFirst with John Koetsier, we chat with the CEO, Marian Glaeser. Essentially, his technology replaces every face with an AI-generated substitute to ensure you can share your pictures in a privacy-safe way.

What becomes a real question, however, is how real your photos are now …

Scroll down for full audio and video (don’t forget to subscribe to the podcast) and a complete transcript …

Subscribe to TechFirst: Fake faces via AI for privacy?

 

Watch: This AI creates fake faces for privacy-safe sharing

Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.

Read: Deepfakes for privacy

(This transcript has been lightly edited for clarity.)

John Koetsier: Can you share personal photos online without sharing your face with the giant global database that is the internet? Welcome to TechFirst with John Koetsier.

Every time you share a photo online, you, in a sense, lose control over it. Algorithms and AI grab it, they judge it, they sort it. That might be pretty innocuous, that might be just ensuring that it’s not a pornographic image that you’re uploading to Facebook, for instance. But it also might be putting you into a global face recognition database. Brighter AI says they have a solution to that.

To learn more, we’re chatting with Marian Glaeser who’s the CEO and founder of Brighter AI. Marian, welcome!

Marian Glaeser: Yes, and good to be here. 

John Koetsier: Excellent. We tried to do this like a week and a half ago and my internet totally failed, so I apologize for that. Now we’re on and we’re live, and thank you for coming again. Let’s start here, Marian. What’s the problem? Why should we be concerned about the photos we share?

Marian Glaeser, CEO at brighter AI

Marian Glaeser: Yeah. You already picked up on it a little bit. If you think about it deeply, it’s even more crucial. We don’t just have smartphones with high-quality digital cameras, but also public cameras, or cameras you don’t even know are capturing you, when someone takes a group picture, for example.

And all these images can be analyzed for any face and anything else in them, and basically stored, collected, and matched. Which means that even without your knowledge you might be captured on the street by someone, maybe even in the background of another person’s selfie. That image you’re not aware of can be crawled online, and then used to match this picture of you in the back of someone’s selfie and directly link it to your social profile…

John Koetsier: Yeah.

Marian Glaeser: …or to link it to LinkedIn.

There was one example I saw that was very scary. Somebody secretly took a picture of a person sitting in a bar, and that person wasn’t aware of it. He then checked through a platform who this person was and what their LinkedIn profile was, and looked at the profile even before talking to them. That’s quite scary.

John Koetsier: It is potentially scary. I mean, it has a lot of implications. It could be as simple as somebody taking a look. But it could be a government agency, it could be some foreign country, it could be a company looking for you in an automated way: is this a high net-worth individual? Should we pay attention to him or her when they walk through the front door, or something like that?

So for some people, their solution is they don’t share anything and maybe they wear a hoodie or a hat all the time, who knows. Others share without even thinking about it.

What’s your solution?

Marian Glaeser: So, our solution starts from the fact that telling people not to share simply won’t happen, because people like to share what they do. And that’s good. Especially, as you said, with political situations, or law enforcement misusing it, there are moments, for example in demonstrations, where you want to share what is going on in the world. You’re going out on the street to protest and to exercise freedom of speech.

And you want to share that. You should have the right to, and should not be limited by anything.

So there are current solutions that pixelate a face, or put an emoji on top of the face, but this essentially changes the emotion of the scene. As you see here in this video, you have all those different people screaming into the camera, standing up for their rights, and that basically shows in their faces, it expresses what is going on.

Our approach is — and thanks for showing it again — we extract the original faces and replace them with new faces that can’t be traced back to the originals. So they are different from the original, and if this person was at a demonstration, this picture will not help anyone, not law enforcement, not anyone misusing it if you’re not in a democratic country, to use this image against you.

John Koetsier: How does that work? Are you totally replacing the faces? Are you making some tiny changes to them so that an algorithm or AI can’t see exactly what they are? What are you doing to them, exactly?

Marian Glaeser: Yeah, it’s a good question, because there’s one approach where you only change a few pixels and try to fool facial recognition. What we see in research and also in our own development is that that’s a cat-and-mouse game. You change a couple of pixels, you fool one detector, one algorithm, and then there’s a new algorithm and it works again.

So our approach is different, as you said, we’re changing the entire face.

So we would take your face, for example, and generate a new face with the same age, gender, and ethnicity, which would not be yours. It would basically be an artificial person’s face, and therefore not recognizable or trackable. And the advantage is that since it’s a different face, the better facial recognition algorithms become, the more clearly that different face will tell them, ‘Oh, this is clearly not the same person, this is clearly not John.’ Facial recognition is fooled even more by our entirely new face.

John Koetsier: That is really, really interesting.

It brings up all kinds of questions, right? Who is actually there? Is this photo real? Is this historical? Who can take real photos of historical things? Are the rest of us just taking random pictures that are essentially faked?

You know, there’s lots of questions that brings up. 

Marian Glaeser: Yes, definitely.

Essentially, as you say, we’re changing the original material, so we do feel that published images should be marked: ‘Okay, here are protected faces.’ On one hand that signals the faces are protected, because we want to encourage that, and on the other hand it signals these are not the original faces: ‘Okay, this is altered material.’

And in a way this bridges the gap: on one hand you have the material, the photos online, and can share them with those emotions; on the other hand you accept that the material is not actually real, with the huge advantage of data privacy and protecting those people.

John Koetsier: So, a couple questions that come up — and we’re going to get into the questions around ‘I want to share pictures of myself, maybe my family, and I want to do that in a privacy-safe way.’ But maybe even before we do that, how does this work technically? Where are you getting the faces that you are mapping onto people’s bodies?

Marian Glaeser: It’s a bit magic. 

John Koetsier: Haha, I don’t believe that.

Marian Glaeser: That’s what our investors say, ‘It’s magic.’ No, but it’s… 

John Koetsier: I want those investors. If they believe in magic, I’ve got a bunch of startups for them. Hopefully they’ve got a few billion dollars to spare. 

Marian Glaeser: Yeah. So, jokes aside, in fact we have a deep neural network.

It’s a generative deep neural network that truly generates a new face without any additional information except for the target age, gender, ethnicity, and basically the facial expression of the person. So there is no face beforehand; the face that we generate has never existed before.

It’s newly generated out of a combination of data, and the funny thing about deep neural networks is that they use that information and then create something new. Even trying to look inside is a bit magical, because it all happens within the neural network.
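Conceptually, the pipeline Marian describes maps to something like the sketch below. This is only an illustration with hypothetical function names and stub models, not brighter AI’s actual code: detect each face, estimate coarse attributes (age, gender, ethnicity, expression), generate a brand-new face conditioned on those attributes, and paste it back over the original.

```python
# Illustrative sketch only: hypothetical names, stub models, not brighter AI's code.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class FaceAttributes:
    """Coarse attributes carried over into the replacement face."""
    age: int          # e.g. 35
    gender: str       # e.g. "female"
    ethnicity: str    # kept deliberately fuzzy, mirroring any ambiguity in the input
    expression: str   # the facial expression, e.g. "shouting"


def detect_faces(image: np.ndarray) -> List[Tuple[int, int, int, int]]:
    """Placeholder face detector returning (x, y, w, h) boxes."""
    raise NotImplementedError("plug in any face detector here")


def estimate_attributes(face_crop: np.ndarray) -> FaceAttributes:
    """Placeholder attribute estimator (age / gender / ethnicity / expression)."""
    raise NotImplementedError("plug in an attribute classifier here")


def generate_synthetic_face(attrs: FaceAttributes, size: Tuple[int, int]) -> np.ndarray:
    """Placeholder conditional generator: a new face matching the coarse
    attributes but belonging to no real person."""
    raise NotImplementedError("plug in a conditional face generator, e.g. a GAN")


def anonymize(image: np.ndarray) -> np.ndarray:
    """Replace every detected face with a freshly generated, non-matching face."""
    out = image.copy()
    for (x, y, w, h) in detect_faces(image):
        crop = image[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = generate_synthetic_face(estimate_attributes(crop), (h, w))
    return out
```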

John Koetsier: So that’s actually really interesting, what you said there. What I heard — and correct me if I’m wrong — is that perhaps you have a 35-ish Asian woman in the picture and you’re going to generate a face along those lines, right? And maybe there’s a 45-year-old Caucasian male, or maybe there’s a 20-year-old Black woman, and you’re going to generate a face that fits those general characteristics, but is not the exact person that was there. Is that correct? Is that what I heard?

Marian Glaeser: That is correct. 

John Koetsier: Interesting. Okay. 

Marian Glaeser: And just to add on that, in order to avoid biases: ethnicity, for example, can be not as clear, and we also don’t want to create something that wasn’t there before just by making a wrong interpretation. We don’t want to make someone Asian even though he isn’t, because of some side effect.

So in fact — and that’s a critical question — what we do is take the input of the original image, including the ambiguity, and incorporate it into the output. Which means we are recreating the same kind of uncertainty, the same kind of appearance, in the new face.

John Koetsier: Wow.

Marian Glaeser: Which means that if the original is, say, not clearly Asian, not clearly European, because that’s just the way it is, then the new appearance will be very similar to that as well.

John Koetsier: This is interesting. This is more challenging stuff than I thought, and all kinds of questions that it raises obviously. I’ll turn to the personal question. I want to share a picture of myself, it’s the selfie, right? We went out to hike the mountain, there we go. Maybe it’s my family, maybe it’s my kids, other things like that.

How do I do that and have it be privacy safe, and yet not be sharing pictures of somebody else who doesn’t exist? 

Marian Glaeser: Yes. With every technology step comes a certain risk, and we basically watermark images to indicate they have synthetic faces, to avoid this misuse. We let the user disable that at their own risk, but then it’s a clear violation of a moral code, I would say. But this is also not something that’s really new. Even before, with Photoshop, if you wanted to really fake something, you were able to fake a selfie and make something out of it without our tool.
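As a simple illustration of that marking idea (not brighter AI’s actual watermarking scheme, and the tag name here is made up), an exported image could carry a metadata flag that downstream platforms and viewers can check:

```python
# Illustration only: a made-up "synthetic-faces" tag, not brighter AI's watermark format.
from PIL import Image, PngImagePlugin


def save_with_synthetic_flag(img: Image.Image, path: str) -> None:
    """Save a PNG carrying a metadata flag that marks its faces as AI-generated."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("synthetic-faces", "true")  # hypothetical tag name
    img.save(path, "PNG", pnginfo=meta)


def has_synthetic_flag(path: str) -> bool:
    """Check whether an image carries the synthetic-faces flag."""
    return Image.open(path).info.get("synthetic-faces") == "true"
```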

John Koetsier: Yeah, that’s not exactly what I’m asking. What I’m asking is, I go somewhere, I take a picture of myself and my family. I want to share to my friends and to my family via some social platform, and I want them to know, yeah, this is me and I was here. I climbed Mount Everest or wherever we are, and here’s the picture of me.

And I’m wondering if your technology can allow me to do that in a privacy safe way, maybe changing some of the pixels or something like that enough so that an AI algorithm that looks at it can’t necessarily match me up with my LinkedIn profile or whatever else, but my friends and family know, yeah, yeah, that’s John. That’s what he looks like and that’s him actually at the summit of Everest.

Can you do that? 

Marian Glaeser: No, intentionally not. The cat-and-mouse game of changing just a few pixels, we’re not doing that, for your own safety. Because if we claimed, okay, we changed a couple of pixels, that might work at the current state of the art, but the photo will stay online. So if we only change a couple of pixels so that your friends still know you’re John, we’re actually risking that you feel, oh, this is protected.

John Koetsier: Yes.

Marian Glaeser: But then the next algorithm comes along and it’s not protected anymore. So this is why we don’t want to do this. And the research shows those micro-changes end up in a loop that you cannot really break. So we’re saying we’re changing the entire face.

But here’s what you can do … we have an app in the pipeline. You take a selfie, you decide, okay, I’m sharing this responsibly with only some people, and you can then select the people in the background, for example, to get a replaced face. And you can say, okay, for this certain platform it’s more about the moment, and people don’t need to know it’s me, so I replace my own face as well.

So it basically gives you the freedom to select yourself or select the people in the background, and in the end it’s your choice who to share it with.
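That selective flow could look something like this minimal sketch (purely illustrative; the function names and plumbing are assumptions, not the actual app’s code): detect the faces, let the user flag which ones to swap, and leave the rest untouched.

```python
# Illustrative only: hypothetical selective-replacement helper, not the actual app.
from typing import Callable, List, Tuple

import numpy as np


def selective_anonymize(
    image: np.ndarray,
    boxes: List[Tuple[int, int, int, int]],
    replace: List[bool],
    generate_face: Callable[[np.ndarray], np.ndarray],
) -> np.ndarray:
    """Swap only the faces the user selected; keep the others untouched.

    `boxes` are (x, y, w, h) face boxes, `replace[i]` says whether face i gets a
    synthetic face, and `generate_face(crop)` is any generator returning a new
    face of the same size (see the earlier sketch).
    """
    out = image.copy()
    for (x, y, w, h), swap in zip(boxes, replace):
        if swap:
            out[y:y + h, x:x + w] = generate_face(image[y:y + h, x:x + w])
    return out
```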

John Koetsier: Okay, okay. Interesting. Okay. So how do people use your service and where do you see that going? Do you see yourself being in the camera app at some point? Or what do people need to do to use your service? 

Marian Glaeser: Right now, we’ve launched it as a web tool. You can open it in a browser; at the moment it mostly targets group images. Maybe you come home from a big event, you have 20 photos, you check them, you want to upload some. So you use it as a web platform, and there’s also some mobile support so you can test it and use it that way as well.

But as you say, and as I mentioned, we have an app in the pipeline so that you can use it directly on your phone. Take a picture wherever you want, select the faces that should be protected, and then share it wherever you want. And this is where we see the future of any kind of data protection like this: keeping it as close to the original capture as possible, because even right after you capture a picture, it’s already being uploaded to your iCloud, maybe getting a filter, and being uploaded to Instagram.

So you really have the first level of safety on the device, and that gives the user the ability to protect those faces.

John Koetsier: So at some point, perhaps you could see a deal with somebody like a Google or an Apple, even to be an option in the default camera app.

Marian Glaeser: Let’s say our investors think it’s magic. 

John Koetsier: Very good. Well, I want to thank you so much for your time. It’s been very, very interesting. 

Marian Glaeser: John, it was great having your great questions, and yeah, happy to be here. 

John Koetsier: Excellent. For everybody else, thank you for joining us on TechFirst. My name of course is John Koetsier. I appreciate you joining the show.

This podcast will be live today or tomorrow; search for TechFirst on all the major platforms. You’ll be able to get a full transcript as well in about a week, sometimes 2-3 days, at JohnKoetsier.com. And the full story at Forbes comes out right after that. Plus, of course, the video will remain available on my YouTube channel.

Thank you for joining. Until next time … this is John Koetsier with TechFirst.

Made it all the way down here? Add TechFirst to your podcast mix …