Intel research scientist: ChatGPT is NOT generative AI


ChatGPT is generative AI, right? Well, according to Intel senior research scientist Ilke Demir, not really.

GPT makes sentences in a conversational manner, but is not actually generative in the sense of creating something entirely new, she says.

“I would not say it is generative AI in the sense that we understand generative models, because generative models are understanding distribution of data and creating new samples from that data,” Demir says. “In terms of ChatGPT, maybe it is too token based, that it is just like rearranging the words in a way that it makes sense. Sometimes it doesn’t make sense, sometimes it’s not true … in the correct terms it is an application based on a large language model.”

In this TechFirst we chat with an Intel scientist who has been working on generative AI for years. We talk about the genesis of generative AI, which goes as far back as the 1970s, and we talk about ethical uses of generative AI and how we can use neuromorphic computing to help reduce its massive computation cost.

Plus, we also talk about what Intel is doing with generative AI, including several projects around privacy.

(subscribe to my YouTube channel)

Subscribe to the TechFirst podcast: chatting about generative AI, ChatGPT, large language models with a senior research scientist at Intel


Transcript: generative AI, Intel projects, and large language models

Note: this is an AI-generated transcript. It is not perfect, and in some cases may be far from it. When in doubt consult the actual video or audio podcast.

Ilke Demir: It is a large language model that is making sentences in a conversational manner, but it is not actually generative AI in the sense that it is creating something new. Making new sentences based on a large language model may be a generative AI application, or maybe a generative application based on a large language model. But I would not say it is generative AI in the sense that we understand generative models…

John Koetsier: Is generative AI the beginning of the end for humans, or perhaps the end of the beginning? Welcome to TechFirst. My name is John Koetsier. Today we’re chatting with a senior research scientist at Intel who’s working on generative AI that respects people, that creates capability where there isn’t any, and that keeps, she says, humans at the center.

All of this is increasingly important as we wonder about the lessons of the Luddites, if they have some; the concerns of artists, which they do have; and the worries of workers, which always exist. Her name is Ilke Demir. Welcome, Ilke.

Ilke Demir: Thank you. That was a dramatic start to the conversation, and I hope it’ll be a more hopeful future than that.

John Koetsier: Excellent, excellent. It was very dramatic. Let’s talk about your story first. How did you come to be a senior research scientist at Intel? 

Ilke Demir: Just to correct it, I’m a senior staff research scientist. I know it doesn’t matter to the whole world, but it matters to me inside, so I’m just saying it out loud. Sorry about that.

Okay. I started my journey... I don’t know where to start. If you go to my childhood, I was playing with little electronic things, doing surgery on electronic things with my dad. We were opening them up, oh, chips and wires, wow, and then closing them, hopefully not leaving anything outside.

Then I started coding in high school, thanks to students senior to us, because we were preparing for Olympiads, math Olympiads, et cetera. Then I did my undergrad in Turkey, at Middle East Technical University, in computer engineering with a minor in electrical engineering.

Then I fell in love with robotics and computer vision and computer graphics due to an internship I did with S Smart Robotics. Then I applied for PhD positions everywhere, and of course the U.S. has the best programs, yada yada, no advertisement. I got accepted into Purdue, so I did my master’s and PhD in computer science at Purdue, on generative models.

Believe it or not, generative models are not new. They were already there at that time. So it’s not like they were invented a month ago with ChatGPT, and ChatGPT is not even a generative model, in my opinion. But anyway, too much information in one sentence. So my PhD was on a different type of generative model, procedural models, and how we can actually extract procedural models.

From data itself. It’s called proceduralization, and it helps modelers, animators, users, editors, all of the creators, to understand the distribution of data, how it can be an interpretable representation so that you can use it in movies, games, everything, all virtual worlds. During my PhD I did an internship at Pixar, which I need to mention because it’s the most magical place I have ever been.

But then I did a postdoc at Meta. We looked at generative models, we looked at virtual reality, we looked at human understanding. Everything is coming together, right? And then there was a little startup experience; we sold it to a big company, and then I joined Intel. Whew. So much.

John Koetsier: Wow.

That is quite a journey. Wow. Impressive, overachiever much. I was gonna start somewhere else, but I have to start here. You literally laid it out there. You said that in your mind, ChatGPT is not generative AI. Talk about that.

Ilke Demir: ChatGPT, at the source of it, is GPT-3 or GPT-3.5, whatever language model. It is a large language model that is making sentences in a conversational manner, but it is not actually generative AI in the sense that it is creating something new. Making new sentences based on a large language model may be a generative AI application, or maybe a generative application based on a large language model. But I would not say it is generative AI in the sense that we understand generative models, because generative models are understanding the distribution of data and creating new samples from that data.

In terms of ChatGPT, maybe it is too token-based, in that it is just rearranging the words in a way that makes sense. Sometimes it doesn’t make sense, sometimes it’s not true or fact-checked, and those are other aspects. But in the correct terms, it is an application based on a large language model.

John Koetsier: That’s super interesting, actually, and I want to dive into generative AI and how you’ve worked with it over the years, which seems like it started right at the beginning of your educational career. But what you’re basically saying there is that it’s not creating new things. It’s not really generating something new. It’s rearranging words in a way that usually makes some sense, is sometimes wrong, and it doesn’t know what it’s doing. It’s just doing it. Is that correct?

Ilke Demir: Exactly. If we need a better word, you can say it is conversational AI, which is a better term: it’s trying to make a conversation, without fact checking and so on, of course. But in the traditional sense of generative models, I would not say it’s a generative model.

John Koetsier: So you’ve been working with generative AI for a long time. You’ve been working on it since your university career. Talk about that journey and where you see it trending.

Ilke Demir: Yeah, of course. So if you look at the history of generative models, you can go back even to 1972, Stiny et al.; they had turtle grammars.

I don’t know whether you’ve heard about it. Imagine you have a turtle, and it is walking, and you have three commands to give it. It’s like programming, like doing something, but it actually ties to generative models in a way. The turtle can go straight, the turtle can turn left, the turtle can turn right; those are the three commands. And based on what you say, it actually makes a shape, right? Left, right, straight, left, straight, et cetera. This is the base of shape grammars, which is another kind of generative model, for creating shapes. And that is, of course, the 1970s, right?
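The turtle idea above can be sketched in a few lines of Python. This is a hypothetical toy interpreter, not any specific 1970s system; the command letters and the unit grid are assumptions made for illustration only.

```python
# Toy "turtle grammar": three commands drive a turtle on a grid.
# S = step straight, L = turn left 90 degrees, R = turn right 90 degrees.
# A command string generates a shape -- the essence of a procedural
# (generative) model: rules in, geometry out.

def run_turtle(commands, start=(0, 0)):
    # Headings cycle through east, north, west, south.
    headings = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    x, y = start
    h = 0  # start facing east
    path = [(x, y)]
    for c in commands:
        if c == "L":
            h = (h + 1) % 4  # rotate counterclockwise
        elif c == "R":
            h = (h - 1) % 4  # rotate clockwise
        elif c == "S":
            dx, dy = headings[h]
            x, y = x + dx, y + dy
            path.append((x, y))
    return path

# Four steps with left turns between them trace a closed unit square:
# (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0)
square = run_turtle("SLSLSLS")
```

Change the command string and a different shape falls out, which is exactly the "rules generate content" point being made.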

That’s very old. And from there, all those automatic processing, automatic content creation, automatic synthesis models appeared, in terms of procedural models, or syntax-based models, or shape grammars, or L-systems. These are all traditional generative models that people have been using for years and years.

All those beautiful Marvel movies, or all the games, EA games, et cetera: they have been using procedural models for content creation for so long. Now what is coming is the deep aspect. Those models were based on handcrafted features, handcrafted rules, handcrafted grammars; and in terms of proceduralization, which was my PhD, we were actually extracting those rules from the data itself.

So that was a machine learning approach, not a deep learning approach, but a machine learning approach to extract those rules of the world from the existing data. Maybe giving a very simple example: assume you have a building, right? Buildings may have parameters, like how many floors, how many windows, what kind of windows, right?

All of these are parameters of the system that is creating that building. Now, if you just have the output of that system, can you actually infer those parameters, those rules, from the data itself? So that you can edit everything: you can pull, push, do anything with the data, and then you have the output that you want.

In 3D, in text, maybe; I don’t know. That is the background on generative models. Now the deep part is coming. So instead of those handcrafted features, handwritten grammars, or machine learning approaches based on clustering or shape understanding or segmentation, can we do it in a deep space? That is what is changing with generative AI.
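To make the inverse-problem idea concrete, here is a deliberately tiny Python sketch. The facade generator and its two parameters are invented for illustration; real proceduralization works on far richer data (meshes, images) with clustering and segmentation, as described above.

```python
# Sketch of "proceduralization": recover the parameters of a simple
# procedural model from its output alone.

def generate_facade(floors, windows_per_floor):
    # Forward procedural model: a 1 marks a window on a grid.
    return [[1] * windows_per_floor for _ in range(floors)]

def infer_parameters(facade):
    # Inverse problem: read the generator's parameters back
    # from the data itself.
    floors = len(facade)
    windows_per_floor = len(facade[0]) if facade else 0
    return {"floors": floors, "windows_per_floor": windows_per_floor}

params = infer_parameters(generate_facade(5, 3))

# With recovered parameters the output becomes editable: regenerate
# with one more floor, and the windows replicate consistently.
taller = generate_facade(params["floors"] + 1, params["windows_per_floor"])
```

The payoff is the last line: once the rules are recovered, you can pull and push the parameters instead of editing pixels or polygons by hand.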

John Koetsier: So generative AI is just like every other technology. It’s been going on for years, sometimes decades, and then boom, something catches the public’s eye and whoa, there’s a new thing. And it’s not really new, but it’s a new expression, at least. Talk about your feelings around generative AI, because you said off the top, I hope there are some really positive ways of using generative AI. And of course there are some negatives. I’ve certainly heard from some artists, very vocally, that many of the Midjourneys, the Stable Diffusions, those sorts of things, are using their art to create new stuff, and they’re worried about that. Others are worried about other aspects. Talk about generative AI and how you see it being used. Is it going to be a net positive, or are there some negative aspects?

Ilke Demir: With every technology come positives and negatives, and I think the way we are hyping it is incorrect. But running away from it as if it is a danger is also incorrect. These tools can exist as long as we find ways to responsibly develop and responsibly deploy them. So if we just forget about the dataset, forget about attribution, forget about artist styles, et cetera, and just say that we have this awesome approach that can generate everything...

Then of course there will be side effects after the fact. Unfortunately: oh, it is infringing styles, infringing copyright; you can see logos or images from other places, without giving names.

In my opinion, all of these aspects should be design choices at the beginning of the process, not after the fact: oh my God, we did this, let’s correct this. We need to put this responsibility pillar in at the beginning. And not only at the beginning; it should influence how you design the approach, instead of how you patch the approach or how you close the holes afterwards.

If you make a ship that is not floating at the beginning, however you try to patch it, it’ll take water. So that’s like the dataset: if the dataset is leaking something, you cannot fix that; or if the dataset has bias, you cannot fix that after the fact. Anyway, comparing it with the traditional... maybe my answers are too long, but forgive me, I have so much to say, probably.

John Koetsier: No, this is fine. This is super interesting. We’re having a conversation. It’s real, it’s natural. It’s all good. Keep going.

Ilke Demir: Thank you. Comparing it with traditional generative models, I think the aspect we are lacking a lot is control. I keep giving examples from procedural models because I think they are very aligned with how humans think in the creative space.

In procedural models with some interactive components, when you pull something, when you pull a wall with windows on it, the windows replicate. It doesn’t suddenly become an elephant, or it doesn’t suddenly become a car. I’m giving extreme examples, but this is how the generation is currently done.

In generative AI, in the latent space, you walk along a latent vector in that manifold, and suddenly the face of a beautiful white lady becomes a young child with eyeglasses, for example. And that is probably not intended. It just happens if you go in that direction, because that is the manifold.

So there’s no control, and there is no interpretability on that manifold for the artist to find their way. If there were knobs and parameters... and there are some, I’m not saying there are none, because those giant models are getting better. So we can actually change lighting, we can change hairstyles, we can, I’m talking about images, we can change gender, et cetera. But they are not changing the structure.

John Koetsier: It’s really hard, actually, because I know some people who are doing a lot of generative AI around images. They can spend two hours crafting the right input, the right prompt, to get what they want, and then it’s not even perfect, and it’s not predictable either.

So that is really interesting. It’s also interesting what you brought up around datasets, right? And avoiding copyright infringement and things like that. It’s interesting to me because I know artists who are saying, hey, my art has been used there and I’m not okay with that. It’s funny, because if I look at OpenAI and what they’ve done with ChatGPT: I’ve probably written, I think I calculated one time, about 20 million words in my lifetime, in various places.

Lots of places, almost all of them out on the internet, so guaranteed OpenAI has used some of my words to do some of what they’re doing. I’m not worried about it. I’ve learned from others, I’ve read from others, and everything that I generate in the wetware, not the chips, is influenced and shaded by that. So I’m not too worried about that, but it is an issue. The funny thing, though, is that often innovation happens at the edges, right? You see a project like Midjourney or Stable Diffusion, and it happens at the edge, and it’s not a full-on company. It’s more like an organization, and they’re just proceeding ahead, damn the torpedoes.

We’re going full speed and we’re just doing whatever we can. Of course, a company like Intel can’t do that. A company like Microsoft, like Google, can’t do that. A company that has something to lose from a lawsuit can’t do that. And a company that wants to use that via API for their products also has to be worried, cuz they have legal exposure. Correct?

Ilke Demir: Yes. All of those are so many good topics. I dunno which one to start from. 

John Koetsier: There’s so many. Pick one. 

Ilke Demir: In terms of big companies not being able to use it, or maybe develop it directly: I agree that some of the datasets we are using, some of the datasets we want to use but cannot, et cetera, are actually more suitable for research, in maybe more academic organizations, for those of us that do research and can actually be experimental in that research. Big companies can also do that. The problem comes when it becomes a product. If you have an experimental research POC that is super nice, that produces super nice outputs, and everything is going well, et cetera, that still doesn’t make it a beautiful product. And that still doesn’t mean it will be deployed for the right audience, with the right requirements, et cetera.

So I think another layer of why such big companies are not, I don’t wanna say cannot do that, but are more cautious about doing that, is to find the right audience and deployment and productization path. Because we cannot be as experimental as pure research is.

John Koetsier: At least in terms of what you release.

Ilke Demir: Yes, exactly, exactly. Research is all about experiments. Even if you are just very curious about some little thing that is producing an output that was not intended, but is a super nice discovery, that is fine in the research domain. But when we are taking it out of research, then there are so many other layers that we need to talk about and think about.

John Koetsier: That’s a great segue, because Intel is doing a number of interesting things in generative AI. Talk about some of the projects you’re working on.


Ilke Demir: Of course. I lead a team called Trusted Media, and it is basically about creating a trustworthy digital future for all of our digital personas, in 2D, 3D, text, speech, whatever you can think of as media content. In Trusted Media we focus on three things. One of them is manipulated content detection, because there has been so much fake data, synthetic data, out there.

Most of it comes from unreliable sources, used with bad intent. Especially deepfakes: they have been used for misinformation, for adult content, all those evil cases, I call them evil cases. So how can we actually start the battle with this synthetic content from the beginning, with deepfake detection, manipulated content detection, et cetera?

The second pillar is the next step, right? How can we enable responsible generation, responsible generative AI, so that we don’t block every piece of synthetic content, but we enable creators in responsible ways? If something is needed, maybe for data augmentation, maybe for deepfakes for good content, maybe for privacy enhancement, for anonymization, et cetera, these can actually be done responsibly, and when done responsibly, they are actually changing lives. I will give some examples. Then the last one is media provenance. Okay, we found bad content, we enabled generation of good content; how can we attribute that good content to the original creators?

Maybe it’s the datasets being known: for any output of generative AI, how can we actually tie it back to the dataset? How can we tie it back to the original creator, tie it back to the generative model that created that content? Was it done with consent? Is there an edit history, et cetera? So all that provenance information: how can we embed all of it into the data itself, so that whoever is using the data will know it?

Oh, okay: this is X’s model, it was trained on this dataset, it was done with consent, the model was evaluated for bias, et cetera. All of that information comes with the output. Then you have the output and, okay, this is a good one; we know who did it. It’s also important for the transparency and accountability pillars of responsible AI, right? Because you know how it was done, and if there’s a problem with it, you know who to go to. Now, if I see a deepfake around, I have no idea who created it, how it was created, who is in this photo or video, et cetera. So these are the three pillars we are working on in Trusted Media, and we have many different projects in all three thrusts.

John Koetsier: So many questions arise from that. How do you do that? Do you have to catalog everything? Is it a DRM-like system, in order to say, okay, this was created by so-and-so? How’s the metadata encoded? Is the metadata encoded in a way that survives manipulation? So many questions that opens up.

We can’t address them all right now. But you said you were gonna give some examples of what you’re doing in a very positive way. Maybe go there first.

Ilke Demir: Of course, of course. And I’m taking note of all the questions you just asked, so that I can add them to my research agenda.

So, okay. Just like the DRM you mentioned: when content was in danger intellectually, with piracy et cetera, Intel actually provided content protection approaches, around watermarks, using all the security and privacy principles.

And it was actually solved by those initiatives. Now, similar to content protection, content integrity is at risk, with deepfakes and all that synthetic media, and Intel tries to lead there too, around standards and how we do provenance. There is the Coalition for Content Provenance and Authenticity,

C2PA, and C2PA brings together the brightest and best minds of many companies: Intel, Adobe, Microsoft, the BBC, and many different industry leaders are coming together there to provide open standards for media provenance. How can we standardize all of this so that all the outputs, all the actually captured data, all the data in different modalities like 2D, 3D, speech, et cetera, are protected and can be authenticated via C2PA?

There are many different ways to create those open technical standards, or to preserve all that provenance information. One way they propose is through blockchain, and there are other approaches in addition to that. But what we want to do is: can we use the generative model itself to embed that information, right? Can we actually change the generative network? If we go to the very basic generative network, generative adversarial networks, right, GANs: instead of having a generator and a discriminator network trying to play a game with each other, can we integrate a third network into that game, which is an authenticator network?

So it’s a generator, an authenticator, and a discriminator, all trying to create an output which is different from the traditional GAN output in that it is actually authenticated. So the metadata is embedded in the output itself: either a token, or the provenance information itself, can be integrated.
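To illustrate the shape of that three-player idea, here is a deliberately stubbed-out Python sketch. The "networks" are plain functions and the provenance token is made up; a real system would jointly train three neural networks, which this does not attempt.

```python
# Toy three-player setup: a generator that embeds a provenance token,
# an authenticator that verifies it, and a discriminator stand-in.
# All three are hypothetical stubs, not trained networks -- the point
# is only the extra player added to the usual generator/discriminator
# game.

PROVENANCE_TOKEN = "model=demo;consent=yes"  # hypothetical metadata

def generator(latent):
    # Output carries the provenance token embedded alongside content.
    content = f"sample-{latent}"
    return {"content": content, "token": PROVENANCE_TOKEN}

def authenticator(sample):
    # The third player's role: does the output carry valid provenance?
    return sample.get("token") == PROVENANCE_TOKEN

def discriminator(sample):
    # The traditional player's role: is the content plausible? (stub)
    return sample["content"].startswith("sample-")

out = generator(42)
accepted = discriminator(out) and authenticator(out)
```

In training, the authenticator’s loss would push the generator to always embed verifiable provenance, while the discriminator keeps pushing for realism; the sketch only shows the roles, not the optimization.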

Similar to that, we can also exploit the generators: we can inject some layers into them for authentication, or for marking the output, et cetera. These are all the things we are working on, on the provenance side.

John Koetsier: It seems so hard. It seems like an impossible task, and I’m sure you understand it much more deeply than I do, but I could see it working for corporate content.

Disney creates something, the BBC creates something, CNN creates something, Fox News creates something; I could see them embedding it. But so much of the world’s media happens in the hands of everyday people: they create something, they shoot something, they share something. And is it true? Is it real? There are markers and detectors for spotting what might be fake. But how would some metadata get encoded with anything that I share, saying, okay, it was created here, by whom, at such a time, in these conditions, and still enable privacy, which is then another layer on top? This all sounds really challenging and really deep. If you have the solution for it, great, that’s wonderful. I’m sure it’s really challenging, but I don’t see it.

Ilke Demir: There is no one-size-fits-all solution, to start with. So one way to go is, just like you said, big companies embedding it into their workflows, so that their content is actually more protected and known and attributed, et cetera.

But another way to go is to armor the tools that everyday users are using, so that whatever tool they are using, that provenance aspect comes along directly. Just assume that the main Stable Diffusion API you are using actually has that provenance piece. Then whoever on the internet is using it needs to attribute it, needs to own it, needs to say, okay, I created that. Don’t shoot me; I don’t know why we’d shoot, but...

John Koetsier: I get it.

Ilke Demir: Yeah. But again, that comes back to why we have Trusted Media. If we had provenance, that magical provenance integrated into all software for capture, for creation, for big companies, et cetera, then we wouldn’t be doing detection, right?

So that’s why the short-term solution comes from detection of such manipulated content, and then, as provenance becomes an everyday thing for everyone, we can actually also use provenance information.

John Koetsier: We’ll have to move on, cuz I want to talk about some of the other projects you’re working on, but that is an interesting world we’re looking at.

Just think about media literacy classes in 2031: how will you know whether to trust the video you’re watching? A month or so ago, I saw a tweet with an Elon Musk video. Speaking, it sounded like Elon Musk, it looked like Elon Musk. It wasn’t perfect, the lips and everything weren’t perfect, but he was promoting some crypto altcoin and saying, yeah, this will be amazing and incredible and you should buy in, and all this.

And of course, complete deepfake and nonsense. But it’ll be interesting to see how we develop as a culture, as a society, as a world, in a world where that can be created. Let’s talk about some of the projects you’re working on. You’re working on some cool projects, including giving speech to those who don’t have it.


Ilke Demir: Just one comment on the previous thing; I can’t let it go. Now that you say deepfake, I need to say something. I think I saw that Elon Musk video, and we actually ran our real-time deepfake detection platform on it, which is actually real time: it’s not just given one video and gives one answer, it actually processes it frame by frame. I think it was a hybrid video. So for the frames that were fake, it was saying fake with high confidence, like 92 to 98% confidence: okay, this is fake. Then, when the other person appeared, it was like, oh, okay, this is real; real, fake, real, with the confidence.

We are quite confident in FakeCatcher catching all of those cases. Okay, going back to the voice synthesis. I said deepfakes, or synthetic content for good, are actually impactful when they are done with humans at the center, and when they are done with consent.

So there is this project called “I Will Always Be Me.” It’s in the link. It’s for those that lost their voices due to illness, or are about to lose their voices due to some vocal affliction, et cetera. With their consent, their recordings and voices, from different places, either they can record them or they can use their existing data, and that can become a synthetic voice for them. Through assistive devices, when they lose their voice, they can actually use their own voice to speak again; not through their mouth, but through assistive devices.

And of course, nobody’s claiming that this is as good as using your own voice, but instead of using some robotic voice to communicate, they can still be a little bit themselves, closer to themselves, maybe. So that is one approach that we are taking at Intel.

John Koetsier: That’s really interesting. And it reminds me: wasn’t Intel the company that gave Stephen Hawking a voice? I actually interviewed another Intel research scientist about that, and it was super interesting how she used predictive AI to help him massively speed up: not typing the letters, but first predicting letters, then predicting whole words, then predicting phrases, speeding up how he could speak by orders of magnitude over time. This is super interesting technology for people who are gonna lose their voice. It’s awful. So, very cool.

John Koetsier: Anything else?

Ilke Demir: Yes. We have My Face, My Choice. I know, dramatic title.

John Koetsier: Yes, very dramatic. Wow.

Ilke Demir: I am the name’s mother; blame me. So: we have many photos and videos online, and do we want to be in all of them? I don’t think so. Some of them come from surveillance. Some of them come from friends that we don’t wanna be friends with.

John Koetsier: Some come from our spouses, who took a bad photo of us and shared it on social media without consent.

Ilke Demir: Exactly, exactly. Some of them, you are just walking in a bar, you are in the background, and you don’t wanna be there. And for all of them, there are face crawlers that are collecting all of those faces from the internet, from social media; maybe Clearview AI, maybe social media platforms storing two billion face embeddings, et cetera.

Whether we want it or not, our faces live there, and they are probably associated with our names. We don’t know, because we don’t even know which photos exist or where they live. So My Face, My Choice brings contextual integrity to those social media photos: if you don’t want to be in a photo, you are not in the photo; you are replaced with a quantitatively dissimilar deepfake.

So assume that we have the space of all faces. We find the furthest one from your face, which is very dissimilar to your face, and we swap your face with that face. The expression, the movement, the head pose, the gaze, everything stays the same, but the identity is not there.

Then, when those face crawlers try to associate it with your name, they fail, and their search space explodes. So instead of comparing two billion face embeddings, now they are comparing exponentially many more embeddings. The faces that are similar to your face are increasing, and it’s no longer you; there’s no clear boundary between you and the closest face to your face.
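The “quantitatively dissimilar” selection step can be sketched numerically in Python. The 3-D vectors and donor names below are invented for illustration; a real pipeline would use embeddings from a face-recognition network plus a face-swapping model, neither of which appears here.

```python
# Sketch of "furthest face" selection: given a target face embedding
# and a pool of candidate donor faces, pick the one furthest away in
# embedding space, so the swapped-in identity leaks the least.
import math

def distance(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_dissimilar(target, pool):
    # The donor whose embedding is furthest from the target's.
    return max(pool, key=lambda name: distance(target, pool[name]))

target = (0.9, 0.1, 0.0)
pool = {
    "donor_a": (0.8, 0.2, 0.1),    # close to the target
    "donor_b": (-0.9, -0.1, 0.0),  # nearly opposite
    "donor_c": (0.0, 1.0, 0.0),
}
chosen = most_dissimilar(target, pool)  # picks "donor_b"
```

The expression, pose, and gaze would be carried over by the swapping model; only this identity-selection step is shown.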

John Koetsier: Interesting concept. Very interesting concept. How do you operationalize it? Because I have no control over the security camera that captured my face; I don’t even know that I was captured. But even if I did know, I have no way of saying, hey, take that video, run it through this software.

How do I do that? 

Ilke Demir: So there are three, based on access levels and access models. The first one is that anytime a photo is about to be shared, and when I say shared, you can think of it as a social media platform, or uploading to some platform, when it is going public...

Yes, the first choice says that everyone is deepfaked. No one, no face at all. And that’s the easy case, right? You take the face, you find a new face.

John Koetsier: Here’s my team, we’re out for dinner, and nobody’s recognizable. Yes, I got it.

Ilke Demir: The second one is that the uploader chooses. And this is again a fairly easy concept: okay, I want this face to stay as is, and these faces we don’t care about, they can be deepfaked. That’s mostly for background faces, or non-friends, et cetera. The third one is similar to what’s currently in social media, right? There’s tag cooperation. If someone uploads a photo of you, it comes to you and you tag, or tag yourself.

If you untag yourself, it is still there, but it’s not associated with your name visibly. It may be associated with your name invisibly, right? In this case, when you untag yourself, or when you say, okay, I don’t want it, at publish time your face is swapped with a deepfake. Now, when you scale it to all the photos on the internet, we hope that the photo ingestion platforms, maybe, I don’t know,

Twitter or Instagram or whatever big platform you can think of, will hopefully adopt this, and this contextual integrity will come to the platforms. If there’s no pressure for these platforms to bring contextual integrity, it’ll stay like that forever. And I think current legislation is also going toward that, like GDPR, et cetera.

I don’t wanna go into that, it’s not my domain, but they are forcing all these platforms to actually not collect biometric data. And faces are our digital passports. So if we cannot protect our digital passports, what are we protecting? I’m not saying this just for My Face, My Choice specifically, but such measures should be enforced for the protection of our digital passports.
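The three access levels Demir walks through can be sketched as a simple publish-time policy. Everything here is illustrative: the enum names and the `deepfake_swap` helper are hypothetical stand-ins, not the actual system's API:

```python
from enum import Enum, auto

class Policy(Enum):
    DEEPFAKE_ALL = auto()    # option 1: every face is swapped before publishing
    UPLOADER_PICKS = auto()  # option 2: the uploader flags which faces stay real
    SUBJECT_OPTS = auto()    # option 3: tagged subjects can untag themselves

def deepfake_swap(face_id: str) -> str:
    """Hypothetical stand-in for swapping a face with a synthetic one."""
    return f"synthetic<{face_id}>"

def publish(faces, policy, keep_real=(), untagged=()):
    """Return the face list as it would appear after publish-time processing."""
    out = []
    for f in faces:
        if policy is Policy.DEEPFAKE_ALL:
            out.append(deepfake_swap(f))
        elif policy is Policy.UPLOADER_PICKS:
            out.append(f if f in keep_real else deepfake_swap(f))
        else:  # Policy.SUBJECT_OPTS: swap only faces whose subject untagged
            out.append(deepfake_swap(f) if f in untagged else f)
    return out

print(publish(["alice", "bob"], Policy.DEEPFAKE_ALL))
# every face swapped: ['synthetic<alice>', 'synthetic<bob>']
```

The key design point from the interview is that the swap happens at publish time, so the real face never reaches the platform for the untagged or deepfake-all cases.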

John Koetsier: What an insane, crazy world we are creating. Last week my wife and I were in Vegas; we went for a hike in Valley of Fire State Park. We took tons of photos, and in some of those photos there are other people. In those cases, their face, if we want to think about it that way, and it’s kind of weird even to think about it,

It’s their property, right? It’s their identity. It’s not our identity, but I own the photo that I took, I think. And who owns that? So let’s say I upload ’em to Facebook, and Facebook subscribes to this service, and those faces happen to be in a very private database, one that would never, ever be cracked or hacked in any way, that has all the faces of all the people in the world.

And those people get notified: you’re in this picture. And they say, no, take me out. And boom, it gets replaced with somebody. And at some level, I don’t really care; here’s random person number 5,000,000,033, and it’s replaced with some other random person, because you’ve expanded the namespace, you’ve expanded the data space of available faces.

So although I don’t care, at some level some of my property has theoretically been altered without my consent, although I probably consented when I uploaded it to the social network in the first place. So many challenges to think about, and so many different organizations that would have to subscribe to this level of technology.

Fascinating stuff. We have to bring this to some kind of end. It is very cool. I do wanna ask you two more questions briefly. One is: generative AI is massively computationally expensive. As we were talking about in the pre-chat, I did a story recently on Forbes: if Google were to answer all queries at the level of ChatGPT, that could be 30 to a hundred billion dollars in CapEx.

It would have to spend that on Nvidia GPUs or other processors to make it happen. Is there a way to make it computationally less expensive? I know some efforts are around neuromorphic computing, and I know Intel has some work there as well. Any thoughts?

Ilke Demir: I think neuromorphic computing is getting there, and Intel is releasing some libraries for public use.

But in terms of deep networks and deep learning, I think there are still some unknowns that we need to do more research on to actually bring those different operations to neuromorphic computing. But one step back, before we are there: I think most of those can actually be offloaded to CPU and run on CPU for inference.

So Intel has all those AI optimization libraries, like Intel Deep Learning Boost (VNNI), Intel Advanced Matrix Extensions (AMX) for matrix multiplication, et cetera. And all of these are actually enabling much, much faster inference times. I mentioned real-time deepfake detection; that is the most concrete use case. Before, it was offline, and we were waiting for frames to be processed and inferred on, et cetera.

Now it is real time thanks to all of these Intel AI optimizations. It’s running in real time on Zoom, and it’s actually running 72 concurrent detection streams at the same time. So you have just one machine running 72 of them at the same time. So I think that is at least decreasing the computational costs a little bit.
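The speedups Demir attributes to VNNI and AMX come largely from running inference in low precision: weights and activations are quantized to int8 and accumulated in int32, which those instructions accelerate in hardware. A rough NumPy illustration of the idea; the quantization scheme here is deliberately simplified compared to what production libraries actually do:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 256)).astype(np.float32)   # activations
w = rng.normal(size=(256, 128)).astype(np.float32)  # weights

def quantize(t):
    """Symmetric per-tensor quantization to int8: value ~ scale * int8."""
    scale = np.abs(t).max() / 127.0
    return np.round(t / scale).astype(np.int8), scale

xq, xs = quantize(x)
wq, ws = quantize(w)

# int8 matmul accumulated in int32 (the pattern VNNI/AMX instructions speed up),
# then rescaled back to float.
acc = xq.astype(np.int32) @ wq.astype(np.int32)
approx = acc.astype(np.float32) * (xs * ws)

exact = x @ w
rel_err = np.abs(approx - exact).max() / np.abs(exact).max()
print(f"max relative error: {rel_err:.3%}")  # small accuracy loss, large hardware speed win
```

The trade illustrated here is the one behind the interview's claim: int8 arithmetic loses a little accuracy but lets the hardware process many more operations per cycle than float32.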

And in terms of generative AI, too, we need to bring those AI optimizations to those other models that are doing maybe a little bit different operations, or maybe implement hardware-optimized versions of diffusion, who knows? All of these are possible ways that we can reduce those computational costs.

John Koetsier: Absolutely. And people are designing their computer chips these days with AI-specific cores that are really tightly and closely connected. Makes a ton of sense. Okay, let’s land the plane here. Impossible question, but I’m asking somebody whose life is in this space: what does AI look like in three to five years?


Ilke Demir: Some people say that those hypes come and go, and just before generative AI we had the metaverse. But I think both our future and our AI future are looking much more 3D, from my perspective. How can we bring all of our experiences to 3D, with AI of course?

And now we are having this talk over video, but I want …

John Koetsier: look, this is stolen. Exactly.

Ilke Demir: I want to walk around and talk with you, not be bounded by this little frame around me. And …

John Koetsier: You want the holodeck. Exactly.

Ilke Demir: I think “Holodeck” is already used; another company is using Holodeck, so I don’t wanna use that word. Yes. Okay. No, I’m kidding. I know it’s probably copyrighted to Star Trek.

So anyway. Yes, I dunno. Okay, cool. And I think all of these generative AI techniques, maybe diffusion models, or text-to-3D, or text-to-3D video, or procedural generation, or deep procedural generation in 3D, all of them will actually fuel that content creation. Because one of the downsides, I think, of the metaverse not taking off as fast as people wanted is that the 3D content and the 3D content generation tools were not there.

They exist, but they were not there enough that everyone started creating content. Just compare how many videos everyone is creating on TikTok, Instagram, et cetera, versus how many people are creating 3D content for the metaverse. The tools are not that accessible and attainable, et cetera.

So I really hope that our AI future will actually empower all of that 3D world, virtual and augmented.

John Koetsier: You know what you just made me think of? We redo this interview, and I’ll say in 10 years, because there are some pieces that are hard to replicate, and you’ll see them as I talk about it. We redo this in a decade.

And my systems, my processes, wherever they live, whether they’re in the cloud, whether they’re local, whether they’re in my hand, whether they’re on my desk, are running generative AI, producing an image of myself that is mapped in real time to some of my motions, my lips and eyes and stuff like that.

Your systems are doing that too. They’re interfacing somewhere. I’m seeing that through my smart glasses, and I’m seeing you in them. So I’m seeing you in the environment we’ve picked: maybe we’re doing it on Mars, right? Maybe we’re doing it in the seas of Ganymede, a mile beneath the surface of the ice.

Who knows? We’re doing it in the South Pacific and it’s wonderful and warm and beautiful, whatever. And we’re seeing all that stuff. I think that is a likely future. The challenges are probably not so much on the generative side, although those exist. There are some challenges on the capture side, although those are largely fixed. But there are real challenges on the display side: making that really work in a form that is portable,

Light and powerful, which all fight against each other. But this has been fascinating. I’ve really enjoyed it. Thank you so much for your time.

Ilke Demir: Thanks a lot. For the capture, I need to name-drop: look at Intel Studios and how we created 3D worlds for AR/VR content. And for the small capture devices,

Check out Beyond Lolly; friends are working there, and they created a super customizable, lightweight little device that you can use. And yeah, I love the future that you just envisioned. I hope my microphone will work at that time.

John Koetsier: Have yourself a great day. Thank you.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: