DEI in AI: Is diversity, equity, and inclusion a solved problem in AI?


Is diversity, equity, and inclusion (DEI) in AI a solved problem?

I’ve written a lot of stories lately about AI. AI is critical to our future of automation … robots … self-driving cars … drones … and … everything: smart homes, smart factories, safety & security, environmental protection and restoration. A few years ago we heard constantly how various AI models weren’t trained on diverse populations of people, and how that created inherent bias in who they recognized, who they thought should get a loan, or who might be dangerous.

In other words, the biases in the people who create tech were manifesting in our tech.

Is that solved? Is that over?

To dive in, we’re joined by an award-winning couple: Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC and campaigns featuring celebs like Neon Deion Sanders.

Enjoy our chat with full video here, and don’t forget to subscribe to my YouTube channel …

Or, listen on the podcast

DEI in AI: solved problem?

Not a subscriber yet?

Find a platform you like and let’s get connected.

Transcript: fixing biases in AI

Note: this is AI-generated and likely to contain errors.

John Koetsier: Is equity, inclusion, and diversity in AI a solved problem? And are there new challenges with generative AI? Hello and welcome to TechFirst. My name is John Koetsier. I’ve written a ton of stories lately about AI because AI is critical to the future of automation. It’s critical to the future. Robots, self-driving cars, drones, and everything else.

John Koetsier: Smart homes, smart factories. Safety, security, environmental protection, restoration. A few years ago, we were hearing a lot about how various AI models weren’t trained on diverse populations. It created inherent bias in who they recognized, who maybe a model thought should get a loan, who might be dangerous.

 

In other words, the biases in the people who created the tech were manifested in our tech. What a shock.

 

Is that solved? Is that over? To dive in, we’re joined by an award-winning couple, Stacey Wade and Dr. Dawn Wade. They run Nimbus. It’s a creative agency with clients like KFC, featuring celebs like Neon Deion Sanders.

 

If you’re not into football because you’re into tech, he’s a Hall of Fame football player. And they’ve won multiple awards over the past few years for their agency. They’re in a unique position to see the impact of AI on everyone, not just white-ass, English-speaking tech workers. Welcome. How are you guys?

 

Hey, so pumped to have you guys, thank you for taking the time. Let me just start here. Why did you guys start digging into AI?

 

Dawn Wade: From a multifaceted perspective, um, I’m a researcher by nature. So when new things come out, I always look for gaps within that. And it was just a personal interest to me. But then, as you see how our country is moving, it always makes me wary when things aren’t created in a — and I’m going to use a wholesome way — but created by an individual. Because having a computer engineering degree, I’ve recognized you’re only as good as the data, or you’re only as good as the input that goes into a system. So it made me question: how is this being developed, and how does it know enough about me to represent someone like me?

 

So that was the first facet as to why it was very interesting to me. But the second one is we work in advertising. So it relies on a creative nature, and creative has to be very nuanced to people: their habits, what they like, what they don’t like, how they talk, where they’re from. And that’s very hard to capture on a day-to-day basis in marketing.

 

So we were very interested in how that works in something where we’re trying to resonate with a real person. How can a computer resonate emotionally, to convince you to buy something or to do something? So, two very different ways in, but it’s continued to be interesting to us.

 

John Koetsier: What have you found when you’ve looked into generative AI and how it’s created people and imagery in your space — in advertising, in marketing?

 

Has it been, um, compelling? Has it been interesting? Have you seen challenges with it?

Stacey Wade: I think definitely mixed challenges in it. I mean, um, you know, it’s similar. I think for me, when I think about AI, uh, it’s so new. I feel like we’re just barely creeping up the hill with this thing. Uh, but what stands out to me is something that was happening probably in ’21, which was —

 

and even when we started the agency — which was representation in the industry. What we’re noticing is that the representation in those images, you know, the output of those images, is very similar. There’s a confluence with what we experience in agency life, which is: are we represented in a way that is authentic to who we are?

 

And is the voice and tone also authentic? And that’s the one thing — there seemed to be some juxtaposition there, which was no. And that was something that was a little bit scary for me personally as an artist. I’m not, you know, uh, a computer engineer; I’m not Dawn. So I look at things completely differently.

 

It’s very abstract for me sometimes. So to see that shown in a way that looked very familiar to when we started the agency 21 years ago — seeing that show up in AI was a little concerning.

John Koetsier: 21 years ago? You guys don’t look old enough to have started the agency 21 years ago. Is this generative AI in action right here?

 

You’re deepfaking yourself.

 

Let’s dig into that, Stacey. Um, you talked about representation. Uh, you talked about authenticity; Dawn talked about authenticity. What were you seeing that was not adequate representation? What were you seeing that was not authentic?

Stacey Wade: Yeah, I just think tonality. I think, you know, you’re so quick — a lot of those conversations that were happening in 2021 about AI show up today in the generative experience. You’re seeing it show up in the advertising. So for us in advertising, last year it was all about, you know, the metaverse; this year it’s all about AI. And the people that are using these tools sometimes may bring those biases into the tools that they’re using, and the output of that is showing up — and the output doesn’t look like me, it doesn’t have my voice.

And I think that’s the part that we’re trying to address as an agency. Luckily enough, we have, you know, left brain and right brain on this call: somebody that understands that space and has the background to be able to put very logical, pragmatic thoughts around AI and what that looks like for us, and then somebody to take more of an artistic approach — to understand tonality and touch and feel, and to make sure that from a culture standpoint we’re showing up and not being erased or dumbed down in a way where AI is basically pulling from these inherent biases, pulling those into the images. That, I think, is something that we’re trying to aggressively, uh, have a conversation about, and aggressively be a part of that conversation. So that, the same way that we came into an industry that left us out, we can start to include our thoughts and tone and authenticity as a part of the output of AI.

 

John Koetsier: I think I’m wondering what that looks like.

 

Go ahead

 

Dawn Wade: The thing is that AI is perfect — as people, we are imperfectly perfect, right? So if you’re looking at an image that’s AI generated, all the strokes are going to be right. It’s going to be balanced. It’s going to be symmetrical, right? But a true artist that does that is going to have his signatures within that.

 

His brush, how he strokes, is going to be slightly imperfect. And I think we as people are okay with being imperfect, but AI is looking to achieve that perfection, and I think that takes away from the authenticity. So, like, that’s my layman’s terms of saying it: the nuances of that person or that artist are some of the things that somebody like Stacey is going to value, because he’s an artist. But that generated image — the eyes are going to have that slight line, and that line means something on the eye. There are just certain attributes that it’s not going to see, and a real person looking for the beauty in something is going to find them missing in the AI-generated content.

 

John Koetsier: So Dawn, generative AI’s gift to you from Midjourney and Stable Diffusion is people’s hands.

 

They’re getting better, but they’re not great. But I know that’s a different thing than what you’re talking about there. I’m trying to dig into this thing about authenticity, because I think that many people might have that from a diversity perspective, but also from an artistic perspective, right?

 

Um, and I’m wondering, you know, do we have people sitting in, um, a high-rise in New York City, or somewhere in LA, saying “give me an urban scene” to, you know, Midjourney or Stable Diffusion, and then getting something very stereotypical? Or what’s happening?

 

Dawn Wade: That’s exactly what that is. It’s what you think you saw on TV, or saw in some magazine, or saw in some way — you think that that’s representative of this particular scene, and it’s not, until you live those experiences.

 

You can’t dictate that for the next person. And that’s where DE&I comes into the space, because there’s not a program that teaches us diversity, equity, and inclusion. These are lived experiences. And I can speak as an African American woman, but Stacey’s experiences are going to be different because he’s not an African American woman.

 

He’s a man. And my Hispanic counterpart is going to have a different experience. But this isn’t just a race situation. This is, like, when you look at the LGBTQ community, the disability community — those are nuances. They cannot be captured in AI appropriately. So, you know, when it started really ramping up in 2020, it was all about facial recognition.

 

But when it comes to other things, it goes deeper. You can’t do that adequately from an AI perspective, in a way that’s going to be representative of those communities — even amongst women, you know. But I think there’s a technical aspect of it. There’s also the community usage of what that means, and then who it’s developed for, right?

 

So if it’s developed for a consumer versus just an individual user, it’s going to have different nuances in it. So I think that we can’t address AI with a broad stroke. It has to be chiseled, in a way, if we want it to be sustainable and safe to use.

 

John Koetsier: What’s that look like then? Um, because generative AI isn’t going away.

 

Um, people are going to use it, and frankly, many of those models are open source; they’re out there. There’s a massive amount of innovation and creativity happening there, and it’s kind of a land grab. And it’s also this explosion of capability: people who could never create something like this artwork or that scene or these people or whatever are also doing it.

 

So that’s not gonna, the genie’s not going back in the bottle.

 

Dawn Wade: Absolutely not. But we can’t go with that “if you’re not first, you’re last” mentality. And that’s what a lot of the software is — a lot of the platforms want to be first to the scene. You have to get away from that mentality when it comes to AI.

 

You have to invest in the time of connecting with those that you want to target. So if it’s AI that’s targeted to a certain group or certain usage, you have to bring in people who are experienced or have those cultural nods and allow them to give you the inputs to get it right. Because the one that’s going to be long lasting and most successful is the one that’s going to get it right.

 

The one that’s first is not going to make it to the end point. So I think that’s the moment in which you need to pivot the mindset: you don’t have to be first to market, but you need to be the one lasting in the market to make it successful, because you’ll be the one who focused on getting it right versus getting it first.

 

And I don’t think many are that way at this point.

John Koetsier: Stacey, talk about what that looks like. Uh, talk about what that looks like for a brand that plans on being around for a while, plans on serving its community well, wants to connect to its community — and AI, generative AI, looks like a cheat code for boom, check, uh, done, got it.

 

Uh, there we go. Talk about how you believe that the lack of authenticity in that will impact that brand over time.

Stacey Wade: I think that’s something, even when you remove AI, that’s something that brands struggle with today. So now you’re throwing in another level of complication with AI. Because — have you ever read that book, Blink, by Malcolm Gladwell?

 

Where he speaks about, you know, you just look at it and you just know something — even though it looks like me, there’s something that’s just not quite… it’s not me. And I think that’s what — listen, just as AI is changing the landscape, let’s not get it twisted: the consumer is also changing, and they’re changing really quickly, and they’re becoming very smart, and they can see something that’s authentic.

 

They can sniff it out. They know. So as much as brands want to jump into AI, we’ve seen brands jump into AI quickly, to Dawn’s point. You see brands making this quick charge in to be first, and what we’ve noticed is that they’re getting it wrong. And now you start to see them kind of take steps back and not want to be first.

 

And now it’s becoming laggard. So now they’re trying to figure out, okay, how do we actually do this in a way that’s authentic? And I think, uh, that starts with brands actually being authentic so that they can understand the blink effect. Like, okay, I know that this is real. I know that this is not real. I know that we need to make this as perfect as possible, but we need to also bring in those cultural nods.

 

So you need to bring in people that are able to see those nuances, able to understand tonality, able to understand, you know, the hat is actually not a Detroit Lions hat, it’s actually Fear of God. Those are very small details that don’t show up in AI, because AI would take this image and make it into Detroit Lions, but it’s not — you understand what I’m saying?

 

Tigers, not Lions — I get it, but it’s not. There is a nuance, a cultural aspect to the logo, that has to come in on the output. And a lot of what you’re seeing — you even mentioned it when you talked about urban communities — AI inherently is going to bake in some biases, what it views as an urban community. But my urban community is completely different than, say, your urban community.

 

So brands are going to need — we say, you know, when in Rome, bring a Roman. They’re going to need people that really understand these cultural nods and nuances to be able to, one, protect them from themselves, and, two, add a layer of authenticity to it that actually is going to be beneficial not only to them, but to the consumer that they’re trying to reach.

 

John Koetsier: It’s pretty crazy challenging, isn’t it? Because our technology is a reflection of ourselves, and sometimes it amplifies, uh, bits of what we do and what we create, and all that stuff. And if we look at our culture over the past 30, 50 years or so — and then you look at, um, the token person of color in the TV show in the ’70s or the ’80s or whatever, and how that person was represented or how that person needed to be represented — uh, the corpus of knowledge that AI is drawing on, whether it’s that or whether it’s just remnants of that, cultural detritus that accumulates over decades, then manifests itself in my image of what somebody in Detroit who is Black and grew up there is, versus your image. It’s such an insanely complex web of everything. How can anybody get it right?

 

Stacey Wade: You’re nailing it. But I think that’s where Dawn said something that — it almost clicked like a flag in my own head.

 

It’s like, you know, it’s the ones that are taking the time to not, you know, rush down the hill, but are actually taking the time to walk and understand what’s actually happening, so that they can give you the best output. So being able to curate is similar to, you know, how we curate our own agency — being able to bring different people into the agency.

 

It’s not a Black agency. It’s not a Hispanic agency. It’s not a white agency. It’s a culture agency. It’s being able to weave in these different cultures, to be able to slow it down fast enough so that you can speed up as the technology is moving forward. So it’s not a matter of saying it’s going to go away.

 

We know it’s not going away, but we are saying that we want to offer up cultural nods and nuances to make it better.

 

Dawn Wade: But I think anything without checks and balances is dangerous. Anything that doesn’t have checks and balances is dangerous. So when it comes to AI generated content and things like that, it’s generating that, but what’s the check and balance?

 

When you get that output, do you then go and check to make sure that it’s going to resonate, or that it’s safe, or it’s not offensive? Or is it assumed, because you did it using AI, that it’s safe already? So where’s that safety check? Where’s that quality assurance that needs to take place? Those are things that I want to hear from the developers — in terms of how that’s coming to market, to have those safety checks. But time and budget often eliminate that, and that’s the reason for AI. So those are some of my watch-outs when it comes to that.

 

John Koetsier: Dawn, talk about how you see generative AI developing over the next couple of years. Um, there’s a vast group of people who now have access, whether it’s open source models or whether it’s a model you have a subscription to, uh, in Discord or on the web or in an app or something like that.

 

Millions — probably tens of millions — right now are actively building stuff with generative AI. How do you see that developing over the next couple of years? And how do you want to impact the development of that to make it something that you think is net positive globally?

 

Dawn Wade: There have to be checks and balances. Just like when you develop a new food or a new drug, there have to be checks and balances before you can, to me, uh, infiltrate the world with it — and we don’t have that there.

 

And I think that there are some government oversight groups that are being developed to do that. But part of that, you know, it takes the fun out of it. So I think, if there were 10 million solutions, right, 90 percent of them may be great, but there may be 5 or 1 percent that could really set us back.

 

And I think that’s why we need oversight, because that 1 percent could really, um, mess something up for us — whether it’s between our country and another country. When you can take somebody’s voice and their likeness and create a video that the human eye could not know is fake, what happens as we’re debating or working with countries that we don’t have the best relationship with,

 

and it looks like our president is sending a message that may not be true, or doing something? And you can fake so many things, and you only have minutes to react. So that’s what I mean about checks and balances. And as a person who loves my individualism, I don’t necessarily love the thought of having oversight at that level. But to know that

 

it could be something very dangerous, and somebody could get hurt or killed — I think that anything that crosses those boundaries, that could really hurt people, kill people, represent something in a way it’s not intended, requires that, to my understanding.

 

John Koetsier: I think, honestly, everything negative that you just wanted to avoid there is going to happen, and it’s almost inevitable.

 

I think that there are large language models out in the wild that somebody — somebody who’s a neo-Nazi — will train on that content. Um, I think there are generative AI art models out in the wild that somebody will train on very racist ways of imagining how different people look.

 

I think that you will have weaponized AI and generative AI and deepfakes globally, and I don’t know that there’s any solution. We’re going to have to invent some new technology. It’s funny — I mean, I think Elon Musk has created cultural vandalism with Twitter, um, and lots of other challenges, but he wanted to invent a new AI that will determine the meaning of reality.

 

Great! Um, whose reality? Um, but I don’t know how we’re going to avoid this semiotic catastrophe, this dissolution of meaning, this destruction of truth, because I don’t know how we can possibly escape it. I hope somebody smarter than me has a plan.

Stacey Wade: I feel like you and Dawn could get together — she reads the books on this. Like, this is like her favorite subject matter.

 

So I feel like you all could talk about this all day, but I agree. That’s the part that scares me. Uh, you know, Dawn — I kid you not — Dawn’s been reading these books that speak to this, you know, grids, you know, hostile takeovers, for a long time. It’s kind of like her thing — you know, when we go on vacation, that’s one of the things: she always picks up a book.

 

John Koetsier: This is how she relaxes.

 

Stacey Wade: Yeah, a hundred percent. But you know, it was crazy two years ago, even, when she would throw this at me. You’re kind of like, oh, it’s a nice book — and then to see this come into reality. Like some of the things that she, you know, we’ll have conversations about, to see some of them actually become real.

 

It’s almost like watching The Simpsons, you know, how they actually, like, predict the future. It’s like you’re seeing it happen in real time.

John Koetsier: Dawn, save us. You’re the technologist. What are you going to do?

 

Stacey Wade: Somebody has got to do it.

Dawn Wade: So, the thing is, if you can dream it now — it’s like, if you can dream it, you can do it, and that can be scary when people don’t have your best intentions at heart, you know. So I don’t know that anybody has a solution, because there are, like you said, thousands and millions of solutions that are open source for somebody to take hold of and customize. And 90 percent of the time it’s going to be for good, but there are going to be some of those that aren’t for good, you know. And I’m just not looking forward to that in any way.

 

John Koetsier: It’s a crazy, challenging world that we’re moving forward in. And I guess, um, I’m going back to Gandalf’s wisdom in Lord of the Rings: we have to live in the times that we’re in. We have to do the tasks that we have.

Stacey Wade: No, I’m going to have this one. I’ll do this one. So keep going.

 

John Koetsier: Do the best we can. Uh, hopefully there will be some new technological solutions that will tag, uh, when something is artificially created. And I know that we can detect it right now, but it’s going to get to a level where the human eye, as you were saying, cannot detect it. The human ear cannot detect it.

Right? We’re going to need some solutions that tag something that is real. Even as our definition of real changes technologically, uh, it’s a crazy world we’re moving into, Stacey.

Stacey Wade: It scares me a little bit. I mean, I’m excited about it, but I’m also scared at the same time.

Dawn Wade: Yeah. When you think about, like, where we were a couple of years ago with, like, self-driving cars, you know — and I’m like, oh, that’s cool. You know, but now it really can happen. But then you see, you know, one or two really bad stories and it makes you doubt the, you know, efficiency — like, well, who tested this, or how did that happen?

So can you imagine something like this at such a humongous scale? You know, so I think that within our generation — you know, and what’s that going to be in 50 years — the landscape won’t look the way that it looks now, you know, and that’s exciting. But I think we have to have the foresight to get ahead of it.

And figure out how to set boundaries so that, you know, we don’t mess it up for ourselves or for our future generations. So somebody has to have that brain power and that oversight to do it.

John Koetsier: Yeah, and government is painfully slow in all these things. So they’re like 10 years behind. Right.

We’re going to have our generative AI “Senator, we sell ads” moment. It’s going to be, you know, 10 years too late, but hopefully someone will figure it out, and the companies — uh, big tech — will also do the right thing. That’s… wow. We’ll see. Who knows? I gotta say — go ahead, Stacey. We hope. We hope. We hope.

Absolutely. I gotta say, this has been fun. It’s been interesting. Um, it’s been great. I usually look at stuff, um, and I talk to people who are inventing new technology. You’re dealing with the consequences of it and using it, and, uh, talking about the impact of that is also very, very useful.

Thank you so much for taking some time out of your day. Thank you. 

Dawn Wade: Thank you so much. And I appreciate this.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: