Is AI killing creativity … or just making it easier to be average? 94% of creatives now use AI. But only 11% believe it actually makes them more creative.
So what’s really happening?
In this episode of TechFirst, John Koetsier sits down with Saeema Ahmed-Kristensen, former head of design engineering research at Imperial College London’s Dyson School and now leader of a £24M research portfolio at the University of Exeter. She’s worked with companies like Rolls-Royce and BAE Systems, and she brings data to the debate.
Her team compared ideas from 600 humans against 12,000 AI-generated ideas. The result? AI is excellent at fluency (lots of ideas) … but weak on diversity.
Humans still dominate in flexibility and true novelty.
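The fluency-versus-flexibility distinction can be made concrete. The sketch below is not the Exeter team's actual methodology, just a toy illustration: fluency is simply the count of ideas, while flexibility is approximated as the average pairwise distance between ideas, with bag-of-words cosine distance standing in for real semantic distance.

```python
# Toy illustration (not the study's method) of two creativity measures:
# - fluency: how many ideas you produce
# - flexibility: how different the ideas are from each other
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine_distance(a: str, b: str) -> float:
    """1 - cosine similarity of bag-of-words vectors (crude proxy for semantic distance)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def fluency(ideas: list[str]) -> int:
    """Fluency is just the number of ideas generated."""
    return len(ideas)

def flexibility(ideas: list[str]) -> float:
    """Mean pairwise distance: higher means the ideas diverge more from each other."""
    pairs = list(combinations(ideas, 2))
    if not pairs:
        return 0.0
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# Near-duplicate ideas score high on fluency but low on flexibility.
clustered = ["a smart water bottle", "a smart bottle for water", "a smart water flask"]
diverse = ["a smart water bottle", "edible packaging", "a rainfall subscription service"]
assert flexibility(diverse) > flexibility(clustered)
```

In practice researchers would use proper semantic embeddings rather than word overlap, but the shape of the metric is the same: a model can churn out hundreds of ideas (high fluency) that all cluster together (low flexibility).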
- Get the deepest insights concisely on the TechFirst Substack newsletter
- Subscribe to the TechFirst YouTube channel to never miss an episode
And, watch our conversation here:
Transcript: AI is killing creativity
Note: this is a partially AI-generated transcript. It may not be 100% correct. Check the video for exact quotations.
John Koetsier
Is AI killing creativity? Hello and welcome to TechFirst. My name is John Koetsier. It feels like we’re all using AI all of the time. Ninety-four percent of creatives say they’re using AI. I don’t know what’s happening with the other six percent. Maybe they’re lying. Only 11 percent of us, however, think that it makes us more creative.
Most say AI makes work feel soulless or empty. And guess what? Shocker. Most of us also fear replacement as AI gets better and better. Well, someone has a super interesting, noteworthy perspective on this. Her name is Professor Saeema Ahmed-Kristensen.
She led design engineering research at Imperial College London's Dyson School of Design Engineering and now oversees a 24 million pound research portfolio at the University of Exeter. She's working with advanced companies like Rolls-Royce and BAE Systems, and she says true creativity needs nourishment, not substitution.
What’s that really mean?
Welcome, Saeema Ahmed-Kristensen. How are you doing?
Saeema Ahmed-Kristensen
Very good, thank you. Thank you for having me, John. How are you?
John Koetsier
I’m super pumped to have you. It’s a critical question. It’s a question that I confront every day. I suspect you do as well. Let’s just start with the big, bad question. Is AI killing creativity?
Saeema Ahmed-Kristensen
I think for some people, yes. So that's an academic answer: sometimes, some people. But what do I mean by that? Large language models, and AI more broadly, are really good at producing a lot of ideas. That's one measure of creativity, what we call fluency: how many ideas you generate. And if you're not a particularly creative person, this is a brilliant starting point because you can produce lots of ideas. But then the second measure of creativity is producing something novel, something different, something new. And there we have a little issue, because a lot of the ideas produced by large language models or other forms of AI, particularly general AI, are very similar and grouped around similar concepts. So truly novel thinking currently sits in the domain of human beings: very creative experts.
John Koetsier
In other words, if you use AI and just accept what it offers, great, you’ve got sort of a baseline starting point. But if you want something truly unique and extraordinary, you have to go beyond. It does solve the blank page problem for a lot of us, right?
Saeema Ahmed-Kristensen
Yeah, it does. It does solve the blank page problem. But if I go back to the idea of producing things that are similar, most of the ideas are very similar. We recently did a study comparing ideas from 600 human beings with 12,000 ideas produced by large language models. And what we could see is that on diversity—how different the ideas are from each other—human beings are much better at creating ideas that are very different. So this is a pool of 600, not one person's creativity, but a pool of human beings compared with large language models.
If you think about how AI works, it’s based on the data that’s available. It may extrapolate or interpolate from that data, but can it completely make a massive jump and come up with something completely different? That is difficult right now. So yes, it gets you away from that blank bit of paper. If you’re not creative by nature, it’s a brilliant starting point. If you’re looking for incremental innovation, it’s a brilliant starting point. If you want something truly novel and innovative, it perhaps isn’t the best starting point.
John Koetsier
I find that fascinating because you looked at 600 people. You probably didn’t go find 600 of the most creative people in the world. You just grabbed 600 people. They have varying ranges of creativity, and yet within that sample, you found that they were far more divergent, far more creative than standard AI models.
Saeema Ahmed-Kristensen
Yes, that’s right. It’s a measure called flexibility—how diverse those ideas are from each other.
John Koetsier
Interesting. I want to talk about how you use AI, and I’ll start that with maybe an anecdote about how I use AI for this very purpose right now. So when I finish this podcast, I will get a transcript that AI helps create, and I’ll dump that into ChatGPT and into a custom GPT that I’ve made. And I’ll say, “Give me some title suggestions and give me an overview of what we talked about and some YouTube chapters.” It’ll give me five or six title suggestions. I almost never use one of them. Sometimes it’ll stimulate something, but I just have a different idea. It does help to kind of kickstart, and I’ll use maybe a piece of one or a different angle or something like that. How do you use AI?
Saeema Ahmed-Kristensen
Yeah, so that's as an individual. Obviously, I have a life as an academic. And I can say that when my postdocs or PhD students use AI to generate papers, I can see straight through it, and it annoys me. That's because you get a lot of fluff and not enough substance. I give it a C minus. You then have to cut through the fluff and find where the content is.
So in terms of writing, I prefer it the other way around, where you’ve written it and then say, “Okay, condense these ideas into bullets.” And as long as it’s in the hands of experts—so you’ve got someone who can evaluate it and say, “Okay, yeah, these are good points”—that’s one way in my academic life.
The other way is probably routine tasks. If I have to write a difficult email, I could write all the points, but you need to get to the point quickly and get the right tone. So again, create the content and use AI to break it down and summarize it and get to the point. But then I adjust it again. I go through those loops.
I also have been doing podcasts recently, so like you, I use it to generate the show notes and the captions. The memorable title, though, tends to be my own. The last one I used was "Lipstick on a Pig," because it captured an idea from design: if you only change the surface, you haven't changed the diversity of the idea underneath. That's probably not something AI would pick up automatically. I also use it to edit sound and audio. That's where the tools are very good.
Beyond that, in my daily life, I wouldn’t say I always use it. Sometimes after having spoken to a finance advisor, I’ll double-check it. Then it’s quite useful. So for me, in personal roles, it’s really more evaluative. I create the content, it might help edit it, and then I still go back and reflect on it.
In our research, we use it in different ways. There's the creativity aspect we've talked about, but there are others. One is thinking about how we can predict people's user experiences. These aren't general AI tools; they're tools we develop, where you have to bring models into the system. The other is the evaluation of ideas. In creative tasks, AI can get you off the blank page. It can shortcut a brainstorming session and generate 200 or 300 ideas for you if you want it to.
One of the beauties of brainstorming when humans generate ideas is you build on each other’s ideas. You see wild ideas, you maybe misinterpret something and come up with something completely different. If you’ve got those AI-generated ideas, now you’ve got a challenge: how do you evaluate them? One of the things we’ve been doing in our research with one of our postdocs and my colleague is thinking about how you bring different large language models together to evaluate those ideas. But again, this is not a general tool. We have to put these models into systems.
John Koetsier
It strikes me that how you're using AI personally is the opposite of how many people use it. Many people go to AI first, get something, and then build on it, or just take it wholesale, which in a lot of cases is a really bad idea. That may constrain our range of thought. It may also spur different ideas. Different people are different.
I love what you said about “Lipstick on a Pig” as one of your titles. I don’t think ChatGPT or Claude will ever name something that. It’ll be more corporatized, more homogenized, more sanitary.
You said in our prep that 2026 is a pivotal year for how we use AI. It feels pivotal in a lot of ways, because we just heard from the people at Anthropic that they used their own AI to largely code the next version of Claude. We also heard in OpenAI's latest notes that it's using ChatGPT to improve itself. We see tremendous advances in certain areas, certainly coding among them. Why is 2026 a pivotal year for you?
Saeema Ahmed-Kristensen
Yes, I think there’s recognition that the general AI tools currently available aren’t adapted to specific domains like design and manufacturing. What I think is going to happen—and is beginning to happen—is recognition that there needs to be some adaptation.
One direction is that interfaces will be built around the models so they can be adapted. It's not necessarily changing the large language models themselves, but using techniques such as chain-of-thought reasoning: following how you reason, modeling that on experts, and building it into systems. So we've got an opportunity to take these generalized tools and make them relevant for different domains.
We also see movement in thinking about how people interact with AI. We’ve seen Jony Ive set up his company with Sam Altman, and we hopefully will see interesting ways of interacting with AI—maybe voice or physical interfaces.
The third aspect is around the homogeneity of ideas. If you see AI being used to create images, you can often say, “Okay, I know this is AI-generated. It doesn’t look quite right.” As more output comes into the world, the public is getting wiser in understanding the difference and recognizing the sameness. I think this will start with a little bit of a backlash, which will force technology development further.
John Koetsier
I wonder which public is realizing that. I’ll give you an example—the mother example. My mother is older and not tech-savvy. She saw someone streaming a video game on Facebook Live, like a Carmageddon-type game with wild animals roaming around. She was out of her mind, thinking it was real.
She’s 91, so some grace there. But I think there’s a huge mass of people who are not sophisticated in this stuff and not able to tell the difference between AI-generated content and what is human-generated. Do you agree?
Saeema Ahmed-Kristensen
I think that’s very interesting because I haven’t touched on different types of users. What I’m thinking about particularly is commercial use of AI to generate content like commercials. We’ve seen backlash there—for example, Coca-Cola’s AI-generated holiday content.
If we go to the other end, I’m a mother and my daughters are teenagers. They can instantly say, “This is AI-generated.” So there are generational differences.
If you look at podcast branding on Spotify, many people have used Canva AI-generated branding with a mic stuck in front. It’s recognizable. It might not be if you’re new to the domain. It’s a shortcut—faster and cheaper. But when everybody starts using it, it becomes recognizable.
John Koetsier
One of my questions was: what’s the bigger risk, replacement or homogenization? The more things look the same, the more replaceable you are. I’ve said AI has made it easier than ever to be mediocre. I just hope enough of us strive for the extraordinary and can recognize it when we see it.
Saeema Ahmed-Kristensen
I think that’s right. The more similar things are, perhaps there will be greater recognition of the need not to replace human beings in certain roles.
My personal perspective on AI is positive. It’s about using AI but recognizing where to use it and where the human in the loop is necessary.
One of the dangers is that AI hardly ever says no. It doesn’t say, “I can’t produce this.” It will always give you an answer unless there are guardrails. In the hands of a novice, you don’t know whether that answer is correct.
In product design or consumables, you want creativity, but you also want technical feasibility. Large language models don't recognize that; it falls to the user's knowledge to catch it. That's where expertise becomes important.
John Koetsier
It’s super interesting to hear this conversation because I just submitted a fitness app to the App Store last night. I needed a stylist, so I described what I wanted and AI styled it. Is it the best it could ever be? Probably not, because I’m not a designer. But for me, it’s brilliant. Instantly average, which is much better than where I’d be without it.
You said without standards and boundaries, AI can hollow out creativity. What kind of standards and boundaries?
Saeema Ahmed-Kristensen
It’s hard to say exactly what it would look like. If hollowing out means things looking the same, one standard is the ability to judge and evaluate that sameness and provide feedback.
If you create a brand quickly through AI, have you got a way to evaluate how similar it is to everything else? Probably not. That ability to judge and create something more divergent isn’t built in.
There’s also the issue that large language models never say no. In the hands of a novice, you don’t know when something is incorrect. Health is a particularly dangerous example. You put in symptoms and get an AI summary. It’s one aggregated answer, not necessarily the correct one. It’s just an answer.
John Koetsier
Health care is a whole other issue. Where are you seeing the biggest challenges and opportunities?
Saeema Ahmed-Kristensen
I’ll start with healthcare. I see huge opportunity there. In design, we talk about personas—fictional characters representing user groups. In healthcare, services are often designed around limited patient experiences because it’s qualitative and time-consuming.
If you had large datasets representing many people’s experiences, you could design something more representative and inclusive. That’s a big opportunity.
In creative tasks like product or service design, the first stage is problem clarification—understanding user needs. Large language models are good at gathering that data quickly. So it’s that step before creative idea generation that’s really valuable.
Challenges include trust in information, verification, security, privacy, and inclusivity in datasets. If your dataset isn’t inclusive, products become dangerous because they exclude people.
So we need standards and boundaries that build experts back into the loop and ensure transparency.
John Koetsier
Gotcha. Super interesting. Saeema, this has been a wonderful conversation. I really appreciate your perspective and insight.
Saeema Ahmed-Kristensen
Likewise, John. Very entertaining. Thank you.
John Koetsier
Thank you so much.
Saeema Ahmed-Kristensen
Thank you.