Generative AI and podcasting: worldbuilding with story


Imagine using an entire podcast as a prompt to GPT-5, or GPT-10. What would a massively capable generative AI do with a podcast?

Create a movie?
Score a soundtrack?
Build an entire immersive world?

John du Pre Gauntt is the host of The Augmented City podcast. He’s also a journalist, analyst, founder, and storyteller who has been writing and building Burner Face for the past half-decade. Burner Face is an audio sci-fi ghost story about Seattle 100 years from now.

With a team, he created a video companion to the podcast with Midjourney that showcases just a bit of how a powerful generative AI of the future might make the world of a story, a book, or a podcast come to life.

In this episode of TechFirst, we’re going to learn how, why, and what it means for the future of podcasting and storytelling. And we’re also going to interview Beini Huang, the artist and designer on the project, as well as Keith Ancker, the audio engineer.

Watch: generative AI and podcasting

(subscribe to my YouTube channel)

Subscribe to the audio podcast: TechFirst is on all major platforms

 

Full transcript: Generative AI and podcasting: worldbuilding with story

Note: this is generated by AI (how meta) and lightly edited. Watch the video or listen to the podcast if in doubt about a word, phrase, or section.

John Koetsier
How can generative AI change podcasting and maybe storytelling?

Hello and welcome to TechFirst. My name, of course, is John Koetsier. For my most recent TechFirst on Geoship, I gave GPT-4 the transcript and had it create a summary of the podcast and a blog post based on it. So I’m using generative AI a little to help deliver parts of a podcast, but it’s small potatoes compared to what some others are doing.
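That transcript-to-blog-post step can be as simple as packaging the transcript and an editorial instruction into chat messages. The sketch below is a minimal illustration; the helper name `build_prompt` and the prompt wording are assumptions, not the exact setup used for the Geoship episode.

```python
# Minimal sketch of the transcript-to-summary step described above.
# build_prompt and its wording are illustrative assumptions; any
# chat-completion API would accept messages in this shape.

def build_prompt(transcript: str, task: str) -> list[dict]:
    """Package a podcast transcript and an editorial task as chat messages."""
    return [
        {"role": "system",
         "content": "You are an editor for a technology podcast."},
        {"role": "user",
         "content": f"{task}\n\nTranscript:\n{transcript}"},
    ]

messages = build_prompt(
    "John Koetsier: How can generative AI change podcasting? ...",
    "Write a 150-word summary, then a short blog post based on it.",
)
```

The resulting `messages` list can then be passed to whichever chat model is available; the summary and blog post come back as ordinary text to be lightly edited by a human.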

And one of those caught my eye recently. John du Pre Gauntt is the host of The Augmented City podcast. He’s also a journalist, analyst, founder, storyteller. He’s been building an audio sci-fi ghost story about Seattle, 100 years from now, for the past half decade. His LinkedIn profile says “Extensible Storytelling,” which is interesting. We’ll talk about that. He created a video companion to his blog with Midjourney and a team. We’re going to learn how, why, and what it’s accomplished. Part of the team is Keith Ancker. He’s an audio and maybe a prompt engineer as well. Welcome, John. Welcome, all.

John du Pre Gauntt
Thank you.

John Koetsier
Good to have all you guys here. John, give us a 30 second bio. You’ve done a million things in your life.

John du Pre Gauntt
Hopefully a million and one after this interview.

I’m actually OG STEAM. I have a background in information systems and also in English literature. And so, about six years ago, when Keith and I started The Augmented City podcast, we started bashing around this idea of culture and code: that we’ve now reached a stage in our organizations, and in how we use technology, where they need to be on parallel tracks.

It used to be, you know, tech invents a new capability, it gets tossed over the wall, and the rest of us are told: it’s really cool, deal with it, and tell us what broke. And on the flip side, we’re finding now that, especially when we’re talking about generative AI, it’s not quite a tool anymore. The stuff about replacing humans, I don’t think is going to happen any time soon, but at the same time, it’s not the usual thing.

So one of the main things we do at The Augmented City podcast is cover how artificial intelligence and human culture co-evolve and affect each other. And a big part of that now is how it affects storytelling. And that’s been a real drill-down point for us for the last two years on the podcast.

And then Burner Face was us having a B2B podcast about AI saying, wow, we generated all this material. So our research file is pretty well done. Let’s move it sideways and write some sci-fi and see how we can depict a plausible day in the life with of course flying cars and talking cats, you know, can’t have a good proper sci-fi without that. And then go out and start engaging audiences and especially podcasters that there’s this new opportunity, and Keith can flesh it out in more detail, but we think there’s a new storytelling opportunity that can be driven by a podcast, but it’s also including elements of a graphic novel and a film.

And so that’s why we engage with Beini Huang to say, yeah, we’re an audio podcast, but we need a really distinct and pervasive visual identity. So that’s how we got to our current state of how we’re using generative AI.

John Koetsier
And we’ll dive into that. Beini’s going to talk about what you do, how it works, and all that stuff. I’m looking forward to that and a bunch of other stuff.

Keith, before we dive into some of the other pieces, you talked a little bit about the space between video and audio. What do you mean by that? And what does that look like?

Keith Ancker
Well, it seems like, you know, when you talk about content generally, you get audio or video. And there really is kind of this liminal space between the two where nothing really exists. And we’re trying to figure out a way to move in that space because there’s so much there, and there’s so much opportunity there.

You know, we like to think of podcasts for the most part as a passive engagement. You’re listening to it while you’re doing something else, for the most part. But what happens when you want more of that story without having to commit to sitting down in front of a screen and watching it the entire time? How do you get extra pieces of content? How do you get extra information? And how do you engage with that seamlessly?

Because right now, if I listen to a podcast and I want to know more, I have to get out of my podcast app, I have to go to my browser, go to social media and find all that stuff. And so now, we’ve been working with a company called MediaStorm Platform and it allows us to wrap this content around audio. And we really are kind of looking in that in-between space of how do we engage with audio and still give more? How do we enrich the experience for the listener?

John Koetsier
It’s really interesting to hear you talk about that because we all come from our own perspectives, right? I’m a writer, right? And so for me, when I think about content, I think about writing and I think about reading. And of course, I’ve come to podcasting in the past, what, four years or something like that. And I do a video podcast. So there’s audio that a lot of people listen to. There’s video that a lot of people grab. And so there’s all these things. I actually publish my podcast in audio, in video (which of course has audio), and in text format as well. And you’ve kind of built something that mixes all these together, isn’t that correct, John?

John du Pre Gauntt
That’s right. I mean, one of the key things about this is we’re trying to reach people, depending on how deep or how shallow they want to engage with our media brand at any one time. And that’s what I love about podcasts, that you can get a very rich, even linear storytelling that can conform to the rhythms of how people actually want to live.

When you think of the attention economy, strictly speaking, the perfect use case is someone vegging in front of the wide screen. But what are they doing? They’re scrolling through their phone, even though the industry doesn’t want to talk about that and they want to price their advertising against it, and you’re far more of the expert of how that’s being priced than me.

But that’s a critical thing that we saw in podcasts: for this more heads-up 3D internet that is coming to us, where it’s just there, like electricity, audio storytelling is a really good spine for tethering and attaching all these types of enhancements. And if somebody just wants to press play on their smartphone and set it down and do their dishes, the audio story stands on its own. But if they also want to enhance that with those visual assets, we’re ready. And I think that, in future, that’s where we want to take our audiences.

John Koetsier
There’s so much to think about there because we’re talking generative AI and we know we can create text with generative AI. We know we can create still images with generative AI. We know we can create video with generative AI. We’re early in that, but we know we can do it.

And we start thinking about not just listening to a story, but inhabiting a story, perhaps with generative AI around a three-dimensional space that you can live in, spend time in almost a metaverse type thing. Beini, I want to bring you in a little bit. Talk about what you’re doing to create this enhanced storytelling experience.

Beini Huang
Well, I think it’s interesting because, in the past, if you wanted to create these different forms of media … well, I just watched the last episode of The Last of Us last night, and that was a video game that was extremely popular, a first-person game where you’re playing through these characters, which they’ve now made into a TV series that’s been very popular. And along with that TV series is the podcast, which is also very popular.

And I think it’s those different prongs that kind of contribute to this very full and immersive world that if you’re interested, like John said, depending on how much you’re interested in this show or these characters is how much you can sort of delve in and touch these different forms of media around this brand, let’s say.

So in the past, I think there’s been a lot more specialization. You had to have a specifically podcast-driven studio to produce the podcast. You had to have graphic designers; you had to have an animation studio, maybe, to create the various illustrated images or marketing assets; and then you had to have a separate studio to do the VFX and the games and all that. And I think what you see with generative AI, at least from my perspective (and I’ve been in the commercial and artistic space, let’s say, for the past 10 years), is that as a single person, I’m suddenly able to reach into all of these different arenas that used to be inhabited and dominated by studios. As a single person, I can maybe get to like 80% of what previously took these big studios to achieve. And maybe that’s quite scary for studios, but I can play such a huge role in creating these various types of media.

John Koetsier
It is amazing. And if you see the things that GPT-4 can do right now: people have literally created entire iOS apps with GPT-4. I know a person who is using GPT-4 to help create code for his drone delivery startup. Literally, right? Beini, talk about what you do to create visual, and in some cases even moving video, experiences around a story. How does that work? What does that process look like? What do you do?

Beini Huang
Well, specifically for the Burner Face project, it was really about synthesizing the material that John and Keith have already put together. You know, the podcast is like five hours of a solid story with characters and environments. So there’s been a lot of work that went into creating this world, creating the characters ahead of me coming in to then put the actual visuals to them.

So, you know, for me, I benefited a lot in this particular project from being able to be immersed in that core story. There was something of substance and something substantial there to begin with. And then basically the cool thing for this project was because it was a podcast, because it was audio driven, I had a lot of freedom in terms of visuals because there were no visuals. It was just an oral universe that I could play around with and imagine, and feel out what these scenes could actually look like visually.

So there was a lot of freedom in selecting the visuals that were going to go into each of these scenes. And from there on, starting with that base material, I really applied the standard legacy process of creating visual material, which is you start with a storyboard, you pick key images that would fit for each of the scenes that we were putting together in the trailer, and then through some back and forth with the rest of the team, selecting which visuals fit this world, which visuals did not fit this world.

And part of what I found that was fascinating to me was that a lot of times, if you’re an artist, if you’re an art director, if you’re a creative director, a lot of time is spent sketching things out. So you sketch out little thumbnail sketches, you do value studies, and all of that is kind of invisible work that you need to do to start to suggest the visual world that you want to eventually see.

John Koetsier
It’s all the stuff we see in “the making of” part of the movie.

Beini Huang
That’s right. In the making-of stuff. And it’s kind of invisible, because you don’t want to freak out the end client, who sees a rough sketch and goes, oh my god, is this what it’s going to look like?

But with generative AI, what was interesting was that that process almost could be non-existent because I could say, well, in this scene we want Seattle in 100 years very green or it’s going to look very dry and arid. And instead of going through the storyboarding process and sketching that out and figuring out the composition, you just ask something like Midjourney to generate exactly what that’s going to look like. And it gives you something that’s extremely, I mean, it’s very, very polished.

So instead of having that process of what is this going to look like, what is this going to look like? And not daring to really push too much, you know, visually because you don’t want to commit at the beginning of the process what is this visually going to look like at the end. Now you can basically get to the end at the beginning, in a way.

John Koetsier
How many prompts are, is it getting first prompt success? Are you working for half an hour on the perfect prompt that’s 30 words or 50 words long? Talk about that a little bit.

Beini Huang
Yeah, so it was serious experimentation. It’s sort of like if you land on Pinterest and you’re thinking, oh, I’ve got to buy a coffee table (which I actually do need to buy). And you’re like …

John Koetsier
AI is listening.

Beini Huang
Yeah, right. Give me a coffee table, you know? And then the first set of coffee tables you’ll probably see on Google Image or Pinterest is kind of basic, right? It’s kind of boring. You’ve seen it before. It’s just drawing from sort of a common denominator of what’s popular and what’s trendy at the moment. And it’s fine. It’s perfectly workable material.

And so it’s the case with Midjourney too. You can ask it to give you, you know, a cool cyberpunk character from 100 years in the future, right? And it’s going to give you something that looks pretty good. But at the same time, it’s pretty basic. It’s kind of general, in the way that people have seen this character a million times. And so the further prompting really is the process of: how do I take something that’s very generic and general, that people know to be beautiful but that’s very standard and not very differentiated from any other work you might have seen out there, and make it specific to this project? How do I make it more interesting? And I think that requires, I hope, something of the human eye and human vision to bring all these disparate parts to a point.

John Koetsier
John, I want to turn to you because you said something that caught my ear earlier. You talked about an antidote to automation, because one of the things we think about when we think about AI and robotics is that we’re going to be replaced by automation, right? These jobs will be replaced. Those jobs, we don’t need that anymore. Where’s the space for the human? Talk about what you meant when you said antidote to automation, because there’s a lot going on here.

John du Pre Gauntt
Well, automation has sort of been the story of the 20th century, all the way up to now. How can we use technology and processes to make the widgets, however defined, in the biggest number, at the best quality, at the lowest price? That’s been the number one value that’s been pursued. And economic value is as much a moral value as well: that’s the highest good, it is the most efficient. And I’m not belittling that at all, but that mentality is what replaced the craft mentality back in the 19th and 18th centuries, where you had craftspeople who were making the high technology of the day.

And I say this from personal experience, because my late father was a clockmaker. He used to repair antique clocks. We call them antique clocks today, but they were the high tech of the 19th century: metallurgy, mechanical engineering, the whole nine yards. And the way that you organize a craft workshop is very different from how you would organize a factory floor. And what I’m saying about the antidote to automation is that AI has come onto the equivalent of the factory floor in a similar way that electricity took over from steam. And so if we stay organized like we always did, where efficiency is the number one goal, AI is better. You know, it’s like, yeah, I’ve got highly trained people with picks and shovels. Well, the bulldozer is still going to dig the foundation of the skyscraper. You know, you won’t win that one.

So, to me, the antidote to automation is how we take this small team of three people who are not only bringing their core medium (in my case writing, Beini on visual, Keith on audio), but are also highly fluent in somebody else’s medium too. Because every craftsperson’s like that.

It’s a craft mentality. It’s like, yeah, I’ve got my main skill, but I’m also able to speak intelligently with people from other crafts. And so the main thing where I’m saying the answer to automation is: craft is not only what you make, but it’s also how you work and how you work with other people. And that’s the place where I don’t see AI really making significant inroads. It has the social skills of a mollusk. It can be a great tool, it can even be an answer machine, it can even be a generative colleague, but it’s still functionally limited in the sense of what we can get out of it as a team member.

I view that as a feature, not a bug. And so what I want to tell creative people is: yeah, if your job is to keep making beautiful color palettes or write copy for the Tri-State Fair or the new car dealership, full respect to you, but you’re going to have to work really hard just to stay still. And so one of the reasons we entered this project with Midjourney is that we wanted to get messy with the tools, not to demo what AI can do, but to ask: can we bring AI into a legit creative project, tell a full story in two minutes, and do an arc like that? Because that’s the kind of learning I want, because I’ll never learn all there is about Midjourney. I want to learn how to integrate Midjourney, or a competitor, into a team led by humans.

John Koetsier
Yeah, absolutely. And I mean, there’s an aspect there as well, where an antidote to automation is actually using the AI, as a human, and being able to do, as Beini was talking about, much more than you could traditionally do. You don’t have a whole studio, right? You don’t have a whole group of people. You’re just actually doing something by yourself that is greater and much bigger. Beini, I want to bring you back in because another thing that you guys have been talking about is human language as a programming language. I said off the top that some people are creating an entire iOS app in GPT-4, right? Talk about how you’re using human language as a programming language.

Beini Huang
Do you mean by the prompts that I’m putting into Midjourney?

John Koetsier
That’s a big way, yes.

Beini Huang
Yeah, so I’ve been using this suite of AI tools in as much of the workflow that I’ve developed working with clients as possible. And a huge part of that is, of course, the text generation: writing scripts, and creating the visuals that I used to sketch out and do value studies for in advance. So I’m specifically looking at the workflow that I have now and sort of destroying my own workflow. Because, you know, there’s no way that this stuff is going to go away.

You know, I remember when my family had the first phone that had a little camera in it. It was like a 1.4 megapixel camera, and we were taking photos of our feet, you know, the table, the keyboard.

John Koetsier
A rock!

Beini Huang
Yeah, it was amazing that it could exist in this phone. And now, fast forward, I dunno, 10 or 20 years, and everyone has photographs just everywhere. You know, there’s not enough space in the world for all of our photographs. And so I think the same is going to be true for AI and AI tools. It’s not gonna go away, right? This is just the crummy 1.4 megapixel version of the AI that we’re experiencing.

John Koetsier
Now that’s a scary thought: all the stuff that we’re seeing right now is the 1.4 megapixel camera.

Beini Huang
Yeah, right.

John du Pre Gauntt
It’s true.

Beini Huang
And there’s no world in which I think that this stuff is not just going to snowball and accelerate, much in the way that our phones are now so extraordinary. So I’m very much interested in seeing, okay, given the workflows that I’ve been working with, with commercial clients, with clients in TV, with clients in film, how do I find the places that can be automated?

And so, to come back to your question, I’ve been using ChatGPT to write scripts and to finesse scripts that I’ve written that perhaps could benefit from a pair of alien, you know, computer eyes, and to do a lot of the heavy lifting visually that I would have had to do myself or that I didn’t have the skills or the resources to do by myself. But now I do.

So I’m actively trying to look at the workflow that I currently use, the one that’s practical and realistic for me, and to figure out the places where I can make use of this tool, and in that way, I think, to expand and see where this stuff is going to go.

John Koetsier
It’s super interesting. In terms of creating a blog post with ChatGPT for my previous episode, I estimate I did it in, I want to say, 10% of the time, maybe less, and 2% of the cost of getting somebody else to do it.

I write the Forbes story myself, because most of my episodes end up on Forbes. But for my own blog, I would like to have some text along with the video. I don’t want to just plunk a video down there. I want somebody to be able to read something. I want Google to see something, right? I want the AI and Google to see something and give me some SEO credit and list me. And what do you think the efficiency gain has been for you? I estimated 10% of the time, 2% of the cost. What do you think it’s been for you, using the AI tools that you’re using?

Beini Huang
Well, currently it hasn’t made me that much more efficient, in a way, though it certainly has improved things. I’ve been using ChatGPT, like I said, to write scripts for some of the corporate work that I do. They’re fairly short-form videos that help contextualize and explain various services and products that my corporate clients have.

And so in the past, they would send me a whole packet of their documents that I would read through, digest, synthesize, and write a script from. But I thought, wait a minute, this is the perfect task for ChatGPT, because I suddenly have all this information that just needs to be reformatted into a one-minute, two-minute script. And so that’s what I did: I fed it all this information and got it to spit something out.

And in that way, you know, it was much faster than what I could do. It might have taken me four or five hours to go through that material and write something coherent, but it took it two minutes to do that. But then, I think it’s sort of a give and take, because that process is much faster now, but the stuff that it writes, again, is sort of that Pinterest, Google Image level of basic stuff. You can’t really just say, well, here’s the script, because then it’s just super basic and boring.

And so the time I saved is now put into editing: I’m sort of an editor now, and I have to go through line by line and figure out, well, is this a good story? And you still have to have, at least for now, this personality behind how you shape the work.

John Koetsier
Yeah, and I think that’s a really key thing actually, because if you wanna keep your job, or you wanna keep doing what you’re doing, you have to have a personality, an angle that is valuable, that is interesting, right?

Because, I mean, otherwise somebody can say, well, I’ll just get ChatGPT to do it, right? And you get a certain level of quality, and probably that’ll increase, and you can say write it in the style of Jack Kerouac, or write it in the style of Jack London, or somebody else, I don’t know, or Ursula K. Le Guin, or something like that, right? And that’ll probably work, but unless you have some expertise in your space, and unless you give some part of your soul and your heart to that, it’s less than it could be. So that makes a lot of sense.

John, I want you to project out a little bit. Use the crystal ball here, the ChatGPT crystal ball. Where is this going? Where do you see this evolving to? Beini said, you know, hey, this is the 1.4 megapixel version of generative AI. Where’s the 50 megapixel, the 250 megapixel version for podcasts and for storytelling and the business that you’re in?

John du Pre Gauntt
Okay, well, on the tools: I’ve been writing on Moore’s Law for decades now, and so far it has never disappointed. And one area that I like is what people are calling the middle of Moore’s Law. It’s not what the bleeding edge does, but what is the median price point for having stuff still kick ass? That’s one of the things that interests me a lot more on the tools side. But one thing, since you’ve been mentioning this 1.4 megapixel camera, just very quickly: in 2003, twenty years ago, I helped run NTT DOCOMO’s 3G demonstration lab in New York City, and we had our own little 3G cell …

John Koetsier
So exciting!

John du Pre Gauntt
Oh yeah, yeah, yeah. It was awesome. But what was also exciting was that we had the latest camera phones from Japan. And with Kodak just up the road in Rochester, they’d troop down in droves to check out these new camera phones from Japan. And their verdict was, well, it’s inferior to chemical film. Because they were judging this phone as a camera, as opposed to a camera as something to socialize with.

And that was sort of the profound miss: they were able to look at and evaluate the technology and completely miss the meaning of the technology, because they were judging it and saying, yeah, the pictures are inferior to what we can do with chemical film. Correct. But wait a few years, and more importantly, look how behavior evolves with it.

So when I look at my crystal ball, what I’m actually looking at is not so much how people will behave, because I’ve learned not to do that, but what environment they’re really going to be in. And that’s where, you know, you’ve got Satya Nadella saying that as computing becomes more deeply embedded in daily life, it becomes indistinguishable from life itself. Which sounds kind of spacey, but really what they’re saying is that Tron is real.

You know, the premise behind that movie is real, but we didn’t atomize our bodies and stick them on a chip. Instead, we took this giant hose and we’ve been spraying chips and sensors and micro motors everywhere, and now we’re just walking around inside it. And in that kind of environment of computing, that’s where I see the ability of generative AI to help us make these unique experiences at the margin, to reach people in our audiences in ways we could never have done under a classic production-oriented assembly line of a media company, where you had your art department and your music department and your production, yadda yadda yadda.

So when I look into the crystal ball, I’m looking more at, you know, computing that we inhabit, like a habitat, and then what would be the native media experience that should go with that. And I’m making a bet, and I’m glad there are people starting to join it: that podcasting is actually extremely well suited for that type of environment, where people are actually doing something as they experience media. They’re not parked in front of a screen being that consumer or that listener.
And so that media needs to conform with how people want to live. And podcasting is uniquely flexible. It’s the most intimate medium; you’re literally in someone’s ear. And it also really lends itself to AI, because once I get an audio file, I can transcribe it. And once I transcribe it, I’ve got a proto script. And once I’ve got the proto script, I can start prompting an AI engine.

And, you know, it’s like podcast content is Miracle-Gro for these types of engines, because we’re always spitting out very deeply linked concepts, because telling an audio story is tough.
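The audio-to-prompt spine described here (transcribe, then treat the transcript as prompt material) can be sketched in a few lines. Transcription itself (for example with a speech-to-text model) is out of scope below; `chunk_transcript`, `prompts_from_chunks`, and the 2,000-word budget are illustrative assumptions, not part of the actual Burner Face workflow.

```python
# Sketch of the pipeline: transcript -> model-sized chunks -> prompts.
# The chunking budget and prompt wording are assumptions for
# illustration; a real pipeline would tune both to the target model.

def chunk_transcript(text: str, max_words: int = 2000) -> list[str]:
    """Split a long transcript into chunks a model can ingest."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def prompts_from_chunks(chunks: list[str]) -> list[str]:
    """Turn each transcript chunk into a worldbuilding prompt."""
    return [
        "From this podcast excerpt, describe the world it implies, "
        "as scene notes for an artist:\n" + chunk
        for chunk in chunks
    ]

# A five-hour story transcript runs tens of thousands of words;
# a 4,500-word sample splits into three chunks here.
prompts = prompts_from_chunks(chunk_transcript("word " * 4500))
```

Each resulting prompt could then be sent to a text or image model; the point is only that a finished podcast already contains the structured material such a pipeline needs.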

You know, one thing I’ll give Keith a lot of credit for: for the first two and a half years I was trying to podcast, he’s like, stop speaking like you write. You’re using your writer’s voice. And that really struck me: yes, he’s correct, I was using my writer’s voice. Now I’ve got to get my podcasting voice. Now I’ve got to get my visual voice, working with somebody like Beini. So for the next 50 years … I’m not gonna say the E word, the corporate one, no: I’m really jazzed about the environment for storytelling, because you’ve never had a technology revolution that did not have a parallel media revolution. You know, and that’s not hyperbole.

Keith Ancker
And more to your point, you were wondering, you know, if this is 1.4 megapixels, what does 50 megapixels look like? I think there are two key components to what this is going to evolve towards. One is an individual’s ability to train the AI on their own dataset and keep it private to them, so that if Beini wants to put her artwork into that thing to get elements of it in the output, she can do that without losing control of her imagery.

So when we have, you know, a sandbox that we can play in, where we know that our work isn’t going to go into someone else’s stuff, that’s going to be huge.

The other thing that is going to be really important, for creators at any rate, is being able to fine-tune the controls. When we put a prompt into ChatGPT or one of the image generators and we get that output back, having a dashboard where we can make the individual little tweaks that we want to see, whether it’s the shade on a particular piece of it or the shape of a particular object. When we can control the fine-tuning of what comes out the other side, that’s going to be huge.

And the reason that it’s super important for podcasters is that, like John said, we are giving it massive amounts of material. I mean, an episode is a prompt. And when we start to take the entire collected volumes of what we’ve done and feed those into generative AI to give us an individualized product based on our input, along with whatever else it’s coming up with for the building blocks, that’s fascinating, because that really puts the creative power back on the creator. We’re not giving up our individuality or artistic license to generative AI. We’re adding it to generative AI to create something that’s uniquely ours.

John Koetsier
The possibilities are literally stunning. Literally stunning. As I’m sitting here hearing what you guys are saying, thinking about it as well. I mean the very least of it is that somebody who is listening to a podcast or watching a podcast right now might inhabit the space of the podcast and actually be part of what’s going on inside of it and be as if they’re in the room with everybody there. But there’s so much more. Like you said, an episode is a prompt, you know, here’s the podcast episode. Here’s the game for it. There’s the room for it. There’s the world for it. There’s the whatever experience for it.

This is really, really amazing for storytelling podcasts especially, right, that are either fictional or historical or other things like that. Maybe less so for this kind of podcast where we’re talking about technology and other things like that, but that is really going to explode what is possible. Super, super exciting.

I want to thank you guys for taking this time. It is really, really cool. I look forward to seeing what you’re doing. I look forward to seeing where it goes and I’m excited, like you said, John, to see the next generation, the continually evolving generations of generative AI and what we’ll be able to do with them.

John du Pre Gauntt
Well, I’ll just leave you with the mantra I like to bang, which is storytellers have been nerds since the first cave dwellers mixed their colors. We’ve always been taking whatever the technology has, you know, the engineers are building it, God bless ’em. But we’re the ones that are taking it out in the parking lot and bashing and banging it around. And so, yeah, that’s what we’re intending to keep doing. So stay tuned.

John Koetsier
Wonderful. Thank you everybody.

John du Pre Gauntt
All right, thank you, John.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: