
Reinventing speakers: replacing 100 year old tech with MEMS chips


Can a new innovation upend a $50 billion industry that uses 100-year-old tech … and that is probably in your ears multiple times a day?

Pretty much all of us use earbuds, often wireless ones, and they all rely on old tech … literally hundred-year-old tech from the 1920s with roots in the 1800s. There’s a new option based on silicon: a microchip means of creating sound. And it could be better while also being cheaper. In this episode of TechFirst, we chat with Mike Housholder, a VP at xMEMS Labs.

Here’s our chat:

(Subscribe to my YouTube channel here)


Subscribe to the TechFirst podcast


MEMS and speakers: replacing 100-year-old tech

(Note: this is an AI-generated summary of this podcast. As such, it takes guests’ statements as undisputed fact. It should not be taken as having been written by me, John Koetsier.)

The world of audio has come a long way since the invention of the coil and magnet speaker in the 1800s. This century-old technology has served as the foundation for sound reproduction in nearly all our devices, from headphones to speakers. However, it’s time for a change. Enter xMEMS, a company at the forefront of innovation that is set to disrupt the $50 billion speaker industry with their revolutionary solid-state semiconductor alternative. In this blog post, we’ll explore the technology behind xMEMS, its benefits, its current market presence, and the company’s ambitious plans for the future.

The Problem with Legacy Speakers
For decades, we’ve relied on coil and magnet speakers for our audio needs. While they have undoubtedly improved over time, they still suffer from limitations. These mechanical devices are prone to wear and tear, lack uniformity and consistency across speakers, and can be easily damaged by external factors such as water or drops. Moreover, they struggle to provide the level of audio detail, separation, and precision that consumers desire in today’s fast-paced digital world.

Introducing xMEMS
xMEMS is here to change the game. They have developed a solid-state semiconductor alternative to traditional coil and magnet speakers. By leveraging the advantages of semiconductor technology, xMEMS can produce high-quality sound with precision and reliability. Unlike their mechanical counterparts, xMEMS speakers are more robust, resistant to damage, and have a longer lifespan. They also eliminate the need for magnets, reducing weight and electromagnetic interference in wireless devices.

The Benefits for Manufacturers and Consumers
The shift to xMEMS speakers offers several advantages for both manufacturers and consumers. From a manufacturing standpoint, solid-state components are easier to test, scale, and integrate into products. They offer greater uniformity and consistency in loudness and phase alignment, ensuring a well-balanced audio experience. Additionally, these speakers are more reliable, surviving drops, water, and dust better than their mechanical counterparts.

For consumers, xMEMS speakers deliver superior audio quality. With a faster mechanical response, they can reproduce complex and dynamic sounds with exceptional detail and separation. Imagine hearing every instrument in a song or distinguishing between background and foreground vocals with utmost clarity. Furthermore, xMEMS speakers boast a flat phase response, which means that audio is faithfully reproduced without the typical phase disturbances found in conventional speakers. This results in a more accurate sound representation, akin to that experienced at live music performances.
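To make the phase point concrete, here’s a tiny illustrative Python sketch (my own toy example, not anything from xMEMS): two signals contain exactly the same three tones at the same loudness, but one reproduces its 1 kHz component 180 degrees out of phase, mimicking the kind of disturbance described in the 500 Hz to 2 kHz region. The frequency content is identical; the summed waveform is not.

```python
import math

# Illustrative toy (not xMEMS code): two signals with identical frequency
# content, but one reproduces its 1 kHz component 180 degrees out of phase,
# mimicking the phase disturbance described for the 500 Hz - 2 kHz region.
fs = 48_000   # sample rate in Hz
N = 4_800     # 0.1 seconds of samples

def tone(freq_hz, sign=1.0):
    """A sine tone at freq_hz; sign=-1.0 flips its phase by 180 degrees."""
    return [sign * math.sin(2 * math.pi * freq_hz * n / fs) for n in range(N)]

# "Faithful" reproduction: all three components in their recorded phase
ideal = [a + b + c for a, b, c in zip(tone(300), tone(1000), tone(3000))]
# "Conventional driver": the in-band 1 kHz component comes out phase-flipped
flipped = [a + b + c for a, b, c in zip(tone(300), tone(1000, -1.0), tone(3000))]

# Each tone's loudness is unchanged, yet the summed waveforms differ:
# the difference is twice the flipped tone's amplitude
max_diff = max(abs(x - y) for x, y in zip(ideal, flipped))
print(round(max_diff, 3))  # prints 2.0
```

Because only phase changed, a spectrum analyzer would call the two signals identical even though their waveforms differ.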

Current Market Presence
xMEMS has already made waves in the audio industry, with their speakers featured in premium-grade in-ear monitors and hearing aids. However, their most significant market breakthrough is on the horizon. In November, the company will launch its first true wireless stereo (TWS) earbuds with xMEMS speakers in collaboration with Creative Technology. This consumer-friendly product will offer the benefits of xMEMS technology at an affordable price point, making it accessible to a wider audience.

The Path to Market Dominance
xMEMS has ambitious plans to dominate the speaker market. While their initial focus has been on personal audio devices, such as earbuds and headphones, their ultimate goal is to reinvent loudspeakers in every form. From smartphones and smart speakers to cars and home theater systems, xMEMS aims to revolutionize sound reproduction across the board. While the physics challenges are significant, the company is already working on pioneering a transduction mechanism called ultrasonic amplitude modulation. By moving outside the conventional auditory frequency spectrum, xMEMS hopes to bring full bandwidth audio to even the largest speaker systems.
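The ultrasonic approach is easier to picture with a small sketch. The following Python example is my own simplified illustration of classic amplitude modulation and envelope demodulation, not xMEMS’ actual Cypress implementation (which modulates ultrasonic air pulses acoustically rather than an electrical waveform): an inaudible 80 kHz carrier is amplitude-modulated by a 100 Hz tone, and the tone is then recovered from the carrier’s envelope.

```python
import math

# Toy sketch of amplitude modulation and envelope demodulation. This is my
# own illustration of the general AM principle, NOT xMEMS's actual Cypress
# scheme, which modulates ultrasonic air pulses rather than a waveform.
fs = 192_000                  # sample rate, high enough to represent ultrasound
carrier_hz = 80_000           # ultrasonic carrier, far above human hearing
audio_hz = 100                # a low "bass" tone we want to reproduce
N = fs // 10                  # 0.1 seconds of samples

audio = [math.sin(2 * math.pi * audio_hz * n / fs) for n in range(N)]

# Modulate: the ultrasonic carrier's amplitude follows (1 + audio)
am = [(1 + a) * math.sin(2 * math.pi * carrier_hz * n / fs)
      for n, a in enumerate(audio)]

# Demodulate: rectify, then a crude moving-average low-pass filter that
# removes the inaudible carrier and keeps the slow audio-rate envelope
win = 48
rect = [abs(x) for x in am]
recovered = [sum(rect[n - win:n]) / win for n in range(win, N)]

# The recovered envelope tracks (1 + audio) up to a constant scale factor;
# a simple Pearson correlation against the original tone confirms it
aud = audio[win:]
mr = sum(recovered) / len(recovered)
ma = sum(aud) / len(aud)
num = sum((r - mr) * (a - ma) for r, a in zip(recovered, aud))
den = math.sqrt(sum((r - mr) ** 2 for r in recovered) *
                sum((a - ma) ** 2 for a in aud))
corr = num / den
print(f"correlation: {corr:.3f}")  # close to 1.0: the tone survives the trip
```

The point of the sketch is only the principle: information carried entirely above the audible band can still be recovered as low-frequency variation.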

Patents and Future Competition
xMEMS understands the importance of protecting their technology and has secured over 110 patents to safeguard their innovations. While they expect competition to arise in this growing market, xMEMS’ strong intellectual property portfolio provides them with the necessary freedom of operation. Although they may not be the sole source for this technology, xMEMS is currently leading the market and anticipates significant growth as they expand into various segments.

With their solid-state semiconductor alternative, xMEMS is shaking up the speaker industry. By delivering high-quality sound with precision, reliability, and innovation, xMEMS speakers offer significant advantages over conventional coil and magnet speakers. The company’s current market presence, including collaborations with renowned brands, sets the stage for a full-scale disruption in the industry. As they continue to push boundaries with their ultrasonic modulation demodulation scheme, xMEMS is well on its way to revolutionizing sound reproduction in everyday devices. Get ready to experience audio like never before!

Full transcript: reinventing the $50 billion speaker market

Note: this is an AI-generated transcript.

John Koetsier: Can a new innovation upend a $50 billion industry that uses hundred-year-old tech, still uses hundred-year-old tech, and that is probably in your ears multiple times a day? Hello and welcome to TechFirst. My name is John Koetsier. Pretty much all of us use earbuds, maybe wireless, maybe wired. They all rely on pretty old tech. It’s hundred-year-old tech. Now there’s a new option that’s based on, of course, silicon. It’s a microchip. It’s a means of creating sound that is very different. It could be better while also being cheaper. Here to chat, and give a big product announcement at some point, is Mike Housholder, a VP at xMEMS.

Welcome, Mike. Thanks for having me, John. Hey, super pumped to have you. Let’s start at the beginning. One of the things that I saw in the pitch when you said, hey, let’s chat about this, was that headphones rely on hundred-year-old tech. What is this hundred-year-old tech?

Mike Housholder: Yeah, it’s the speaker that we’ve all been using for pretty much our entire lives.

So the coil and magnet speaker has been our only means of experiencing sound our entire lives. And it was invented way back in the 1800s, perfected in the 1920s, and just slowly improved ever since then. But it’s a mechanical structure. It’s got, first of all, a coil and magnet for actuation. You drive a current through that coil. It moves the magnet, which then pushes, through various layers of suspension, a paper or plastic diaphragm that moves air and generates sound. Fundamentally unchanged for a hundred years. Wow. So what we’re replacing it with is a solid-state semiconductor alternative.

So now, instead of a complex mechanical system, we can produce very sophisticated, higher-quality sound with a chip. So this is a speaker on a chip that can produce very sophisticated audio. So you’re getting really all of the benefits of semiconductor technology paired with sound generation.

John Koetsier: So let’s unpack that a little bit and talk about what those benefits are.

And it’s really interesting because, totally different industry, but about a year ago I interviewed somebody who’s making a solid-state, silicon-based fuse box for your home. And nobody thought it was possible. Nobody thought it could be done. And they did that. And you’re talking about something that is in a totally different space, but is somewhat related.

It’s solid state, and using silicon chips to recreate this. Why is that a good idea? Why is that important to do?

Mike Housholder: If you look at the consumer electronics industry, there’s just a natural gravitational pull to solid-state components. They’re scalable, more reliable, generally faster, better performing. And really, if we look at an average consumer electronics product today, be it a phone, a TV, or whatnot, there are very few non-solid-state components remaining in those devices. The one remaining one, for the most part, is the speaker. Coming after the last survivor, exactly. There are so many examples in the industry of a traditional mechanical device being replaced by a solid-state semiconductor variant. I can walk through a couple of examples. If we go to the opposite end of the audio spectrum, the microphone: most people don’t know the microphone is mostly semiconductor MEMS technology today. The mechanical microphones still exist, but the majority of the unit volumes are MEMS semiconductors. And we all have phones and PCs today: the spinning mechanical hard drive, for the most part, has been replaced by the solid-state drive.

Why? It was more reliable. It’s faster. You get your data faster than with a traditional spinning mechanical variant. You brought up the fuse box. You look in automotive now: everyone’s pushing towards solid-state batteries. Once a solid-state variant exists, it will, over time, take the majority of unit volumes for that function.

John Koetsier: Okay, so that’s a little bit about how it’s constructed, how it’s built, and this overarching flow of the world of technology towards solid state. Why is this a good thing for speakers? Why is this a good thing for headphones? Here are some headphones that you sent me that I’ve been testing and trying out.

Why is it a good thing to apply here? 

Mike Housholder: Yeah, good question. So I’ll answer that question from two angles: one is benefits to the manufacturer of the product, and then benefits to the consumer. All right. So, the manufacturer: they want a solid-state component because, again, of all the quality and reliability advantages versus the mechanical variant. Easier to test, easier to scale.

They’re more uniform than a traditional coil-based speaker. And what I mean by uniformity is that each speaker’s loudness level is equivalent. Their phase alignment is more consistent. So that’s what you want out of audio. When you’re trying to match a left and a right, you don’t want one louder or quieter than the other side.

You want them perfectly phase-aligned, so you don’t muddy any of the audio response. So a semiconductor is just going to be more uniform and consistent. It’s going to be more reliable. It’s going to survive a lot longer. It’s going to survive drops, and water and moisture and dust, better than a mechanical variant.

John Koetsier: it’s one of my original AirPods I dropped and the audio was never the same after that. 

Mike Housholder: Yep. Yep. So, more robust to drops. One of the things we get asked by a lot of our customers is: could it survive a washer-dryer cycle? Hey, who’s left their AirPods in their jeans and run them through a washing cycle?

So the answer is yes, you can run our speakers through a washing machine and dryer and the speaker will work. We can’t guarantee the rest of the electronics is going to work, but we can say confidently the speaker is going to survive. And we also remove the magnets. So we’re taking out weight. We’re taking out a source of electromagnetic interference with the wireless antennas in a wireless earbud. So those are really all the benefits to a manufacturer. They like solid state. But the consumer may care a little bit less about whether it’s easier to test or this or that. So really, what the consumer cares about is audio quality.

Are they going to get better music quality by putting a solid-state speaker in their ears versus a conventional speaker? There are three unique aspects to our sound signature that a conventional coil-based speaker doesn’t do and can’t do. So the first characteristic is really in the mechanical response of the speaker.

The speaker actually moves up and down. Its mechanical response is about 150 times faster than a legacy coil variant. So what does that mean to the consumer? A fast actuation means that you’re going to pump air and then you’re going to recover and be ready for that next audio stimulus a lot sooner. So as the music gets really dynamic, really complex, lots of instrumentals, multiple voices coming in, you really care about detail and separation.

You want to hear one instrument clearly delineated from the next instrument. You want to hear that background vocal as well as the front vocal, and you want to believe that they are all separate and unique. And the slower the speaker gets, speaking of legacy speakers, some of that detail starts to muddy together and you lose that precision, that sense of separation.

A faster speaker will present that detail in all its glory. So the speed gives you that detail and separation. So that’s one unique aspect.

John Koetsier: It’s almost like having a higher-resolution display, in other words.

Mike Housholder: Exactly. So it’s the equivalent of HD video. Now you’re really stepping into… there is HD audio. There’s high-res audio out there. Can the speaker truly resolve that content? Are you taking HD video and running it over a CRT monitor, or are you pushing that HD video over a high-pixel-density HD screen? Same analogy on the audio side: the content may be HD, but can the speaker truly resolve that content and present all of that additional detail?

That’s what you can do with this speaker. So the speed is number one, for the detail and separation. The second aspect of the sound signature is a really flat phase response. It’s really not well known, but conventional speakers have a phase disturbance, a phase shift, in the 500 Hz to 2 kHz region, which is a really sensitive region of the human ear.

But I think the human brain has adapted to it, because it’s the only way we’ve ever experienced sound. We just don’t hear that phase disturbance.

John Koetsier: What exactly is a phase disturbance in sound? 

Mike Housholder: The shift in phase will basically alter the original recording of the music.

But again, I think the human brain has been tuned to just ignore it. As you saw from that Brian Lucey video (he’s a mixing and mastering artist in the music industry), his argument is: these conventional speakers have a 180-degree phase shift at the resonant frequency, which is in that 500 Hz to 2 kHz range.

Typically, our phase is flat out to 10 kHz. There is no phase disturbance. There is no shift in phase. And for a professional ear like Brian’s, it was immediately apparent: what he is mixing and mastering is how he wants the consumer to hear the audio, and he typically doesn’t find a consumer audio product that renders it as cleanly as he wants you to hear it.

And what he heard from our speaker, with that pure phase response: he finally heard a speaker that presented the audio in the way he wanted it presented to the consumer.

John Koetsier: Interesting. So that would be more aligned with how you would hear live music, for instance. 

Mike Housholder: Correct. And while we’re on the phase discussion, this gets back to the uniformity and consistency of the semiconductor process. Chip to chip, our phase consistency is there.

So this really steps into spatial audio. As you’re moving audio from the left to the right, up and down, you want perfectly phase-matched speakers to render that spatial content accurately and not muddy the spatial response. So, having perfectly matched left and right speakers that are perfectly uniform, you’re getting more crisp and clear spatial audio.

So the phase is in two aspects: the phase shift and the phase consistency. The third characteristic of our sound signature is really in the materials. Most speakers today use a diaphragm material that’s paper or plastic; there are certainly a lot more exotic variants as you get into really expensive speakers.

But for the majority of consumer, reasonably priced speakers: paper or plastic diaphragm. What you want out of a speaker diaphragm is something that’s stiff, rigid, but lightweight, because you want the material of the diaphragm to be stiff enough that when you drive it really hard, it doesn’t go nonlinear.

You want that whole diaphragm to just stay consistent, to not muddy the audio. But what you’ll see in paper or plastic, because they’re very pliant materials: when you drive it really hard, part of the diaphragm goes up and part of the diaphragm goes down. This is a concept called speaker breakup, and it will muddy the audio. It’s most present in the mids and the highs, not so much in the lows. We, being a monolithic semiconductor speaker, our speaker diaphragm is silicon. And silicon is 95 times stiffer than paper or plastic. So you have a stiff and rigid material pushing up and down, generating sound. It does not go nonlinear, it does not muddy the sound. So you’re getting more pristine mids and highs that you wouldn’t get from a conventional speaker.

John Koetsier: Interesting. Amazing. Very cool. One of the things the audio engineer you shared talked about was pistonic pressure, which he said you typically get with many different headphone or earbud solutions. What is that? Why don’t you have it?

Mike Housholder: Sure.

So in a conventional speaker, that paper or plastic diaphragm is typically isolated, or sealed, front to back. The front of the speaker and the rear of the speaker are not exposed to each other. So in the front chamber of that speaker, the part that’s connected to your ear, you have that pistonic motion of the speaker diaphragm, that coil and magnet pushing up and down. It’s pushing air to generate sound, but it can also create some pressure buildup in the ear.

That’s why some earbuds typically have a vent in the front chamber to vent out some of that pistonic pressure. But even though they have that vent, it can still lead to fatigue over time. If you’re going to watch a movie on a plane for two hours, or be on your earbuds for multiple hours, it will lead to fatigue.

I think everyone’s felt it. What’s unique about our speakers: again, we’re dealing with the micron-level precision of a semiconductor process. We can actually do what, for conventional speakers, is a no: we can integrate micron-level slits in our speaker diaphragm, because again, we’ve got that micron-level precision. So as we’re creating that pistonic pressure, there are little vents that are opening up, and any pressure that’s building up leaks out the back.

So, what we’ve observed in our own testing: our testers are working with our speakers every day. We’ve got them in our ears. We’re just sensing that we can keep these earbuds in our ears longer without that sense of fatigue.

John Koetsier: So, sounds super cool. What’s the path to market dominance, and is it related to the announcement that you’re going to make?

Mike Housholder: We’re at the leading edge of that right now. Yeah, we’ve been in the market with product since 2020, and we completed our production qualification with our fab and our manufacturing partners in 2021–2022. So now those production speaker chips are in the process of being integrated into consumer products, and those consumer products are now reaching production. There are a few products out in the market today using our speakers, more on the niche market side. We’ve got some hi-fi audio earbuds, probably thousand-dollar products, whose makers believed enough in the sound quality of our speaker to say that this is different and it’s worthy of a $1,000 price point. Those are in the market now, so you can buy in-ear monitors for hi-fi audio with our speakers in them. There are some hearing assistance products on the market, hearing aids with our technology. But in November, our first TWS customer, true wireless stereo earbuds, will reach the market with our speakers. So we’re at that cusp of getting into high-volume, mainstream consumer earbuds, and that is happening in very short order.

John Koetsier: So if somebody wants to check it out, wants to try it, maybe they want to buy a thousand-dollar pair of headphones, or maybe they just want something that’s a hundred bucks, 50 bucks, whatever. Are there any brands that have this in the market right now that they can check out?

Mike Housholder: Yeah, absolutely. So on the hi-fi audio side, there are two premier-grade in-ear monitors: one from a U.S. company called Singularity Audio, a very high-end in-ear monitor at about a $1,500 price point, and another from an Asian in-ear monitor company called Ceramic that also has an in-ear monitor with our MEMS speaker in it. So those are premium products, high price points, really for the audiophile who invests in audio equipment. But our interest as a semiconductor company is high-volume business and reaching that mainstream consumer.

There’s a forthcoming product announcement for the middle of November from Creative Technology, a well-known brand in both consumer audio and PC and gaming audio. They are releasing a true wireless stereo earbud with our MEMS speakers. They will be the first MEMS-speaker TWS brand on the market, and that is coming to market at a very consumer-friendly price point.

John Koetsier: Very cool. Interesting. And have you patented this technology? Is it possible for others to do the same? Or if this takes off and everybody starts demanding this, and Apple wants it in their AirPods and everything else, do they have to get it from you?

Mike Housholder: Yeah. So we’ve been very aggressive in patenting all of our innovations. We have well over 110 patents granted.

We’re a five-year-old company and we already have 110-plus patents granted covering all aspects of our technology, our process, our manufacturing methods. So we’ve got good freedom of operation for ourselves and for our customers. With that said, there are different ways to design the product, and this is a large enough market that we would expect to have competition. So I wouldn’t say we’re going to be the only source in the world to get this; that’s not going to be true. But we’re certainly ahead of the market, and we’ve got sufficient protection to have freedom of operation.

John Koetsier: , you’re obviously looking at earbuds and headphones, , first, but there are speakers all over the world. There are speakers all over the place. And as we, , add intelligence to just about every product we have, , sometimes the ability to speak to it is handy. And certainly with Alexa devices and Hey Siri at risk of getting Siri upside here, or other things like that.

, there, there are speakers all over the world. , do you have grand visions of being. In everything. 

Mike Housholder: Yes. So the North Star of the company was not just to reinvent personal audio speakers. The North Star of the company is to reinvent loudspeakers. So it’s the speaker in your phone, the speaker in your watch, the speaker in your TV, your smart speaker, your car, your home entertainment system. We want to touch every nook and cranny of the speaker market. The easier lift for us, the fastest path to market, is getting close to the ear. So starting in personal audio was that easier lift, to get to market and get the revenue base going.

And to continue to fund R&D for the bigger lift, which is to produce full-bandwidth audio in free air and free space from a thin little semiconductor chip. So you can imagine the physics challenges that have to be overcome to achieve that. So we’re taking things in a logical, staged approach, opening up different corners of the market as the technology matures.

John Koetsier: As you do that… it looks interesting, and I know that’s not your initial focus, but if you look at a Sonos, or you look at some of the other big stereo brands, I’m assuming that you can scale up the physical size of your chip (not the chip per se, but the bit of silicon that’s doing the resonating) and make it work in a larger environment.

Mike Housholder: Yeah. So this is where the fundamentals of semiconductors, instead of being a benefit, as we’ve talked about, actually become more of an inhibitor. The logical approach would be: if I’ve got a six-inch mid-range or a six-inch woofer, okay, I’ll replace that with a six-inch semiconductor speaker.

And pretty soon you’ve consumed an entire semiconductor wafer, and, just financially and fiscally, that won’t make any sense. Semiconductors are good when they’re small. So that presents a physics problem, which is: how do you produce sophisticated sound in free air from really tiny packages?

And again, if you look at the side profile of a conventional, say, tower speaker in a home entertainment system, that speaker has some depth to it. Why does it have depth? It needs displacement to push air. Okay, so fundamentally, a semiconductor will always be at a disadvantage from a physics perspective.

You don’t make semiconductors thicker; they’re always going to be really thin. So you will never have the displacement of a big free-air speaker. This gets to a forthcoming product announcement that we’re going to be making in November, around our Cypress technology, with which we are reinventing the transduction mechanism.

John Koetsier: I would like to know what the transduction mechanism is. 

Mike Housholder: Conventional coil-based speakers, even the speakers from xMEMS that exist today, work fundamentally on a push-air transduction mechanism. You have an actuator that pushes a diaphragm, moves air, and generates sound. But that displacement is fundamental to your ability to generate sound at distance.

More distance, more displacement. We’re never going to have that displacement advantage. We need to find a different way to generate sound. And in a hundred years, there hasn’t been a product that generated sound in a different way and could produce equal or better sound. So we are moving to a methodology, or a principle, of ultrasonic amplitude modulation.

We are using all the advantages of MEMS semiconductors (speed, uniformity, consistency) to basically build an ultrasonic modulation-demodulation scheme: move outside of the audio frequency spectrum, operate in ultrasonic regions to modulate and demodulate, and then extract the audio. You basically modulate a series of air pulses with the original audio signal, then you demodulate the ultrasound and extract the audio.

John Koetsier: So it sounds very sci-fi.

Mike Housholder: It sounds sci-fi, and actually, sound from ultrasound has been in research mode since the 1960s. And no one has been able to achieve performance significant enough to commercialize this in a broad way, until now. So, what does this ultrasonic modulation-demodulation scheme give us that our current-generation speakers don’t? You’ve already listened to our first-generation speakers.

That’s full-bandwidth audio. You heard the lowest frequencies to the highest frequencies. But if you want to move into more free-air applications, or leaky applications, you need to displace more energy in the low frequencies. So by moving into ultrasonic modulation-demodulation, we are now putting ultrasonic air pulses into the audio envelope.

And low-frequency energy is wider; it’s a wider wavelength. Yes. So we can put more air pulses into that wider-wavelength low-frequency energy, and more air pulses generate more air pressure, which generates deeper bass.

John Koetsier: It sounds amazing. But I’m totally missing how the ultrasound, which I cannot hear because it’s above the frequency my ears can hear (especially my ears, I’m not 22 anymore), gets translated in my open-air environment, maybe my home theater, whatever, into something that I can hear.

Mike Housholder: Sure. So we’re getting an input audio signal from the sound source, whether it’s your phone, your laptop, or a receiver at home. We get the input audio signal, and our controller and driver will then basically implement an ultrasonic air-pulse scheme to map each ultrasonic air pulse to the frequency of the audio.

So we’re going to use that ultrasonic modulator to generate air pulses, but then we’re going to demodulate that ultrasound to extract the audio signal. So it’s just another way to generate air pressure to create sound, but outside of the audible spectrum.

John Koetsier: Are you almost virtualizing the speaker? Is that a way you could describe this? Is the actual sound production almost happening outside of the speaker assembly, the new speaker?

Mike Housholder: No, no. There’s definitely control and amplification in a separate controller chip, and there’s still our MEMS semiconductor: all the MEMS structures are generating the ultrasonic pulses, demodulating through valves, and letting the audio flow through. So there is still fundamentally a speaker that has moving parts in it, but it’s now a semiconductor and not a mechanical device.

John Koetsier: Fascinating. I look forward to seeing it. Thank you so much for this time, Mike.

Mike Housholder: Thank you, John.


TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Can immersive storytelling via VR change history? Maybe 1 mind at a time …

immersive VR

In this episode of TechFirst, I chat with Emmy award-winning XR director Michaela Ternasky-Holland about whether immersive storytelling via virtual reality can change the course of history.

Documentaries already have.

The Day After aired in November 1983 and is credited with influencing U.S. president Ronald Reagan’s pursuit and signing of a nuclear arms treaty with Soviet leader Mikhail Gorbachev.

Can it happen again?

Using her VR documentary project, On the Morning You Wake, as a case study, Michaela explains how the deeply immersive nature of VR can change the audience’s perception of a global threat: nuclear weapons. She compares the engagement and impact of VR experiences to traditional 2D experiences, highlighting how the narrative and the audience’s sense of agency play key roles in creating quality engagement. The discussion further explores the future of immersive storytelling, addressing its potential and challenges in the technology field.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast


Episode synopsis: the power of immersive storytelling

(Note: this is AI-generated)

Immersive storytelling is revolutionizing the way we consume narratives, blurring the lines between fiction and reality. In this blog post, we explore the fascinating world of immersive storytelling through an interview with Michaela Ternasky-Holland, an Emmy award-winning XR director who specializes in creating experiences in VR. We dive deep into her acclaimed project, “On the Morning You Wake to the End of the World,” a three-part VR documentary about the threat of nuclear weapons. Join us as we uncover the power of immersive storytelling and its potential impact on audiences.

The Project and its Inspiration
The interview begins with Michaela providing insights into the genesis of her project, explaining how it originated from Princeton University’s Science and Global Security program. The goal was to create a world-changing documentary that could shed light on the effects of nuclear weapons. Michaela reveals how immersive technology, especially virtual reality, was essential in making the audience feel a sense of intimacy with the subject matter. The project aimed to activate the audience and take them on an emotional journey rather than simply providing facts and information.

Overcoming Accessibility Challenges
Accessibility is a crucial factor in the success of any VR project. Michaela discusses the challenges of making VR experiences comfortable and approachable for users. She mentions the need for carefully managing the logistics of VR spaces, ensuring the audience’s comfort, and minimizing waiting times. Training docents or volunteers to properly communicate with users and create a welcoming atmosphere was also essential. Michaela emphasizes the importance of ensuring that users feel safe and comfortable not only with the technology but also with discussing intense topics like nuclear weapons.

The Impact of Immersive Storytelling
One of the core topics of discussion is the impact of immersive storytelling. Michaela shares fascinating insights into her research, revealing that users who experienced “On the Morning You Wake” through VR were more likely to engage with the topic, explore further information, and feel empowered to take action. Comparing the effectiveness of the VR experience with a 2D film version, Michaela explains how VR evoked more positive emotions, instilled hope, and made the audience feel like they could make a difference. The immersive nature of VR created a stronger emotional connection and increased engagement.

The Future of Immersive Storytelling
As the interview progresses, Michaela and John Koetsier, the interviewer, speculate on the future of immersive storytelling. They ponder the challenges of mass distribution and accessibility, acknowledging that VR technology is still evolving and perfecting its form. Michaela highlights the importance of integrating VR into people’s lives in a productive way, similar to how smartphones became integral to our daily routines. They discuss upcoming technologies such as smart glasses and immersive projections, which may shape the future of storytelling.

The Versatility of Immersive Storytelling
In the final section, Michaela and John explore the diverse possibilities of immersive storytelling. They agree that the ultimate expression of storytelling depends on the purpose of the project and the intended audience. Michaela draws parallels between interactive games and linear experiences, highlighting how each medium caters to different emotions and objectives. She emphasizes that there is no single perfect apex for storytelling, but rather a wide range of possibilities that can be tailored to create specific impacts.

Immersive storytelling is an ever-evolving field that holds immense potential for engaging audiences on a deeper level. Through our interview with Michaela Ternasky-Holland, we gained valuable insights into the power of immersive storytelling and its ability to evoke emotions, drive engagement, and effect change. As technology continues to advance and accessibility improves, we can look forward to a future where immersive storytelling becomes a mainstream medium, enriching our lives with new perspectives and unforgettable experiences.


Meet the man who made the first cell phone call ever in the wild

The first cell phone call all started with a stolen car.

In 1983, Chicago resident David Meilahn’s car was stolen. He bought a new one, a Mercedes-Benz 380 SL two-seater. But then he needed to replace his old radio-phone … and the sales rep told him there was something new: a cellular phone. He was one of the first few to be selected, then won a race to place the very first cell phone call by a customer, which ended up going from Soldier Field in Chicago, IL, to Alexander Graham Bell’s granddaughter in Germany.

This is his story, along with the story of Stuart Tartarone, the AT&T engineer who helped build that system and still works for the company to this day.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast


AI summary: the first cell phone call

The script titled “First cell call” is a conversation featuring the first commercial cell phone user, David Meilahn, and an engineer who helped develop the technology, Stuart Tartarone. The conversation is hosted by John Koetsier, and captures the historical journey of cellular technology from its birth in 1983 to modern times.

They discuss early mobile technology like the radio telephone, the development of cellular technology, and David’s experience as the first person to ever make a commercial cellular phone call.

AI transcript: chatting with the man who made the first cell phone call as a customer, and the engineer who made it happen

The year was 1983. President Ronald Reagan proposed the Star Wars initiative. Mario Bros. was just released in arcades. Rent was $330 a month. Ford Mustangs cost $6,500. A gallon of gas was 96 cents. And you could buy a brand-new Timex Sinclair color computer for just $179.99. It was fall in Chicago, October 13, and the setting was Soldier Field, the stadium the NFL’s Chicago Bears had played in since 1971. Fourteen cars were lined up for a very unusual race: the race to make the first commercial cell phone call ever in the history of the world.

One man won that race and he placed a call from Soldier Field in Chicago to the granddaughter of Alexander Graham Bell in Germany. 

This is his story. Along with the story of the engineer who helped make it all happen.

John Koetsier: What would it be like to be the very first person in the entire world to use a new technology that would end up utterly revolutionizing everything? Hello and welcome to TechFirst. My name is John Koetsier.

Today is a super special day for TechFirst.  We’re literally going to speak to the first person who ever made a cellular phone call.

The very first cell customer in the world. It was a car phone, of course, and he still has that phone, by the way. We’re also going to chat with an engineer who helped build and commercialize that very first cell phone service. Joining us are David Meilahn, the very first cellular customer, and Stuart Tartarone, who grew up taking phones apart and eventually built and launched AT&T’s cell phone service, a global first.

Welcome David. Welcome Stuart. Thank you. Thank you. Good to be here. Good to be here as well. Thank you guys so much.

I got to say, the average age of TechFirst guests just rose a little bit.

Stuart Tartarone: So I’m glad. We don’t want to talk about that, but I guess it must have.

John Koetsier: My friend, you were building cellular networks in the 1970s, so you’re not 25 anymore.

Stuart Tartarone: But I guess not. Maybe not.

John Koetsier: David, I want to start with you. How did this come about? What happened? Did you see an ad in a paper? Did somebody talk to you? How did you learn about this opportunity?

David Meilahn: It all started with the theft of a car. I had my car stolen. And for business, I had a radio telephone, a good old-fashioned radio telephone, which was very expensive to buy and very expensive to pay for minutes, and not the easiest thing in the world to use, but it was extremely efficient, all things considered.

So my car got stolen in 1983, and I bought a new car. I immediately wanted to get a phone because I really missed it. So I went in to purchase one and they said, we can do one of two things. You can do a radio phone again or you can get what’s called a cellular phone, which I had never heard the word before actually.

And that’s a brand new system that’s going to be coming up, and they’re hoping to put it online in the next three months. So this was the middle of the summer of ’83. So I made the decision that I’d rather be more on the cutting edge than on the back end of an old system. So I said, I will do that.

They said, we’ll install the equipment. It’ll sit in your car for three months and then we’ll turn the system on. And I said we’ll see if it happens in three months, knowing what normally happens. Unbelievably, within about a month or two, they called and said, How would you like to participate in the kickoff of the cellular system?

We happen to be the first place that this is going to be kicked off for the nation. And I jumped at it because it sounded like a lot of fun. They said, We’re going to have it at Soldier Field, which is perfect because I lived on a boat in Burnham Harbor in Chicago, which shares the same parking lot with Soldier Field.

Nice. I could literally walk to the event, so to speak. So anyway, on the day of the event, it just so happened it was my birthday, October 13th, so that gave me like a doubly blessed day. The result was that they ended up having a race with, I believe it was 14 cars, in order to kick off the first official cell phone call.

The race had the 14 cars lined up side by side. And they also had the technicians: each technician who actually installed the equipment in each person’s car was lined up to run a 50-yard dash. When they ran the 50-yard dash, they had to get the keys from the owner of the car and unlock the trunk.

And put in the final chip (I’m calling it a chip; I’m probably mislabeling it) that activated the system. And it was a cell phone. It wasn’t like a cell phone of today. It was a big box, just like a radio telephone, in the trunk of your car, and it powered what I kept calling a princess phone, a little phone, in the car.

So my technician lines up and he says, Dave, I’ve got some bad news and some good news. What’s the bad news? The bad news was, I’m going to be the last guy to the car. He was in his mid-30s, so he’s an old man for technicians; all the rest were young 20-year-olds. And he said, but I’m going to have the chip in first, I’ll be the first one to install it. And then he held up the chip and, I believe Stu’s going to correct me, probably, I believe it had about 20 prongs on it, each about three-quarters of an inch. He said they’re going to bend, and they’re going to make it impossible to get it in efficiently.

John Koetsier: Was this a SIM card? The very first SIM card?

Stuart Tartarone: It was called the number assignment module, but as David said …

John Koetsier: … a lot bigger. How big was this? Oh, that’s so big. Not huge, but not …

Stuart Tartarone: … like a SIM card today.

John Koetsier: Now we have micro SIMs and we have eSIMs, which … Back to you, David. Did he win the race or did you win the race?

David Meilahn: So anyway, as he said, he was the last guy to the car, and he was the first guy to get this plugged in. And they had to give the keys to the owner of the car; the owner had to unlock their car door, get in, and start their car. He gave me a great piece of advice.

Jeff told me: when you get in the car and you start the car, just sit there and look at the phone, because it’s going to light up like a Christmas tree. Once all the lights stop flashing, make your call, and don’t do it before, or you will trip the system up and it’ll have to reset itself. So I listened to him and did exactly what he said.

And our call made it to what was a head car that was bridged across the other 14 cars. That’s where the first phone call went, and from there it was forwarded to Alexander Graham Bell’s granddaughter, I believe it was, in Germany. Wow. So that was, technically, the official first call for a commercial cell phone.

John Koetsier: So David, I have to ask a question. You talked about having a radio phone. I have no idea what that means. I understand the concept: it was perhaps a phone system that went over radio waves, perhaps to some central switching station that then interfaced with the landline system. But what is a radio phone?

David Meilahn: I think Stu’s going to know more than me, but it was literally radio waves going to, I think, a central station. There were operators involved in it, and it’s so long ago that it’s hard to remember exactly, but they converted it basically to a landline system.

John Koetsier: Wow. Stuart, what is a radio phone?

Stuart Tartarone: Basically, long before cellular, probably dating back to the 1940s, there was mobile phone service.

And there was a transmitter in the trunk of people’s cars, and a big handset and device in the passenger compartment. And as David said, it operated almost like broadcast TV or radio. There was one big antenna in a metropolitan area that broadcast over the entire area. But the big deal about it…

is that there were only 10 or 12 channels. So think about a metropolitan area like Chicago: after 10 or 12 calls were made, the system was exhausted.

John Koetsier: Wow. Okay. So, David, if you tried to make a call on your radio telephone before you had the cellular phone, would it sometimes just fail because the channel was occupied, or would you talk over somebody?

David Meilahn: It probably had some of the characteristics of a party line, from the standpoint that it had limited use, but it wasn’t so limited that it was a frustration. You just lived with the way the system worked; you understood it, and it really worked fine.

John Koetsier: I want to stick with you, David.

We’re going to go to Stuart in a moment and talk about the technology, the process, building the project, coming up with it, all that stuff. David, did you have a sense at this point that you were doing something that was world-changing? That was revolutionary? That would literally culminate in what we have these days: these tiny little devices in our hands that don’t have to be in the trunk of a car. Did you have that sense?

David Meilahn: Not at all. It’s just amazing what has happened, because my sense back then was: here’s the newfangled phone system. Technology moves on with the speed of light, so we’ll see how long cell phones last. And one of my thoughts was, my gosh, they’re going to physically dot the United States with towers, and that’s how we’re going to talk to each other. That seemed a little unusual to me compared to satellite technology. I’m no expert, but I said it seems like you should use satellites. But I understood there’s a whole other layer of difficulty there.

So I just thought it was going to be another method of telephones, and in 10 years, we’ll be using something different.

John Koetsier: Really interesting. You took the first step on the moon, and it was just the next day.

It’s interesting: right now there are actually a lot of people who are trying to do, quote unquote, cell phone service via satellite. So maybe that is the next step, but it’s 50 years later. Stuart, let’s turn to you. You grew up taking phones apart. You’re the typical tinkerer.

How does this work? Take it apart. Your parents must have loved you especially for that, but you became an engineer and you started work on this project. What did you think about it when you first heard about it?

Stuart Tartarone: Going back to what you said, yeah, I did take phones apart, except we weren’t supposed to do that.

Because it was back in the old Bell System days, when everything was controlled by the Bell System and by the local telephone companies. I went to an engineering school in Brooklyn, close to where I grew up in Queens. It’s now part of NYU; it was called the Polytechnic Institute of Brooklyn.

And in those days, recruiters came to campus. And the Bell System would always show up with a recruiter from the local telephone company, New York Telephone; from Western Electric, which was our manufacturing wing; and from Bell Laboratories, later to become AT&T Bell Laboratories, which was our technology, our R&D organization.

And I was fully expecting to talk to someone from New York Telephone, because as a New Yorker, I didn’t expect I was going to move out to New Jersey. But lo and behold, I was only given the opportunity to speak to someone from Bell Labs. Afterwards, I was unhappy about that and spoke to my advisor, who said words to me that many people of my generation heard, and those words were: if you are given the privilege of working at Bell Labs, you have no choice but to accept.

So I said, whoa, I can listen to that. I went off, in those days, to what was called a plant interview and drove down from New York City to Holmdel, New Jersey. Not too far from Middletown, I got off the Garden State Parkway, and I felt like I was in farmland; this was the sticks to me in many ways.

It’s not too different today, if you do have occasion to come here. I made a turn and drove down the road, and there was this tower coming out of nowhere. I was later to find out it was modeled after a transistor: it was the water tower that supplied water to the Bell Labs complex at Holmdel.

And that complex was just this beautiful building designed by Eero Saarinen, who also did major architectural works like the TWA terminal and the Gateway Arch in St. Louis. He designed this building, and you walked into it and looked around, and it’s just amazing. And the way the interviews were set up at Bell Labs at the time, we got to talk to four different organizations.

And the first organization I got to talk to was one called Mobile Systems Engineering. The interviews in those days were a lot different than today. You weren’t put through tests; you weren’t put through having to design something on the spot. It was a conversation. To me, it was a lot more refreshing, as opposed to what we do today.

And I spoke to all the people there, and when I got done, the last person I spoke to, a gentleman by the name of Joel Engel, said to me: now you’re going to talk to three other organizations. And he left this subliminal message in my head: nowhere else will you ever get the opportunity to work on something brand new, something that doesn’t exist today.

And he held up a book, which I … I can’t find my copy of it … that was the technical report AT&T presented to the FCC on what they would do if they were given the opportunity to create a new cellular communication system. They said, this doesn’t exist, and if you join us, you’ll have the opportunity to work on this.


John Koetsier: So you joined. You had that privilege, you took that opportunity, you came on board. Did you start working on this project immediately out of college, or did it take a couple of years till you got embedded in it?

Stuart Tartarone: No. How it worked then is how it works today: you walk in the door, as I did at the end of July 1972, and you’re right into doing something.

And the opportunity I was given out of school was to work with AT&T Marketing on doing a market survey of the opportunity for cellular communications, so I could apply some of my statistical background in looking at data, working with this company. And they did. This was like 1973 by the time we got out there. It was a very professional survey: survey questions went out, there were focus groups in major markets, and I got to sit behind the glass and listen to customers talk about what they might do with it.

And the conclusion from that survey was that there was really no market for such a service. This was 1973. No market for such a service.

John Koetsier: Amazing. And not necessarily shocking or surprising because when you come out with something entirely new, entirely different, you have to invent the market, right?

You have to show what is possible, and you have to say: oh, interesting, I didn’t know I wanted that. In fact, I didn’t want that until I understood what it was. So you’re in a big organization, a massive organization that was basically preeminent in its day. They have this new idea, this new technology, but the market surveys aren’t promising.

They aren’t saying, wow, this is a multi billion dollar opportunity. Jump on it right now. How did it actually start? Did somebody take a big risk?

Stuart Tartarone: The simple answer is yes. But at the time, we were this very large company called the Bell System, a million employees strong, and there were pockets of revenue to invest.

That was the great thing, if you look back over the history of Bell Labs. If you think about it, a lot of the technologies of the digital and wireless age were invented by AT&T Bell Laboratories: the transistor, digitization, information theory, solar panels, charge-coupled devices, and cellular technology. All of those were invented by Bell Labs, and all of those were invented in New Jersey. Amazing. And there was this thought of what would need to be done, and investments were put in place. Think of the transistor.

The transistor is the basis of everything; there are millions of transistors in this device today. But the technology that was created by AT&T was given to the larger industry to build on and use. Think about the first big thing transistors were used for: transistor radios, which came out in the 1950s.

John Koetsier: Okay, so you’re working there, you’re bringing out this technology, you’re about to launch it. David is unaware at this point, but you’re starting to work with the network and the installers and everything like that. Talk about the technology. I came to using a mobile phone when we had 3G. And then LTE was a big deal, right? Which is essentially 4G, I believe. And now 5G is the thing. So give us a sense of where the technology fits on that scale.

Stuart Tartarone: Yeah, so I go back to what David and I talked about: the concept of one big transmitter. The big underlying concept of cellular was to be able to use low-power transmitters and take the frequency spectrum, which is a scarce commodity (it was scarce back in those days and it’s scarce today), and be able to reuse it many times. If you’re broadcasting at low power, you can reuse that spectrum. That’s the whole basis of cellular technology. And this concept was brought forth by two people, Doug Ring and W. Rae Young, back in the ’40s. They actually wrote a paper that talked about this. And Rae Young actually became my first department head at Bell Labs when I started, but this went back to the ’40s.

So you have this concept, and how do you implement it? Back in those days, you had cell sites (we call them base stations now; originally, cell sites) to provide the signal. You needed a smart central controller, and Bell Labs had invented electronic switching, which came out in the ’60s, to be able to have a stored-program-control switching machine.

And the other element of it was a device in people’s cars. But one of the big things that happened just as I joined, to talk about enabling technologies, is that the microcomputer was born at Intel. Without that, none of this would have been possible, because that was the enabling technology, the game changer that made it possible for us to develop and deploy the system.
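The frequency-reuse idea Stuart describes can be sketched as a back-of-envelope calculation. This is my own illustration, not from the interview, and the numbers are made up rather than historical AMPS parameters: with one big transmitter, the channel count caps the whole city; with many low-power cells, the same frequencies can be reused in every repetition of the cell pattern.

```python
# Back-of-envelope comparison: one big transmitter vs. cellular frequency reuse.
# All numbers below are illustrative, not historical AMPS parameters.

def broadcast_capacity(channels: int) -> int:
    """Old mobile telephone service: one high-power antenna covers the whole
    city, so each channel carries only one call at a time, city-wide."""
    return channels

def cellular_capacity(channels: int, reuse_factor: int, num_cells: int) -> int:
    """Cellular: the channel pool is split across a repeating pattern of
    `reuse_factor` cells; low power lets every repetition of the pattern
    reuse the same frequencies simultaneously."""
    channels_per_cell = channels // reuse_factor
    return channels_per_cell * num_cells

if __name__ == "__main__":
    # Roughly the 10-12 channel system Stuart mentions:
    print(broadcast_capacity(12))              # 12 simultaneous calls, total
    # A larger channel pool, 100 low-power cells, a 7-cell reuse pattern:
    print(cellular_capacity(336, 7, 100))      # 4800 simultaneous calls
```

The point of the sketch is that capacity scales with the number of cells, not the number of channels, which is why shrinking cells (and lowering power) kept scaling the system for decades.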

John Koetsier: Talk in a little more detail about what the innovation was in cellular networks. We heard David talk about how he had a radio telephone, and you said, hey, you could get like 10 or 12 conversations going at the same time, and then you were out of spectrum.

You’re out of bandwidth. What was the key innovation in cellular technology? You mentioned the low power. So it’s low power, it’s local: you’re talking to a local cell tower, so somebody else could maybe be on the same frequency, but five miles down the road, 10 miles down the road, they’re talking to a different tower. Was that the only innovation, or were there other innovations that allowed thousands, millions to use phones?

Stuart Tartarone: Related to that, think about it: it was a vehicular service at the beginning. And as cars drove around the city, you had to track where those cars were, and be able to recognize that they were driving out of the area they were in and needed to be served by another cell site.

This is the concept called handoff: handing off from one cell site to another. And the ability to do that, to track that, to receive the signal, to work out the algorithms by which you’re going to tell this device sitting in someone’s vehicle to switch from one channel to another channel, that was a huge innovation. And part of it has to do with the distributed nature of the system.

And one of the things I got to work on very early, as all these things were coming out, was the distribution of functions among different elements of the system, from the switch to the cell site controller to the mobile controller, and how to optimize that in the best way so it would support this growth.
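The handoff decision Stuart describes can be sketched roughly like this. This is a simplified illustration of the general idea, not the actual Bell Labs algorithm, and the signal values are invented; real systems layer on channel assignment, signaling, and much more:

```python
# Toy handoff decision: serve the mobile from the strongest cell site,
# but only switch when a neighbor beats the serving site by a margin
# (hysteresis), so a car at a cell boundary doesn't ping-pong between sites.

def choose_cell(serving: str, signal_dbm: dict[str, float], margin_db: float = 3.0) -> str:
    """Return the cell site that should serve the mobile, given measured
    received-signal strengths (dBm) at each candidate site."""
    best = max(signal_dbm, key=signal_dbm.get)  # strongest measured site
    if best != serving and signal_dbm[best] >= signal_dbm[serving] + margin_db:
        return best   # neighbor is clearly stronger: hand off
    return serving    # otherwise stay on the current cell site

if __name__ == "__main__":
    # A car driving away from site A toward site B:
    print(choose_cell("A", {"A": -95.0, "B": -88.0}))  # "B": hand off
    # Near the boundary, within the hysteresis margin:
    print(choose_cell("A", {"A": -90.0, "B": -89.0}))  # "A": stay put
```

The hysteresis margin is the interesting design choice: without it, noisy signal measurements at a boundary would trigger constant back-and-forth handoffs, exactly the kind of system-wide behavior the distributed controllers had to manage.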

John Koetsier: So landline phones were analog: the signal was transmitted as analog and recreated as sound in somebody else’s ear. Were the first cell phones digital? Did you send the voice digitally?

Stuart Tartarone: No. Again, let’s roll back to the ’70s. Just as you said, it was analog voice. And behind that was a whole layer of digital logic processing, with commands coming from the switch and from the cell site to direct the mobile what to do.

So you had the analog voice, and you had the control structure, which was all digital.

John Koetsier:  Is that control structure what opened the door for SMS for texting?

Stuart Tartarone: We’re talking many years later, so I’m not sure I would say that. Yeah, it was that basis. But by the time texting came around, I think we were into 3G, and there were lots of changes from 1G to 2G to 3G. That was all brand new.

John Koetsier: Would you have characterized the transmission speeds, the transmission technology, in those first days when David had his cellular car phone as 1G?

Stuart Tartarone: Yes, it was. It was 1G. But the quality of it, the voice quality, was exceptional, because we’re talking about someone’s vehicle with a high-power transmitter in the trunk. It was interesting. David can provide the feedback on this: it was as good as a landline.

David Meilahn: It was crystal clear. It was excellent.

John Koetsier: Wow. So it was better in some ways than what we have now. Certainly in the days of 3G, voice quality wasn’t amazing. Maybe 4G as well. Huh?

Stuart Tartarone: And we knew it at the time, because, think about it, this is low power, and you just could not get the same quality that you could with a high-power transmitter in someone’s car and a similar receiver.

John Koetsier: So David, you were in at the very beginning of a revolution. Did you keep upgrading? Did you stay on the cutting edge? And you still have that phone today, right? You still own that first phone?

David Meilahn: Yeah. Actually, there was an event, I think it’s about 10 years after 1983, after that went on … and my apologies.

John Koetsier: It’s all good. Everybody,

David Meilahn: What can you do? Let me … it’s a new phone system too, I gotta find how to …

John Koetsier: I think if you hit it with a hammer, then it will stop.

Stuart Tartarone: That’s a necessary device.

David Meilahn: Those darn landlines. Anyways. About 10 years after the cell phone started being used commercially in 1983, they went digital.

And they had an event to go digital, and they dragged me out for this next event. I actually consigned my phone as a donation to the Museum of Science and Industry. So I still have the car that the phone call was made in, but the equipment is now at the Museum of Science and Industry in Chicago.

John Koetsier: Amazing. Amazing. What was the car by the way?

David Meilahn: It was a 1983 Mercedes-Benz 380 SL. Nice fun little car to run around in now. Not as easy to get into as when I made the first call.

John Koetsier: Excellent. David, as you look back … you talked about how at the time you didn’t have the sense that this was revolutionary, that you were the first person to make a cellular phone call, that this would take over the planet, like literally. But as you look back, and as you use your mobile phone today, what does it mean to you?

David Meilahn: I think it’s amazing how it started, what the average person thought about it, and the different, I’ll call them milestones, for the user. Whereas it was an instrument basically only for business, or the wealthy, it has over the years progressed to bag phones, the brick phone, then a handheld cell phone, flip phones, and then all of a sudden they became smartphones.

And the actual phone call part of a telephone is not necessarily the most important piece. It’s that everybody’s glued to their smartphone. And it’s able to be bought by everybody in the world; it does not have to be only for business or the people who can afford it. Everybody can afford a cell phone.

And they use them like crazy.

Stuart Tartarone: Right.

John Koetsier: Amazing. Amazing. Stuart, maybe some closing remarks from you, because you’re still working at AT&T. Amazingly. I’m not going to ask your age, but you’re no spring chicken. This has been the work of your life in a lot of senses, and I’m sure you’ve done a million other things as well, but is this the biggest thing you’ve done in your career: launching this and being part of this?

Stuart Tartarone: Most definitely. So, talking about first cell phones, this was one of the very first cellular phones that existed. This was the control unit that went into these vehicles. I’ve had this with me all these years. And it really was … a lot of people don’t get this opportunity coming out of school: back to what I said earlier, to work on something brand new that didn’t exist, that people questioned the market for.

And here we are today with the proliferation that’s occurred, going from here to here. What a huge transition. And yes, I’ve gotten to work on lots of exciting things in my career, from there to personal computers to LANs, and today, as we virtualize our network, working on tools to improve how we develop software: platform engineering. I even got to work on one of the first internet banking applications. But there would be nothing like what I got to work on in my first 10 years with the company.

John Koetsier: Amazing. And what a privilege, as you were told by your advisor, your student advisor, way back when in the early 1970s.

And I have to echo that today. What a privilege to chat with you, David and what a privilege to chat with you, Stuart. I thank you for your time and thank you for sharing your story. It’s fascinating. It’s part of history and I really appreciate it.

Stuart Tartarone: Thank you so much. Good to see you.

David Meilahn: Thank you. Good to see you.


TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Can generative AI make rockets launch faster?

generative AI in the enterprise

Generative AI won’t be building Falcon 9s or new space shuttles just yet (wait a few years!). But can it help with all the work that goes into running an organization that builds the future?

According to Kendall Clark, CEO of Stardog, yes.

Generative AI that democratizes access to data, insight, and knowledge speeds up organizations, and that can help with launching spaceships, or anything else. For NASA, a generative AI solution is apparently helping the team do in days what used to take weeks.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast


AI summary: using generative AI to speed up the enterprise

The script is a conversation between John Koetsier and Kendall Clark, CEO of Stardog, during a technology podcast. The discussion revolves around the role of generative AI in speeding up complex processes within large organizations. Kendall Clark discusses how Stardog leverages generative AI for data management, differentiating their approach from other companies like Salesforce by focusing on on-premise and hybrid multi-cloud environments. He also explains their strategy to prevent hallucinations or errors in AI-generated responses. The conversation concludes with the significance of creativity in AI applications.

AI transcript: generative AI and institutional knowledge systems

John Koetsier: Can generative AI make rockets launch faster? Hello and welcome to TechFirst. My name is John Koetsier. I’ve done a ton of TechFirsts on generative AI. It’s getting pretty good. OpenAI just announced it can see, listen, and talk back. What about the enterprise? What about companies, big organizations?

Specifically, what about NASA? A generative AI solution is apparently helping NASA take what used to take weeks down to days. Thanks, of course, to generative AI.

To dig in, we’re joined by Stardog CEO Kendall Clark. Welcome, Kendall.

Kendall Clark: Hi, John. Thanks for having me.

John Koetsier: Great name, Stardog. That’s an awesome name for a company. My first question is: so, is NASA going to Mars next month, thanks to generative AI?

Kendall Clark: When NASA returns to the moon and goes to Mars for the first time is more a function of the U.S. Senate, to be honest, and those budgets and that sort of thing. It’s obviously an expensive endeavor.

But to answer your question, I think generative AI can help speed up a lot of complex things. In this first wave, and there are going to be a bunch of waves, most of the interest among the educated public, or the people who are paying attention, and it’s not everybody, we should remind ourselves, is largely because of its impact in what we call the B2C space: help me make an invitation for my children’s seventh birthday party, and make it have dragons and fairies and something else, and out pop these amazing photos.

We’ve all played with that, and I love it. I’m addicted to it. Frankly, I can’t make a deck now without some generative AI images, so much so that my employees tease me about it. Stardog is a great brand partially because it lends itself so well to that; I have a folder called “astronaut dogs” on my computer that’s got God knows how many variations. I’m obsessed.

In the enterprise, I think the first impact from generative AI will be in what we can call question answering: the movement from query writing to question answering. I would say that something like, and I’ll just make this up, 60 percent of the value, more than half of the value, that enterprises get from information technology at all is in the area of answering questions of data.

And the dominant way that’s happened to date is: there is some data in a database somewhere, a lot of those, in fact, and someone who’s either very smart, or who spent a lot of money on a product like Tableau, a BI tool, manipulates an interface in some way. That results in a simple to very complex query, often a SQL query. That SQL query goes to a system, gets executed, an answer comes back, and value is achieved because the answer says false instead of true, or it says 17,000 instead of minus 17,000, or it says John instead of Kendall, or whatever.
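The query-writing loop Clark describes, a business question translated by hand into SQL, executed by a system, answer returned, can be sketched in a few lines. This is a toy illustration (the table, names, and numbers are invented, not anything from Stardog):

```python
import sqlite3

# Toy "enterprise" database standing in for the systems described above.
# Schema and data are invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("John", 17000), ("Kendall", -17000)])

# Traditional query writing: a human (or a BI tool) translates the
# business question "whose balance is positive?" into SQL by hand,
# the system executes it, and an answer comes back.
sql = "SELECT owner FROM accounts WHERE balance > 0"
answer = [row[0] for row in conn.execute(sql)]
print(answer)  # ['John']
```

The value, as Clark says, is entirely in that returned answer; the friction is that every new question requires a new hand-written query.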

John Koetsier: Or, more likely, you discover: oh, shoot, that was actually not the right question to ask the data, I should have asked this question. And then you send that question back to the data analyst, the data analyst goes, “why did this idiot give me the wrong question in the first place?”, runs that query, and that takes another chunk of time. Then you go back and forth, and finally, five or six levels down, you have what you think might actually be the real answer you want.

Kendall Clark: You’re a hundred percent on it. And what that means is there is organizational friction. Now, people were employed because of this friction, to make it work anyway, but there’s space between my intent of wanting to ask a question of the data, through the process that translates it into a query, either by a piece of software, by some people, or both, typically, and then it gets executed, and then it comes back to me, and, oh, it wasn’t quite right. For the folks in your audience who are IT-minded, or history-of-IT-minded, this is a little bit like what it was like to write code in the sixties.

Or even in the seventies: you’ve got two chances per day to make changes to your code base, because of the compiler and the toolchain and the nature of the languages and the slowness of the computers back then. You start your morning run; it took four hours to compile. You went and found something else to do. You came back at noon: oh shoot, I forgot a semicolon or something like that, right? Run it again, the day is shot. But now we use these super-fast computers, high-level languages, dynamic compilers, stuff like Python. I can ask a million questions.

So the iterative cycle has sped up tremendously for programmers. For generative AI and the enterprise, the first big impact, and my prediction is not even a prediction, it’s what I think we’re already seeing, is the movement from query writing to question answering. Query writing is this process we’ve been talking about: it works, but it’s slow, it’s error-prone, and it’s got a lot of extra people in it.

There are a lot of people between the data and my intent. That’s all going to get compressed. In question answering, the LLM takes the place of all that space, right? If you’re asking, what is the test readiness of this subcomponent of the crewed space capsule for the return to the moon, and give me the full lineage and traceability of all of it: that’s a complex question that’s literally rocket science, right? Now, if not just technical people but something like everyone, knowledge workers, can interact with the data directly from their experiential point of view, although there’s obviously a big stack of stuff in between, that’s going to compress all those cycles, much like the rise of dynamic languages did for programmer productivity.

We’re going to see that for what we used to call general office workers, but now business analysts, knowledge workers, people whose job depends on interrogating the data. With normal technology, “interrogating” is a metaphor, a fancy word. In the LLM era, the question-answering era, interrogating is not a metaphor anymore. We’ve concretized the metaphor; we’ve made it literal. You’re literally going: what about this? What about this? What about this? Firing questions at the data by typing them out, and, as you say, OpenAI is extending this to a verbal thing. That’s fun, but it’s not going to really make a difference to the answers you get back.

But yeah, that’s the first thing we’re going to see, and I think that’s the original nub of your question. In that sense, it can help everything that NASA does, everything our customers do, everyone who’s engaging with this technology. It can make their jobs go faster and better because, as I like to say, it effectively democratizes access to data.

John Koetsier: It’s interesting, because when I got the pitch for you to jump on the podcast, what I immediately thought of was enterprise knowledge management, and there have been huge products and projects that enterprises have run for, I want to say, decades, and I don’t even think that’s an exaggeration. What do we know? How do we categorize what we know? How do we put it in a place where people can access it? How do we search it? How do we surface it? And as we were pre-chatting before we started recording, you were saying, hey, that’s not our space. We’re not about that sort of static data, that static knowledge and documents and stuff like that. We’re about data. Talk about the evolution of knowledge management, and how you fit into it or contrast with it.

Kendall Clark: Yeah, it’s a fair question. I feel like lots of people are cynical about knowledge management; that’s the average view, and as we were talking before, you signaled some of that yourself. Let’s start with the fair thing to say: it has made sense at all points since, let’s say, 1970 for big companies to make some investments in what we should really call library science, because that’s what it is. I played that for a joke a little bit, but I mean it seriously: librarians serve a really super useful function in our society by organizing knowledge, right?

And I know no one in your audience who’s under, what would you say, 40? Certainly no one under 30 will know what these next words mean. But remember, you used to be able to go to a library. It was a place, right, in the world, not the mall, and it had knowledge in it, primarily in the form of books, but other forms as well. And there were these people there who basically organized that knowledge, and they sat there and waited all day for you to come in and ask a question, and they loved to help you answer that question. That was a way our society worked, right? It was this kind of socialist vision, frankly, strictly speaking: that knowledge was a common asset we had created as a civilization.

We should all have equal access to it. You even used to be able to call them on the telephone and say, I’m writing a story about the winter migration of carrier pigeons in Finland, I don’t know, something, whatever. And they’d go, okay, there’s a book about that, come get it, and they’d be very excited.

Then you would read it, and you would know stuff. The web ruined that, or destroyed it, or changed it, altered it forever. But in some ways what Google has done with the web is find a way to make the machines do a lot of that work. And with respect to documents, the web is just a collection of documents.

After all, we mostly self-serve. We go to the search bar; everybody knows how to do this. That has replaced calling a librarian, but it’s the same: we’re satisfying the same human need. So with respect to knowledge management, doing that for large bodies of information inside a big enterprise, my hat’s off.

It’s on the side of the angels, right? I’m not going to be cynical about that. What I think we can be cynical about is that there was always this obvious, obvious to me, collision course between what we typically call data management, enterprise data management, ETL, databases, data warehouses, and knowledge management.

Those needed to, as I like to say, smash together and mutate into some new thing, and it’s obviously been the case for the last, say, three or four years that that’s happened. LLMs in particular make it, I think, undeniable: this question-answering capability we were talking about previously, and then you can extend question answering to all the other traditional jobs to be done in data management: data modeling, data mapping, data quality, discovery, metadata management, inference rules.

And then the traditional realm of data science. Machine learning has eaten all those things in a way that’s now really accessible to everyone, not just to Google. That’s going to forever change the practice of data management, just like the web forever changed the practice of library science and knowledge organization. That’s my non-cynical take on what’s happening.

John Koetsier: It’s really interesting, actually. If you could somehow study what percentage of human knowledge resided in, let’s say, dead trees, and then how that moved to documents, to electrons on hard drives, and how a greater and greater percentage of our operational knowledge is transitioning to more dynamic forms: knowledge in databases that are measuring ongoing processes, real-time in a lot of senses. It’s an interesting transition, what percentage of the world’s knowledge is in different places. Certainly the percentage in live databases, databases that are growing, that are measuring live activity you want to query because you want to know its status, is growing, and being able to access that easily is really impressive.

Kendall Clark: Okay, so this is a super interesting question. You didn’t mention, I should say, a third important source, which is the knowledge that only resides in and between people. Knowledge that, for a variety of reasons, people didn’t need to write down, haven’t had time to write down, or that’s just too fluid and doesn’t fit in a database. You don’t really put stuff into a database until it has a particular kind of ossification of form, by which I mean: traditionally what we mean by a database is a relational database, a particular data model.

It’s not the most agile, flexible thing. In fact, it’s rigid. Relational databases were typically intended for basically accounting data, and accounting data, whatever else it is, is not dynamic and fluid and creative. The values may change, the status of an account changes, but the rules of structure, the GAAP rules, are pretty fixed. It goes back to, what, 16th-century Florence, double-entry accounting. A lot of that stuff is really old and well understood. Then you jump ahead to somebody like NASA, literally trying to do rocket science, get humans back and forth across the solar system, and they’re learning new things every day.

They’re right on the border between knowledge and ignorance; on the other side of what we know is the black, scary nothingness of ignorance. And if you’re trying to peck away at that, push that border out a little bit every day, you may need different techniques for data management.

Then you add to that the fact that, while the big historical trend is to go from books to electronic form, all of the forecasted growth in enterprise data over the next 10 years is not in what we call structured data, databases. It’s in semi-structured and unstructured data.

So, like, this conversation 20 years ago was two guys talking. This conversation five years ago was a thing you could watch on YouTube. This conversation now, or any time in the future: I push a button at the end, you push a button, out pops a transcript. We stick that into some kind of knowledge platform, and all the entities, everything we mentioned, I mentioned migratory patterns of birds and Finland, we mentioned libraries, you mentioned those accounting rules, those all pop up as nodes in a graph, the knowledge graph, right? And connections between them. Okay, this is a conversation of a different kind, but if we’re doing this for work, this might be work product, right? So there’s that transition about where the knowledge is, what they call in academia the sociology of knowledge, the production of knowledge: thinking about knowledge getting produced like thinking about cars getting produced.

It’s an industry, and there are processes and inputs and outputs, and you can measure it. There’s been this whole big field in academia since probably the eighties studying the output of knowledge, what we’re talking about. When knowledge management and data management collide and fuse into a new thing, you take a lot of those techniques, a lot of the new algorithmic insights, and help big companies manage the data they produce better.

I’ll stop with this, I don’t want to give you a filibuster here. It’s interesting to think about companies’ competitive landscapes vis-à-vis one another. Take two big global pharmaceutical companies: maybe the most differentiated assets they have are their data sets. You could maybe swap all the people, and the people at one pharma could do the jobs at the other, with some winners and losers at the margins. But the easiest way to destroy a company, as a thought experiment, is to take all their data and swap it with their nearest competitor’s. You come to work on Monday, and let’s say all of GSK’s data belongs to Novo Nordisk and vice versa. You haven’t destroyed anything; every byte is preserved, right? But you just swapped them, and what happens? They’re destroyed. So I think it’s difficult to overemphasize or exaggerate the importance of managing data and managing knowledge. And yeah, generative AI, given that it produces text, and all of this knowledge and data management ultimately more or less ends up in text, let’s say images are a form of text, close enough, the applicability of these techniques to this area is pretty endless.
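The knowledge-graph idea Clark describes, entities mentioned in a transcript popping up as nodes with connections between them, might look like this in miniature. The entities and relation names are taken loosely from this conversation, purely for illustration; this is not Stardog’s data model:

```python
# Minimal in-memory knowledge graph: entities mentioned in a transcript
# become nodes, and typed edges record connections between them.
from collections import defaultdict

graph: defaultdict[str, list[tuple[str, str]]] = defaultdict(list)

def add_edge(subject: str, relation: str, obj: str) -> None:
    # Store a (relation, object) edge leaving the subject node.
    graph[subject].append((relation, obj))

# Entities "popping up as nodes" from this very conversation:
add_edge("this transcript", "mentions", "carrier pigeons")
add_edge("this transcript", "mentions", "libraries")
add_edge("this transcript", "mentions", "double-entry accounting")
add_edge("carrier pigeons", "migrate_in", "Finland")

def neighbors(node: str, relation: str) -> list[str]:
    # Follow edges of one type out of a node.
    return [obj for rel, obj in graph[node] if rel == relation]

print(neighbors("this transcript", "mentions"))
# ['carrier pigeons', 'libraries', 'double-entry accounting']
```

Real knowledge-graph platforms use richer representations (RDF triples, SPARQL, inference rules), but the core structure is this: nodes, typed edges, and traversal queries over them.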

John Koetsier: Yeah, it’s an interesting space. I just came back from Salesforce’s big Dreamforce conference in San Francisco, and they’ve added a ton of generative AI to Tableau, which you already mentioned, and to other products as well. Their vision also is that you can query your data in natural language, anyone can do it, everyone is a data scientist, all that stuff. That’s super powerful. Of course, if you’re going to buy Salesforce, I’m pretty sure you’re in for significant charges for each user, and significant challenges there. But it is a compelling vision: all the data a company produces is at your fingertips, data you have control over, access to, permissions for, and you can query it and know what’s going on. You’re a sales rep, a sales manager, a product manager, and you can instantly know all this stuff. How does your vision differ from that?

Kendall Clark: Well, at a high enough level, it doesn’t; that’s exactly what we want to do too. But I think there are differences that matter. First off, Stardog is really focused on financial services, pharma, and manufacturers of a certain size. And Salesforce is an interesting example, because they did not start off as an enterprise data management company. They became one because strategically they decided to move in that direction: they make a lot of money, they have, frankly, excess capital they need to deploy, and they could have done many things, but moving into data management makes sense because they do control a strategically critical corporate data asset in the CRM. That gives you some leverage. The acquisition of MuleSoft, six years ago or whatever, was a big signal: hey, we’re going to be a data management company.

But I think probably the biggest differentiation between our vision and theirs is that Salesforce is really a cloud company, and they’re best at managing and connecting data that exists in the cloud. Companies still have a lot of data on-prem, not in the cloud, and our focus has always been on the data that either hasn’t gone to the cloud or will never go to the cloud.

So Stardog is a cloud platform, but it can also operate on-prem. It’s a Kubernetes platform, which, for the technical folks in your audience, just means Kubernetes basically replaced the Java virtual machine as the dominant enterprise delivery mechanism. Our customers operate the platform both on-prem and in any cloud environment. That means Stardog can be adjacent to data no matter where it is, not just the part that’s in the cloud. Even if, in the next 10 years, let’s say 80 percent of all corporate data resides in the cloud, 20 percent of all enterprise data is still a very large amount of data.

And it needs to be connected. What we’re really focused on is connecting data and then making it accessible with this LLM technology we’ve been discussing, in what everyone calls the hybrid multi-cloud: the part of the data that’s on-prem, that hasn’t moved to the cloud yet.

Or, again, my favorite statistic: 85 percent of all businesses, irrespective of size, have data assets in more than one cloud. For most businesses, that means they have Salesforce and HubSpot, right? Which is fine; those are different solutions, different clouds, but that problem is going to get solved for SMBs and small businesses by those vendors. It’s also true of big businesses: our big banking customers have data everywhere, in every conceivable location and format.

John Koetsier: It’s not comforting that the financial industry has this data everywhere.

Kendall Clark: That doesn’t mean they’re not controlling it; let me say what I mean by everywhere. Most globally significant banks are unlike, say, Facebook in one important regard: Facebook is like a teenager of a business, and globally significant banks are like grandparents. They’re on average, what, 75, 100, 150 years old. They’ve existed longer than computers have existed, which means if you take a cross-sectional slice of a big bank, you’re looking at the archaeology of the last 70 years of IT. They started with mainframes, or what was even before mainframes, boxes of punch cards or whatever. They’ve got one of everything, and there’s legacy all over the place. They have data everywhere in that sense. That may also not comfort you, which is fair. But it’s a tough problem to solve.

You’ve got a system that’s running. It works, it meets requirements, it’s just old. You meet some IT people whose smart view is: don’t mess with that, leave it alone, it’s running, why mess with it? And somebody else, equally smart, says no, we need to modernize that. Nobody’s right or wrong there. It’s just a hard problem. Not to come on your show and make a brief for banks, but you get my point: they’re in a tough spot when the organization’s 150 years old.

John Koetsier: They are in a tough spot. And that’s why we see the rise of neobanks. But we’re straying way far afield, so we’re going to pull it back here.

So you’re building your solutions so that organizations can query their data. That’s great. NASA is using it; others are using it. How do you solve hallucinations? That’s obviously a challenge with generative AI, and it’s a problem you cannot have in your scenario. It’s a problem I’m okay with if I’m talking to OpenAI: does it pass the sniff test? I can double-check it. Google’s Bard just added some double-checking of what it says as well. So how do you solve hallucinations?

Kendall Clark: Yeah, look, there’s a cheating answer to this question, which is what we’re doing. And then there’s the hard research question of making the LLMs stop hallucinating.

I won’t address that; it’s a research question. It will get solved, I suspect, to some appreciable level of quality, precision, and recall. The first thing to say is that LLMs are not databases, and they should not be treated like databases. The way we solve it, which is somewhat cheating, is that we don’t use LLMs in Stardog as a source of data.

We use them as a mechanism to discern human intent, which is a fancy way of saying: what is the person talking about in their natural language? Whatever that language is. One of my co-founders, my CTO, was born in Istanbul, so he speaks Turkish and English, and at some point I asked him, as a joke, how’s the LLM working in Turkish? I knew they worked for many natural languages, but not necessarily all of them. He showed me a demo this summer, and it was just straight-up Turkish, and it just worked. It was amazing. We use the LLM as a way to figure out what the person is talking about.

We translate that into a query, or a search, or a hybrid query-search, or a data modeling piece, or a data mapping piece, or a rule, something that then gets executed by our platform. And that cuts out the chance for hallucination. What it means is that sometimes the LLM will get the human-intent determination wrong. But then that just means a wrong query; we’re back in the case you described earlier, where someone expressed a business question, some other mechanism translated it into a query, and didn’t get it right. Frankly, relative to the status quo, what happens is … you just redo it.
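The pattern Clark describes, using the LLM only to discern intent and translate it into a query that the platform executes, can be sketched roughly like this. The `llm_to_sql` stub and the schema are hypothetical stand-ins for a real LLM call and a real data store, not Stardog’s actual API:

```python
import sqlite3

# Toy data store: answers can only come from here, never from the model.
# Schema and data are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE components (name TEXT, test_ready INTEGER)")
db.executemany("INSERT INTO components VALUES (?, ?)",
               [("heat shield", 1), ("life support", 0)])

def llm_to_sql(question: str) -> str:
    """Stand-in for an LLM call that discerns intent and emits a query.
    A real system would prompt a model with the schema and the question."""
    if "ready" in question.lower():
        return "SELECT name FROM components WHERE test_ready = 1"
    raise ValueError("could not discern intent")

def answer(question: str) -> list[str]:
    # The LLM never supplies facts; it only writes the query. If it
    # misreads intent, the result is a wrong query to redo, not a
    # hallucinated answer.
    return [row[0] for row in db.execute(llm_to_sql(question))]

print(answer("Which components are test ready?"))  # ['heat shield']
```

The design point is the separation: the model’s output is a query, which is checkable and re-runnable, while every fact in the answer comes from the database.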

John Koetsier: And I’ve been in that scenario quite frequently, and usually it’s not the data analyst who got it wrong. Usually it’s me who asked the wrong question.

Kendall Clark: Not you specifically, but yes, almost always. Frankly, and I tell the team this all the time, an LLM is not magic. It cannot determine intent when no crisp intent exists yet, but that’s okay. We often find our way by asking a, frankly, not very good question, then asking a slightly better question, and then iteratively improving, and then we discover what our intent was all along. More likely, we create the intent in this iterative process and then retroactively attribute it to ourselves; it’s a psychological thing we do, and that’s fine. “Oh, what I meant all along was this.” Probably not, but it’s what you mean now, and here’s the answer. Fine. Nobody’s throwing rocks at that; it’s just normal human stuff. So in our approach, we don’t ask the LLM any questions where its hallucinations can bother us, right?

And in the near term, that’s the best solution. It’s use-case specific, context relative. That’s what you want to do in a regulated industry where the questions really matter. But if I want another cool picture of an astronaut dog in Midjourney or something, I want that slightly random, quote-unquote wrong component, because that’s really the source of creativity.

John Koetsier: Absolutely. And creativity is a wonderful thing, just not always when you’re querying data from your own company.

Kendall Clark: That’s exactly right.

John Koetsier: Kendall, this was interesting. It went places I didn’t expect it would. Thank you for taking the time.

Kendall Clark: Thanks, John. I appreciate it.


DEI in AI: Is diversity, equity, and inclusion a solved problem in AI?

dei in AI

Is diversity, equity, and inclusion (DEI)  in AI a solved problem?

I’ve written a lot of stories lately about AI. AI is critical to our future of automation … robots … self-driving cars … drones … and … everything: smart homes, smart factories, safety & security, environmental protection and restoration. A few years ago we heard constantly how various AI models weren’t trained on diverse populations of people, and how that created inherent bias in who they recognized, who they thought should get a loan, or who might be dangerous.

In other words, the biases in the people who create tech were manifesting in our tech.

Is that solved? Is that over?

To dive in, we’re joined by an award-winning couple: Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC and featuring celebs like Neon Deion Sanders.

Enjoy our chat with full video here, and don’t forget to subscribe to my YouTube channel …

Or, listen on the podcast

DEI in AI: solved problem?

Not a subscriber yet?

Find a platform you like and let’s get connected.

Transcript: fixing biases in AI

Note: this is AI-generated and likely to contain errors.

John Koetsier: Is equity, inclusion, and diversity in AI a solved problem? And are there new challenges with generative AI? Hello and welcome to TechFirst. My name is John Koetsier. I’ve written a ton of stories lately about AI because AI is critical to the future of automation. It’s critical to the future: robots, self-driving cars, drones, and everything else.

John Koetsier: Smart homes, smart factories. Safety, security, environmental protection, restoration. A few years ago, we were hearing a lot about how various AI models weren’t trained on diverse populations. It created inherent bias in who they recognized, who maybe a model thought should get a loan, who might be dangerous.


In other words, the biases in the people who created the tech are manifested in our tech, what a shock. 


Is that solved? Is that over? To dive in, we’re joined by an award-winning couple, Stacey Wade and Dr. Dawn Wade. They run NIMBUS. It’s a creative agency with clients like KFC, featuring celebs like Neon Deion Sanders.


If you’re not into football because you’re into tech, he’s a Hall of Fame football player. And they won multiple awards over the past few years for their agency. They’re in a unique position to see the impact of AI on everyone, not just white ass English speaking tech workers. Welcome. How are you guys?


Hey, so pumped to have you guys, thank you for taking the time. Let me just start here. Why did you guys start digging into AI?


Dawn Wade: From a multifaceted perspective, um, I’m a researcher by nature. So when new things come out, I always look for gaps within that. And it was just a personal interest to me. But then as you see how our country is moving, it always makes me wary when things aren’t created in a.


And I’m going to use a wholesome way, but created by an individual, because having a computer engineering degree, I’ve recognized you’re only as good as the data, or you’re only as good as the input that goes into a system. So, it made me question, how is this being developed and how does it know enough about me to represent someone like me?


So that was the first facet as to why it was very interesting to me. But then the second one is we work in advertising. So it relies on a creative nature, and creative has to be very nuanced to people, their habits, what they like, what they don’t like, how they talk, where they’re from. And that’s very hard to capture on a day-to-day basis in marketing.


So we were very interested in how does that work in something where we’re trying to resonate with a real person? How can a computer resonate emotionally? To convince you to buy something or to do something. So, two very different ways in, but it’s continued to be interesting to us.


John Koetsier: What have you found when you’ve looked into generative AI and how it’s created people and imagery in your space in, in advertising, in, in marketing?


Has it been, um, compelling? Has it been interesting? Have you seen challenges with it? I think


Stacey Wade: definitely mixed challenges in it. I mean, um. You know, it’s similar. I think for me, when I, when I think about AI, uh, it’s so new. I feel like we’re, we’re just like, barely creeping up the hill with this thing. Uh, but what stands out to me is something that, uh, was happening probably in ’21, which was.


And even when we started the agency, which was representation in the industry, what we’re noticing is that the representation of those images that are being, you know, the output of those images are very similar. There’s confluence to what we experience in agency life, which is, are we represented in a way that is authentic to who we are?


And is the voice and tone also authentic? And that’s the one thing that there seemed to be some juxtaposition there, which was no. And that was something that was a little bit scary for me personally as an artist. I’m not, you know, uh, a computer engineer, I’m not Dawn. So I look at things completely different.


It’s very abstract for me sometimes. So to be able to see that shown in a way that looked very familiar to when we started the agency 21 years ago, seeing that show up in AI was a little concerning.


John Koetsier: 21 years ago. You guys don’t look old enough to have started the agency 21 years ago. Is this generative AI in action right here?


You’re deep faking yourself.


Let’s dig into that, Stacey. Um, you, you talked about representation. Uh, you talked about authenticity. Dawn talked about authenticity. What were you seeing that was not adequate representation? What were you seeing that was not authentic? Yeah. I just think


Stacey Wade: tonality, I think, you know, you’re so quick. A lot of those


conversations that were happening in 2021 about AI show up today in the generative experience. You’re seeing it show up in the advertising. So for us in advertising, last year it was all about, you know, the metaverse; this year it’s all about AI. And the people that are using these tools


sometimes may bring in those biases in the tools that they’re using and the tools, the output of that is showing up and the output doesn’t look like me, it doesn’t have my voice. And I think that’s the part that we’re trying to, as an agency, luckily enough, we have right, you know, left brain and right brain on this call, you know, so to have somebody, you know, that understands that space and has, you know, the background to be able to kind of put, you know, just very logical, pragmatic thoughts wrapped around AI, what that looks like for us, and then to take more of an artistic approach to understand tonality and touch and feel and making sure that we show up as from a culture standpoint, we’re showing up and not being erased or being dumbed down in a way that AI is basically pulling from these.


Inherently biases, inherent biases, pulling those into the images, I think, is something that we’re trying to aggressively, uh, have a conversation about and aggressively be a part of that conversation. So that, the same way that we came into an industry that left us out, we can start to include our thoughts and tone and authenticity inside of, as a part of, the output of AI.


John Koetsier: I think I’m wondering what that looks like.


Go ahead


Dawn Wade: The thing with AI is that it is perfect. As people, we are imperfectly perfect. Right? So, if you’re looking at an image that’s AI generated, all the strokes are going to be right. It’s going to be balanced. It’s going to be symmetrical, right? But a true artist that does that is going to have his signatures within that.


His brush of how he strokes is going to be slightly imperfect. And I think we as people are okay with being imperfect, but AI is looking to achieve that perfection that I think takes away from the authenticity. So, like, that’s my layman’s terms of saying that in a way: the nuances of that person or that artist are some of the things that somebody like Stacey is going to value because he’s an artist, but that generated image.


The eyes are going to have that slight, but that line means something on the eye, or there are just certain attributes that AI is not going to see. But a real person looking for the beauty in something is going to find it missing based off of the AI-generated content.


John Koetsier: So Dawn, AI’s gift to you, and generative AI from Midjourney and Stable Diffusion, is people’s hands.


They’re getting better, but they’re not great. But that’s, I know that’s a different thing than what you’re talking about there. I’m trying to dig into this thing about authenticity because I think that many people might have that from a diversity perspective, but also from an artistic perspective. Right?


Um, and, and I’m wondering, you know, do we have people sitting in, um, a high rise in New York City or, or somewhere in LA saying, give me an urban scene to, you know, Midjourney or Stable Diffusion. And then it’s getting something very stereotypical or what’s happening.


Dawn Wade: That’s exactly what that is. It’s what you think you saw on TV or saw in some magazine or saw in some way, you think that that’s representative of this particular scene, and it’s not until you live those experiences.


You can’t dictate that for the next person. And that’s where DE&I comes into the space, because there’s not a program that teaches us diversity, equity, and inclusion. These are lived experiences, and I can say as an African American woman, but Stacey’s experiences are going to be different because he’s not an African American woman.


He’s a man. And my Hispanic counterpart is going to have a different experience, but this isn’t just a race situation. This is like when you look at the LGBTQ community, the disability community, like those are nuances. They cannot be captured in AI appropriately. So, you know, when it started really ramping up in 2020, it was all about facial recognition.


But when it comes to other things, it goes deeper. You can’t do that from an AI perspective adequately, in a way that’s going to be representative of those communities, even amongst women. You know, but I think there’s a technical aspect of it. There’s also the community usage of what that means and then who it’s developed for, right?


So if it’s developed for a consumer versus just an individual user, it’s going to have different nuances in it. So I think that we can’t address AI with a broad stroke. It has to be chiseled in a way if we want it to be sustainable and safe to use.


John Koetsier: What’s that look like then? Um, because generative AI isn’t going away.


Um, if people are going to use it and frankly, many of those models are open source, they’re out there. There’s a massive amount of innovation and creativity happening there. And it’s kind of a land grab. And it’s also this explosion of capability of people who could never create something like this artwork or that scene or these people or whatever are also doing it.


So that’s not gonna, the genie’s not going back in the bottle.


Dawn Wade: Absolutely not. But we can’t go with that “if you’re not first, you’re last” mentality. And that’s what a lot of the software is. That’s what a lot of the platforms want, to be first to the scene. You have to get away from that mentality when it comes to AI.


You have to invest in the time of connecting with those that you want to target. So if it’s AI that’s targeted to a certain group or certain usage, you have to bring in people who are experienced or have those cultural nods and allow them to give you the inputs to get it right. Because the one that’s going to be long lasting and most successful is the one that’s going to get it right.


The one that’s first is not going to make it to the end point. So I think that’s the moment in which you need to pivot the mindset: you don’t have to be first to market, but you need to last in the market to make it successful, because you’ll be the one who focused on getting it right versus getting it first.


And I don’t think many are that way at this point.


John Koetsier: Stacey, talk about what that looks like. Uh, talk about what that looks like for a brand that plans on being around for a while. Plans on serving its community well. Wants to connect to its community, and AI and generative looks like a cheat code for boom, check, uh, done, got it.


Uh, there we go. Talk about how you believe that the lack of authenticity in that will impact that brand over time. I think that’s


Stacey Wade: something, even, even when you remove AI, that’s something that brands struggle with today. So now you’re throwing in another level of complication with AI, because you ever seen that book, you ever read that book, Blink by Malcolm Gladwell?


Where he just speaks about, you know, you just look at it and you just know something, even though it looks like me, there’s something just not quite, it’s not me. And I think that that’s what, listen, you’re, just as AI is changing the landscape. Let’s not get it twisted. The consumer is also changing and they’re changing really quickly and they’re becoming very smart and they can see something that’s authentic.


They can sniff it out. They know. So I think brands, as much as they want to jump into AI, we’ve seen brands jump into AI quickly, to Dawn’s point. You see brands making this quick charge in to be first, and what we’ve noticed is that they’re getting it wrong. And now they’re trying to like, you start to see them kind of take steps back and not wanting to be first.


And now they’re becoming laggards. So now they’re trying to figure out, okay, how do we actually do this in a way that’s authentic? And I think, uh, that starts with brands actually being authentic so that they can understand the blink effect. Like, okay, I know that this is real. I know that this is not real. I know that we need to make this as perfect as possible, but we need to also bring in those cultural nods.


So you need to bring people that are able to see those nuances, able to understand tonality, able to understand, you know, the hat is actually not a Detroit Lions hat, it’s actually Fear of God. Those are very small details that don’t show up in AI, because AI would take this image and make it into Detroit Lions, but it’s not, I’m not, you understand what I’m saying?


Tigers, not Lions. I get it, but it’s not, it’s actually, there is a nuance, a cultural aspect to the logo that has to come in on the output. And a lot of what you’re seeing, you even mentioned it when you talk about urban communities, and AI inherently is going to bake in some biases, what it views as what an urban community is, but my urban community is completely different than, say, your urban community.


So brands are going to need, we say, you know, when in Rome, bring a Roman. They’re going to need people that really understand these cultural nods and nuances to be able to one, protect them from themselves and add a layer of authenticity to it that actually is going to be beneficial, not only to them, but the consumer that they’re trying to reach.


John Koetsier: It’s pretty crazy challenging, isn’t it? Because our technology is a reflection of ourselves, and sometimes it amplifies, uh, bits of what we do and what we create, and all that stuff. And if we look at our culture over the past 30, 50 years or so, and then you look at, um, the token person of color in the TV show in the 70s or the 80s or whatever, and how that person was represented or how that person needed to be represented, uh, the corpus of knowledge that AI is drawing on, whether it’s that or whether it’s just remnants of that


cultural detritus that accumulates over decades and then manifests itself in my image of what somebody in Detroit who is black and grew up there is versus your image. It’s such an insanely complex web of everything. How can anybody get it right?


Stacey Wade: You’re, you’re, you’re, you’re nailing it. But I think that’s where Don says something that’s so, it almost clicked like a flag in my own head.


It’s like, you know, it’s the ones that are taking the time to not, you know, rush down the hill, but are actually taking the time to walk and understand what’s actually happening so that they can give you the best output. So, being able to curate is similar to, you know, how we curate our own agency, being able to bring different people into the agency.


It’s not a black agency. It’s not a Hispanic agency. It’s not a white agency. It’s a culture agency. It’s being able to weave in these different cultures, to be able to slow it down fast enough so that you can speed up as the, as the technology is moving forward. So it’s not a matter of saying it’s going to go away.


We know it’s not going away, but we are saying that we want to offer up cultural nods and nuances to make it better.


Dawn Wade: But I think anything without checks and balances is dangerous. Anything that doesn’t have checks and balances is dangerous. So when it comes to AI generated content and things like that, it’s generating that, but what’s the check and balance?


When you get that output, do you then go and check to make sure that it’s going to resonate or that it’s safe or it’s not offensive? Or is it assumed because you did it using AI that it’s safe already? So where’s that safety check? Where’s that quality assurance that needs to take place? And those are things that I want to hear from, from the developers in terms of how that’s coming to market.


To have those safety checks, but time and budget often eliminates that, and that’s the reason for AI. So those are some of my watch outs when it comes to that.


John Koetsier: Dawn, talk about how you see generative AI developing over the next couple of years. Um, there’s a, there’s a, a vast group of people who now have access, whether they’re open source models or whether it’s a model you have a subscription to, uh, in Discord or on the web or an app or something like that.


Millions, maybe probably tens of millions right now are actively building stuff with generative AI. How do you see that developing over the next couple of years? And how do you want to impact the development of that to make it something that you think is net positive globally?


Dawn Wade: There has to be checks and balances. Just like you develop a new food or a new drug, there have to be checks and balances before you can, to me, uh, infiltrate the world with it, and we don’t have that there.


And I think that there are some government oversight groups that are being developed to do that. But part of that, you know, it takes the fun out of it. So think of it: if there were 10 million solutions, right? 90 percent of them may be great, but there may be 5 or 1 percent of it that could really set us back.


And I think that’s why we need oversight, because that 1 percent could really, um, mess something up for us, whether it’s between our country and another country. When you can take somebody’s voice and their likeness and create a video that anybody, the human eye, could not know is fake, what happens as we’re debating or working with countries that we don’t have the best relationship with?


And it looks like our president is sending a message that may not be true or doing something. And you can fake so many things, and you only have minutes to react. So that’s what I mean about checks and balances. And as a person who loves my individualism, I don’t necessarily love the thought of having oversight at that level, but to know that


it could be something very dangerous, and somebody could get hurt or killed. I think that anything that crosses those boundaries that could really hurt people, kill people, represent something in a way it’s not intended, requires that, to my understanding.


John Koetsier: I think, honestly, everything negative that you just wanted to avoid there is going to happen and is almost inevitable.


I, I, I think that there’s large language models that are out in the wild that somebody, somebody who’s a neo-Nazi, will train on that content. Um, I think there’s, there’s generative AI art models, uh, that are out in the wild that somebody will drain, uh, train on very racist ways of imagining how different people look.


I think that you will have weaponized AI and generative AI and deepfakes globally, and I don’t know that there’s any solution. I don’t, we’re going to have to invent some new technology. It’s funny, I mean, I think Elon Musk has created cultural vandalism with Twitter, um, and lots of other challenges, but he wanted to invent a new AI that will determine the meaning of reality.


Great! Um, whose reality? Um, but I, I don’t know how we’re going to avoid this semiotic catastrophe. This, this dissolution of meaning, this, this destruction of truth, because I don’t know how we can possibly escape it. I hope somebody smarter than me has a plan. I feel like you, I feel like you


Stacey Wade: and Dawn could get together and she reads the books on this, like, this is like her favorite subject matter.


So I feel like you all could talk about this all day, but I agree. That’s the part that scares, scares me is, uh, you know, Dawn, I kid you not, Dawn’s been reading these books that speak to this, you know, grids, you know, hostile takeovers, for a long time. It’s kind of like her, her, you know, when we go on vacation, that’s one of the things, she always picks up a book.


John Koetsier: This is how she relaxes.


Stacey Wade: Yeah, it’s a hundred percent, but you know, it was crazy two years ago, even, speaking with her when she would throw this at me. You’re kind of like, oh, it’s a nice book. To see this come into reality, like some of the things that she, you know, we’ll have conversations about, to see some of them actually become real.


It’s almost like watching the Simpsons, you know, how they actually like predict the future. It’s like, you’re seeing it happen in


John Koetsier: real time. Dawn, save us. You’re the technologist. What are you going to do?


Stacey Wade: Somebody has got to do


Dawn Wade: it. So. The thing is, if you can dream it now, it’s like, if you can dream it, you can do it, and that can be scary when people don’t have your best intentions at heart, you know, so I don’t know that anybody has a solution because it’s such, like you said, thousands and millions of solutions that are open source for somebody to take hold of that and to customize it, and 90 percent of the time it’s going to be for good, but it’s going to be some of those that aren’t for good, you know, And I’m just not looking forward to that in any way.


John Koetsier: It’s a crazy, challenging world that we’re moving forward in. And I guess, um, I’m going back to Gandalf’s wisdom in Lord of the Rings. We have to live in the times that we’re in. We have to do the tasks that


Stacey Wade: we have. No, I’m going to have this one. I’ll do this one. So keep going.


John Koetsier: Do the best we can. Uh, hopefully there will be some new technological solutions that will tag, uh, when something is artificially created. And I know that we can detect it right now, but it’s going to get to a level where the human eye, as you were saying, cannot detect it. The human ear cannot detect it.

Right? We’re going to need some solutions that tag something that is real. Even as our definition of real changes technologically, uh, it’s a crazy world we’re moving into, Stacey.

Stacey Wade: It scared me a little bit. I mean, I’m excited about it, but I’m also scared the same, the same. 

Dawn Wade: Yeah. When you think about like where we were a couple of years ago when they’re like self driving cars, you know, like, and I’m like, Oh, that’s cool. You know, but now like it really can happen, but then you see, you know, one or two really bad stories and it makes you doubt the, you know, efficiency, like, well, who tested this or how did that?

So can you imagine something like this at such a humongous scale? You know, so I think that within our generation, you know, and what’s going to be that in 50 years, is the landscape won’t look the way that it looks now, you know, and that’s exciting. But I think we have to have the foresight to get ahead of it.

And figure out how to set boundaries so that, you know, we don’t mess it up for ourselves or for our future generations. So we’re going to, somebody has to have that brain power and that oversight to do it. 

John Koetsier: Yeah, and government is painfully slow in all these things. So they’re like 10 years behind. Right.

We’re going to have our generative AI “Senator, we sell ads” moment. It’s going to be, you know, 10 years too late, but hopefully someone will figure it out and the companies, uh, big tech, will also do the right thing. That’s … wow. We’ll see. Who knows? I gotta say it. Go ahead, Stacey. We hope. We hope. We hope.

Absolutely. I gotta say it’s been, this has been fun. It’s been, it’s been interesting. Um, it’s been great. I usually look at stuff, um, and I talk to people who are inventing new technology. You’re dealing with the consequences of it and using it, and, uh, talking about the impact of that is also very, very useful.

Thank you so much for taking some time out of your day. Thank you. 

Dawn Wade: Thank you so much. And I appreciate this.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!


Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Apptronik has a totally different approach to building humanoid robots


Who will win the race to have the world’s first usable general purpose humanoid robot? I thought I knew all the companies making general purpose robots:

  • Tesla
  • Sanctuary AI
  • Figure AI
  • Fourier Intelligence
  • Agility Robotics
  • Boston Dynamics

I was wrong … there’s probably a bunch I don’t know. But one that popped up as interesting is Apptronik. They’re based in Austin, TX, they’re partnering with NASA, and they’re building Apollo, a 5’8” 160-pound robot.

In this TechFirst podcast, we chat with CEO Jeff Cardenas. And we learn that he has a completely different approach to building a humanoid robot than probably every other robotic company out there. Keep scrolling for more, or hit the Forbes post for my take on one part of what Cardenas said regarding robotics and international competitiveness.

(Subscribe to my YouTube channel here)

TechFirst audio podcast: Apptronik’s ‘Apollo’ humanoid robot

Subscribe on your favorite podcasting platform:


Transcript: humanoid robots, work, and the future with Apptronik CEO Jeff Cardenas

John Koetsier: Who will win the race to have the world’s first usable general purpose humanoid robot? Hello and welcome to TechFirst. My name is John Koetsier. I thought I knew all the companies that are making general purpose robots, right? There’s Tesla, there’s Sanctuary AI, there’s Figure AI, Fourier Intelligence, Agility Robotics, Boston Dynamics.

I’m sure there’s a bunch more that I don’t know, but I know that I was wrong, because one that just popped up, that launched this week, is Apptronik. They’re based in Austin, Texas. They’re partnering with NASA. I want to hear more about that. And they’re building Apollo, a five foot eight, 160 pound robot.

Here to chat is CEO, Jeff Cardenas. Welcome, Jeff. 

Jeff Cardenas: Thanks for having me here. 

John Koetsier: There’s so much competition right now. What is going on? 

Jeff Cardenas: It’s an exciting moment. I think for robotics and the world as a whole, we finally reached an inflection point. And it’s funny because when we first got started, everyone told us not to do humanoids.

And now everyone’s getting into the race. But it’s been exciting and interesting to see it all play out. 

John Koetsier: Everyone is doing humanoids, and that’s a real challenge, right? There’s pieces that are known and capable. Shoulders, necks, maybe, right? Walking with a bipedal robot is not necessarily the easiest thing in the world, but it’s been done, and it can be done.

Really challenging parts, obviously in the brain and the hands and fingers, we’ll get into all that. Tell us about Apollo, which you’ve called the world’s most capable humanoid robot. 

Jeff Cardenas: Yeah, so Apollo is the result of many years of hard work and research and development. At Apptronik, we’ve built 13 different robots overall and eight iterations on humanoids.

So we started in humanoids when we were still working in the lab at the University of Texas at Austin. Started working with NASA back then for the DARPA Robotics Challenge. Two of my co-founders, Dr. Nick Paine and Dr. Luis Sentis, were on the Valkyrie team. And basically, Apptronik was created in 2016 to commercialize the work out of NASA.

So back then, Valkyrie was … millions of dollars. It was 300 pounds, but it was one of the very first electric humanoid robots. So we felt like, hey, general purpose robots, more versatile robots, are going to be the future. Electric is going to be the way to go. And we want to build a commercial version of this.

And so, got it out into the world now, finally, seven years later, and many years of work to get here. It’s really,

John Koetsier: It’s great to hear that because you launched with Apollo yesterday, right? And so the world wakes up and says, Oh. What’s this new company? What are they doing? Right? And it’s like that old phrase, like an instant success.

You just weren’t around to see all the work that went into it, right? So you built a bunch of robots already. You built a number of iterations on the humanoid robot. Talk about that journey a little bit. 

Jeff Cardenas: Yeah, for us, we always saw that this was really a technology problem more than a market problem.

I think a lot of entrepreneurs and other folks are looking to get into this space because they see the market opportunity. But for many years, the technology problems had to be solved to make it viable. So it’s interesting to hear you say, walking, we can do that. That was not the case even

five years ago. We had no idea how to do dynamic walking. Boston Dynamics was really 10 years ahead of academia in terms of the type of walking that they were doing. And everybody was trying to figure out how to catch up and do what they were doing for many years. And so, piece by piece, we had to solve these problems.

And, the way that we viewed it was, this is a technical challenge, and we need to solve the key pieces that are needed to make this real. How do we go towards viable, commercial, general purpose robots? And we basically just broke the problem down and solved it from first principles.

So we started with electric actuation for humanoid robots; we’ve done over 35 iterations on electric actuators. Some of those are small, medium, large of the same family of actuators, but a tremendous amount of R&D there. Elon’s talked about the need for actuators for these robots. And that’s been our body of work.

That was my co-founder, Dr. Nick Paine. His thesis in grad school was next generation actuation for legged robots. And, the electronics didn’t exist. We needed more real time communication because you have a lot more sensing in these robots. And then certainly the software, which I’m sure we’ll get into, but we started basically at the foundation and we bootstrapped the company.

So I mentioned, it’s funny that everyone’s into humanoids now, because when we got started, everyone told us, do not build hardware, focus on the software, focus on the AI. And the problem was the robots didn’t exist. And we’re like, well, what are we going to put this AI on eventually as it matures and develops, we don’t have the robots yet.

So we have to build these systems. And so we bootstrapped ourselves, and what we would do is we would work for other companies. We’ve worked with several big automotive companies looking at things like humanoid robots, and we’ve helped them build their systems. We’ve built and delivered a variety of systems, including some of the robots for Sanctuary.

We partnered with Sanctuary in the early days and they’ve been a great partner all along, and we built their first prototype for them. And each time we would build these robots, we would iterate, we would learn something new, all getting towards the point of building the robot we always ultimately wanted to build, which is Apollo.

And so, the thing that I love about robotics is, you got, at the end of the day, you can talk about what a robot’s going to do, but you have to show it in the real world. So our philosophy has always been show versus tell. So we didn’t have a need to really get out there and say, Hey, we’re going to do this.

Our view was like, well, let's do it, and then we'll show off what we do, and we'll let our work speak for itself. And so we really had our heads down over these years, just trying to get this stuff working, solving the technology problems, iterating pretty quickly as well. And we've had robots walking for 7 years.

We've iterated; we've built full systems in 3 months. So it's not that we've been taking our time doing this. We've actually been cracking these problems, and we're at this point where we've met the threshold where everything is good enough, which I think of like the personal computer in 1982, right?

It's like, it's the beginning: a lot of things had to build on each other and converge for this breakout moment to happen, but we've got that work behind us. And I think the robot we've put out there, we're really proud of, and excited to see where it goes.

John Koetsier: I love that approach. I really love that approach because you didn’t put a guy in a suit and walk up on a stage and say, here’s our robot.

Right? That has a couple-hundred-year history. I'm glad you didn't do that. Super interesting in terms of the approach. Before we get into the details of the robot, give us the high-level specs. I mentioned five foot eight, 160 pounds. I think it's five-hour battery life. Is it a swappable battery?

Is it a hook it up and charge? How fast can it go? What can it do? 

Jeff Cardenas: Yeah, so it's 5 foot 8, weighs 160 pounds, it can lift 55 pounds, and it has a 4-hour battery initially, and it's swappable. So we're targeting 22 hours a day, 7 days a week uptime. It can also be tethered as well, and opportunity-charged, like what's done in autonomous mobile robots.

It is fully electrically actuated. As I mentioned, we've had a tremendous amount of iteration in that space over the years. So we think that we've got a really unique solution for performance and cost, right? Performance at cost. So there's a trade-off where you can purely focus on performance, which to me, Atlas is an amazing machine.

The Boston Dynamics robot … it's like a Formula One car. It's really performance-optimized, but Atlas is very difficult to mass-manufacture; it's got custom hydraulics and other things in it. And so what we've really focused on is: how do we get performance at cost? How do we find the right trade-off and ultimately build a commercial product that we can build for less than $50,000 as our target?

And it can still have the performance that's needed to do the work we need it to do. And then through COVID, we learned a lot about the supply chain. And so a lot of the ideas that are now in Apollo are about getting around supply chain constraints, so that we can really scale this thing up into big volumes, and we don't have any single-source vendors.

In terms of what it does, initially we're focused on what we call gross manipulation, as compared to dexterous manipulation. And we've learned that because we've built a lot of robots over the years, and dexterous manipulation is very difficult. We have a ton of respect for folks that are going after that in this space.

But it's a really difficult problem. And the exciting thing for us at this juncture is we don't have to solve that problem to get these things out into the world. Turns out there's a huge labor shortage in logistics, and a lot of those tasks are just moving boxes or totes from point A to point B. And that's something that we know how to do now.

And so that's where we're going to start. But the beauty of a general-purpose robot is that it's a software update away from doing something new. And so we'll continue to get more advanced as we move into this and get them out into the world.

John Koetsier: Super interesting to talk about hands and gross manipulation versus dexterous.

One robot CEO that I chatted with before said the robot's basically a hand delivery mechanism, because the hands do all the work. Right. And he said, actually, creating a robotic hand like the human hand in capability is beyond our capability right now. It's not just beyond any one company; it's beyond human capability right now.

There might be some different opinions on that, we'll see, but that's where, of course, maybe half of the degrees of freedom that a robot might have exist. So it's a really challenging thing. And then, of course, wear and tear. All those little motors in the hands, skin, whatever you use for skin: super, super hard.

So I see that challenge. You talked about a software update, which is amazing. That's incredible. Is there a possibility of a hardware update? So let's say three years from now you crack human hands, maybe not quite as good as this, because you want to hit your $50,000 target: 90%, 70%, whatever's enough for many manufacturing-type jobs.

Can you, like, take a hand off and plug a new hand in?

Jeff Cardenas: Yeah. Yeah. So Apollo is modular. There's this big debate of wheels versus legs, too, from the traditional folks in automation. Like, what do you need legs for? We can use wheels in all these applications. And what we've done with Apollo is just take everything we've learned, because we've been building these robots with customers over many years.

And so we're able to take all of that learning and inject it into Apollo. And some people are going to want these things on wheels. There's a huge number of advantages to legs; we think legs will win the day overall. There's this problem with legs that the robots can fall over, and so in some cases you can have wheels.

The beauty of robots is you can have your cake and eat it too, in that you can build them to be modular. So Apollo is modular at the torso. So if you want to put it on wheels, you can throw it on wheels. We think that will demonstrate that legs will be the most versatile platform long-term.

And the reason we know about the challenges of wheeled bases is because we've designed them. We've deployed versions of humanoids on wheels and we've learned from that. And the same thing is true with the end effectors. I'm sure that's Geordie that's saying hands, and I agree that long-term the humanoid needs hands.

But in the near term, there are many applications where you don't require a full five-fingered hand, and you can do things with a one-degree-of-freedom hand. There's a whole range of things you can do, a whole range of things that robots do today with pincer grippers. So you can expand over time; you don't have to solve all the problems at once.

I have a ton of respect for Geordie. I've worked with Geordie over many years. He's a visionary. But we've just taken a different approach in terms of how we've thought about that. And we want to partner with folks like Sanctuary; they've been a big partner with us. They can put their hands on a robot like Apollo, and we can work together as they start to crack the dexterous manipulation problem.

So yeah, it's modular at the chest. It's modular at the end effectors. It's also modular at the head, in terms of putting different sensor payloads on it. So we have a standard sort of camera-based vision system, but there's also debates about LIDAR. Do you need LIDAR or not?

Our vision approach doesn't need LIDAR, but in some cases, when you start to put these robots outdoors, if you want to add LIDAR, you can. This is something I think Boston Dynamics has done a really good job of with Spot: they created the ability to put different mission payloads on the back of Spot. That's something we learned from along the way, and part of what's designed into Apollo.

John Koetsier: Love it. And as you hinted, Geordie is Geordie Rose. He's the CEO of Sanctuary AI and the former CEO of a quantum computing company that sold a $15 million quantum computer to Google, and which is still around. There's so many places to go here. I do want to talk about the brain. That's really challenging, right?

How are you building intelligence into your robots? Is it pre-programmed maneuvers? Is it versatility with a certain level of intelligence? Talk about how you're doing that.

Jeff Cardenas: Yeah. So I think the long term goal is to start to get towards more and more intelligence overall.

But I think in terms of AI and intelligence as a whole for humanoids, you can really break it down into two buckets. The first bucket is physical intelligence. That's like coordination, hand-eye coordination, the ability to balance; walking is part of that. That's physical intelligence.

The other side is cognitive intelligence.

So how do you make decisions? How do you reason about the world? How do you abstract ideas, things like that. What I'd say is that we've really focused on building from the bottom up, and there's different approaches. You can go from the top down, as in, start with the intelligence and think about how to build a machine around that. We've gone from the bottom up, which is: start with the actuators, the motor controllers, the electronics, really the basic building blocks, and then build up into intelligence.

My view was that you want to build the most capable platform you can possibly build, and then you can think of these intelligences as software that you can put on top of the robot. There's people that disagree with that and say, well, in order to get to full intelligence, you need deeper integration. And I think we'll see, but we've really focused on this physical intelligence.

And the exciting thing for me, and you had a question that maybe we'll get to, but it's like, where are we at? I think of this as really an evolution of what's already being done out in the world, and we don't have to solve new problems to get humanoids out into initial applications and show utility.

For humanoids to be fully realized, yes, you need much higher levels of intelligence than we have today. But we have a lot of the building blocks already. So, for example, if you think of the evolution of robotics: in 2004, collaborative robots came out. Those are human-safe robots.

By 2010, compute got good enough, batteries got good enough, and we could have mobile robots, and we started to do things like SLAM and navigation. By 2016, machine learning came on the scene, and we could do intelligent grasping. So what we've done is build on all these things that we've seen work in production, really build from the bottom up and integrate those things together, and taken maybe a more conservative approach than some people are taking: basically, use what we know works and what we know we can deploy into the world today.

And then we can always add these other, more difficult sort of R&D problems later on down the road. And so, we can dive in deeper where we want to go.

John Koetsier: It's fascinating to see the different approaches, and that's the beauty of the sort of free-market innovation system that we operate in the Western world, at least: you do have those people who are coming top-down and want to build intelligence, and the intelligence will do everything.

That's a risky bet. It's an amazing bet if you make it and you win, because if you win, you've solved everything, quote unquote, everything, right? But if you don't win, you end up with an expensive boondoggle that doesn't accomplish anything. It has to be really good, because you can't have a robot out in the wild, maybe making a sandwich or slicing something up or using a tool, that is potentially dangerous.

Your approach in software mirrors your approach in hardware, which is starting from the ground up. What can I do? What do I know I can do? What do I know I can do today? And that seems to be a very pragmatic and practical way of doing it. I think that's super interesting. I do want to go deeper into what this means and what it looks like, but maybe before we do that: you're doing this in Austin, Texas.

And you feel that is significant. Why? 

Jeff Cardenas: I think if you look, there's people out there saying there's going to be more humanoid robots than people one day. Like I said, it's funny to me to have that out there when everyone thought these things weren't viable even five years ago; they were novelties.

And I think that's exciting. I'm not sure that they'll all be humanoids, but I think there will be a lot of humanoids, and I think it's the most versatile platform you can build. But the reason I think Texas is important is: where are we going to get all the robots we need?

So if you look at the world today, there's hundreds of humanoids, maybe, right? There's not very many of these systems. So how are we going to go from hundreds to thousands to millions to maybe billions? And I believe that Mexico is going to play a big role in that for North America. I think if you look at Mexico relative to other countries where we're doing manufacturing today.

It's got a lot of advantages. It's about a third the cost of labor, depending on how you measure it. It's geographically really close to the U.S. market, so we can get from Monterrey to Texas in 2.5 hours, anywhere in the U.S. in 24 hours. And they have the skill sets to be able to pull this off.

And so I've always felt like the Texas-Mexico corridor was going to be one of the most important manufacturing corridors in the world over the coming decades. And this is something that I was saying coming out of grad school. We had two key ideas when we started Apptronik.

One was that robots had to become more versatile. And two was that somebody had to be building robots domestically here to serve the U.S. market. We didn't have any major domestic OEMs. And we got started. And so if you agree with the premise that, hey, we're going to need domestic manufacturers long-term, the question is, where is that going to happen?

And typically, it's been on the East or West Coast; the big hubs for robotics have been California or Boston. But I think Texas actually has a number of unique advantages over those hubs, much better adjacencies than those other places, and so I always felt like Texas was the place to do it.

We're already producing gearboxes and a lot of the heavy machinery; we've been doing a lot of that for the energy industry. But as you look towards what's coming next, there's a lot of people in Texas that are asking: okay, as the oil and gas boom ends, which it will at some point, where do we apply all of this industrial base that we already have built?

And for a number of reasons, I think robotics actually makes a ton of sense for that. And that’s why I think Texas is going to be important. 

John Koetsier: Well, it is interesting. Cost of living is certainly less, proximity to Mexico. I was going to make a joke, what is this access to labor that you’re talking about?

Aren’t the robots going to build the robots? But I’m sure there’s going to be some tricky jobs for humans in all that stuff. Let’s look forward a little and let’s say we’re in a future where we have tens of millions, and we’re progressing. How does this change our world?

How’s it changing our economy? 

Jeff Cardenas: I think that it fundamentally changes the way that we live and work. And the reason I think that is because as humans, our most valuable resource is time, and our time here is limited. You had great thinkers in the last century; John Maynard Keynes famously predicted that we would have a 15-hour work week.

And so what I think changes is that instead of doing things that we have to do, that somebody’s just got to do, we can now have machines that do that for us. And what that does is free us up to spend time on things that we really value. Why do we spend more time at work than we do with our families?

And the answer to that today is, well, someone's got to do that. We've got to keep the economy running and going. We have to provide goods and services to our fellow man in order to keep all this moving. But I think what robotics has the potential to do is change that equation. What if the cost of goods and services dramatically falls, because they'll basically slope towards the cost of the raw materials as the cost of labor continues to go down?

So goods and services could become much cheaper and much more abundant than they are today. And that frees us up to spend time in a way that we want. And there's this interesting quote that I heard: what did Darwin, Galileo, and Newton all have in common? They were all very wealthy, and so they had time to think and contemplate their existence and think about these higher-level ideas.

And today you have people that are stuck in a cycle of working all the time to make ends meet. And I think an optimistic version of the future is you start to free people up. They're able to think about things like their own health, about taking care of each other. We start to fix the healthcare and education systems.

And ultimately we evolve as humans. And I think that, applied in the right way, it could be a really positive thing.

John Koetsier: Super interesting. Okay. So, you’ve launched the robot, you’ve launched Apollo. What can somebody buy or rent today? 

Jeff Cardenas: So today we're working on pilots. And my whole philosophy, a core value for us at Apptronik, is show versus tell.

So what we've done is we've built a demo center here on site at Apptronik, where we're mocking up the use cases that we're looking to deploy Apollo into. For the remainder of this year, we're basically signing up pilot customers, and we have some of the marquee customers in the world that have already signed up. We're doing these on-site proofs of concept through the rest of this year, where we're demonstrating it. What I tell the partners we work with is:

If I can't do it here at my facility, I can't do it at your facility. So make sure that you're comfortable with the performance of the robots here, and then we'll get it on site early next year. So next year we start with the initial sort of fielded pilots. And we've fielded a lot of systems up to this point.

One of the things I tell people when they come to Apptronik, they're looking for all the robots, and we have a lot now. But for many years, every robot we ever made, we sold, because that's how we stayed alive. That was our business model. We were entirely funded on revenue for the first five years.

We only just raised money in the last couple of years. But next year, we'll get them out into the world. We're not putting out pricing just yet, but a big part of what we've cracked is the ability to make these things affordable. And so that's going to be a big part of our value proposition as we move ahead.

John Koetsier: You appear to have been remarkably capital-efficient. I know many of the other entities, companies, departments that are building or trying to build general-purpose robots; they've raised $100 million, or they're part of a trillion-dollar or multi-billion-dollar company.

You've been scrappy, you've been bootstrapping; you recently raised what, $14, $15 million, which is interesting. But that all seems to accord with your philosophy of start small, build what we know, even down to your go-to-market strategy of bringing people in and seeing what it can do. Super, super interesting.

And I really like that, actually, at $50,000, assuming you can get there, because that's your goal; it's not there yet, and you're not in mass manufacturing yet. Assuming it's $50,000, that's a very interesting price point, because if you look at the kinds of jobs that you're going to place these in, in logistics and stuff like that, you're spending probably a bit more than that.

If you look at entire costs for having an employee, and benefits, and other things like that. And that sounds interesting. Do you know how you're going to bring them to market? Are you going to sell them outright, or are you going to have a SaaS solution? What's your thinking there?

Jeff Cardenas: Yeah. So, once again, RaaS, right.

John Koetsier: Robots as a service, not software as a service?

Jeff Cardenas: Robots as a service. And one of the exciting things about why this is feasible now is we already have business models that exist for selling mobile robots. They didn't exist before the AMR market took off, but now there's lots of companies that are selling mobile robots to these exact same markets in logistics.

And so customers now know how to buy them. The two ways they're buying AMRs today are either robots as a service or CapEx, typically with a SaaS component to that. Some of the larger companies want to own their own fleets; they want volume discounts and other things, and so they'll buy them outright. But I think largely what you'll see in the early stages of the humanoid market is a robots-as-a-service model, where they want to try them out.

They want to see how this works. There’s people that are worried about technical obsolescence, right? Like how quickly is this going to mature and develop? Am I ready to buy a fleet yet? Or do I want to wait some time? And so for a number of reasons, I think robots as a service is going to be important for this.

And now as an industry, we know how to do that. There's third-party financing groups that have already been set up around this. It's now something that the market understands, which wasn't the case five years ago.

John Koetsier: I keep thinking about additional questions. We’re having a great conversation.

I want to ask this one. Where do you situate the United States, and maybe North America a little more broadly, in terms of global innovation in humanoid robots? Because you're obviously U.S.-based. There's a bunch of others … Figure.

We’ve talked about the one in Vancouver, right? Geordie Rose’s company. 

We've talked about Boston Dynamics, right? So yeah, Sanctuary is the one in Vancouver. I visited Robot Island, as it was sort of known, in Denmark. It's got to be about three years now, just pre-COVID. For about a couple of decades, there has been a global concentration of companies in automation and robotics in Odense, Denmark.

And I haven't seen a humanoid come out of there. They've been instrumental with some of the big manufacturing companies globally with large-scale robots and automation, but I haven't seen a humanoid robot come out of there. Why am I seeing so many in the States, and where do you situate the U.S. in terms of global innovation for humanoid general-purpose robots?

Jeff Cardenas: Yeah, so this is a topic that is important to me, because I think the race is on. There's a lot of interesting stuff that's coming out all over the world. I think one of the reasons that you see the U.S. leading right now is because the U.S. government has invested so much money in the R&D that was required to make this happen. So whether it's Figure, Boston Dynamics, or us … many folks have teams that have their roots in something called the DARPA Robotics Challenge. For the DRC, DARPA injected tens of millions, I think it was over $100 million total, to really advance the state of the art for general-purpose robots.

And that was 2013 to 2015. 

And the seeds were planted back then; we're just now seeing the fruit borne of that investment. But that's really what made Boston Dynamics big: DARPA funding in the early years. Atlas came about for the DRC. So the very first Atlases were designed for the DARPA Robotics Challenge.

And that's really what spurred a lot of the innovation. There has not been as much government funding in this space since the DARPA Robotics Challenge, and I think that if the U.S. wants to continue to lead, the government's going to need to step in in a big way and really inject more money into it.

But this is the same thing that happened with autonomous driving. There was something called the DARPA Urban Challenge, and there were a couple of DARPA challenges that really seeded the technology, moved it from the lab, and gave it the first nudge out into the commercial space. And then companies were built out of that.

And so Jerry Pratt, who's over at Figure … he had a big role in the DARPA Robotics Challenge and did great work there. And then Boston Dynamics certainly came out of that. In terms of who's leading in the world and where we go from here, I think China is making a big push in humanoids in particular.

And the government there is really stepping up to make it happen. And so I really want to see the U.S. government respond. This is a race that I think is important long-term: how should these be deployed? And I think it's an area where we can lead. There's other great countries doing amazing things as well.

There's great work happening in Korea, certainly in Japan as well. And then all across Europe, the stuff coming out of ETH Zurich and groups like ANYbotics; they're doing really great things, though not in humanoids: more versatile, cutting-edge, next-generation robots.

John Koetsier: It’s important.

It is really important, because if you win here, you win. We talked about the cost of information approaching zero. We've talked about the cost of software approaching zero, because replication is essentially free, right? If you achieve a working and capable general-purpose humanoid robot, the cost of labor, as we've already talked about, starts to approach zero as well.

And all of a sudden, that opens up a ton of capability for manufacturing cheaply, onshore manufacturing, other things you want to do. Jobs you couldn't pay for before, maybe environmental reclamation, maybe public works projects you couldn't afford before, all of a sudden become affordable and desirable, along with everything else a society wants to do.

And if you are concerned about the declining population in some of the older nations of the world, Europe, Japan, those sorts of places, China as well, that's all critically important.

Jeff Cardenas: Yeah. And I have an interesting story about that, because the U.S. actually invented the very first industrial robot. It was invented in the late '50s and went into a General Motors factory in the early '60s. It was called the Unimate arm, and the company that built it, Unimation, actually ended up folding in the '80s.

I've heard a variety of different stories on what happened, but long story short, they didn't keep getting funded. They didn't keep getting backed at a critical time. General Motors, I think, was involved somehow and pulled out. So the U.S. invented the very first industrial robot and effectively lost the first wave of industrial automation.

Of the big four that were producing all the industrial arms, two were Japanese, Fanuc and Yaskawa; one was Swiss, ABB; and the other one was German, KUKA. And so I think this is important for policy makers and others, because this next wave dwarfs the first wave in impact and size. And so I think it's important to get it right.

But it's an interesting story and something that I try to tell everyone that will listen whenever I get a chance.

John Koetsier: Jeff, this has been a wonderful conversation. Thank you for taking the time. Yeah. 

Jeff Cardenas: Thank you very much.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: