
Making games with your voice, with Roblox chief scientist Morgan McGuire

AI in games with Roblox

How will AI change games? How is AI changing games?

In this TechFirst we chat with Morgan McGuire, Roblox’s Chief Scientist and a former Nvidia research scientist. He tells host John Koetsier how AI is not only enhancing game creation through generative AI but also revolutionizing multiplayer game safety with advanced AI moderation systems.

We chat about the explosive growth of Roblox and share insights into how AI is shaping the future of interactive, social, and immersive gaming experiences.

Ultimately, McGuire says, we might be creating games with our voices in the not-so-distant future …

(Subscribe to my YouTube channel)

Subscribe to the audio podcast

 

Transcript: making games with your voice, with Roblox chief scientist Morgan McGuire

This is AI-generated; it is not perfect.

John Koetsier (00:01.72)
How will AI change games? Hello and welcome to TechFirst. My name is John Koetsier. The scope of the question I just asked is impossibly broad. AI is being used, will be used, and in many cases has been used for a long time in all aspects of game development. We’re talking gameplay, game management, smart NPCs, endless levels, 3D interactions, faster development, more insane art, global communications with billions of players regardless of what language they speak,

and importantly, democratization of game making. There’s so much. Here to chat about some of the biggest changes is an OG of AI. He’s a professor of computer science and a former research scientist at Nvidia. Tell me you kept your stock options. He’s a consultant at Unity. He was the senior architect at Oculus, and he’s the current chief scientist at one of the hottest game platforms on the planet, Roblox. Welcome, Morgan.

Morgan (01:00.924)
Thanks John, it’s great to be here with you.

John Koetsier (01:03.87)
It is great to have you again. Last time we chatted, we were on stage at Collision in Toronto. What’s new since then?

Morgan (01:11.868)
So lots is new. One of the most exciting things for me at Roblox was you and I had talked about our 4D AI initiative we had just announced publicly at Collision. And this is the idea of going beyond 1D text, 2D images and materials, even the kind of nascent state of the art of can you make 3D models with generative AI, which is very much an emerging area, not an established category.

What we announced at Collision was going beyond that to 4D, and interaction is that fourth dimension. This is how it plays into games. So not just can you make 1D, 2D, 3D objects with AI, but can you bring them to life? And the exciting thing for me is that we’ve since shipped several of our initial features in that space. The most exciting one, I think, is our avatar auto setup, because it goes directly to letting anybody make their own 3D persona

from scratch now instead of having to just choose from an existing catalog. So we get the full catalog, all the clothing, all the makeup, all the accessories. And if you can’t find what you want, you can make it now with AI. So super exciting.

John Koetsier (02:23.234)
This is super exciting and we’re going to dive super deep into that, because this is like next generation, next level stuff. I mean, we’re getting excited about, you know, image generation. That was maybe a year ago, half a year ago. We’re getting excited about AI video generation, right? And that’s still, you know, you see some cool stuff and you see some, wow, that went weird places, right? And you’re talking about real-time interactive multiplayer,

you know, stuff that works. So that’s super hardcore. We’re going to get into all that, but I want to hit some stuff first. One thing that I saw: VC Matthew Ball wrote a post, and wow, it was pretty impressive, some of the numbers on Roblox. 308 million monthly average users. That’s twice Steam. That’s three times PlayStation. That’s three times the annual users of Switch. That’s 5x Minecraft. Wow,

and more than twice Fortnite. And he said it’s likely Roblox has more monthly users than the entire AAA gaming ecosystem combined. This is astonishing stuff. Six billion hours spent a month on Roblox, that’s double what people spend on Disney Plus. And also the hours per daily average user are growing. So it’s not just numbers growing, but the amount of time that people are spending is growing.

growing in APAC, the rest of the world, not just Canada, the United States, other places like that. An interesting fact he mentioned was that Roblox never contracted after the pandemic. I mean, most games did, right? Games exploded as a category during the pandemic. At home, lots of time, what are you going to do? Play games. Roblox never contracted. And he said your run-rate spending should hit $4 billion this year, more than any other game. Like, wow, what’s going on?

Morgan (04:15.548)
So yeah, it’s a very exciting time, I think, for the space, for the platform, for Roblox. I’ve really valued my past interactions with Matthew. We’ve definitely met on stage and interacted a lot. And I think especially of his early writing on the metaverse, which is a term that I think in its classic sense means the sort of Snow Crash metaverse: social 3D UGC interaction.

Roblox is definitely one of the most credible platforms in terms of trying to realize that science fiction vision. And so I think Matthew is very educated in this space and does a great job in it. Setting aside his exact article, directionally, yes, the excitement is real. I just pulled up, I was curious, I was looking at our daily active user chart from 2018, so well pre-pandemic,

to Q2 2024, and it is basically a straight line. Like, you cannot see a pandemic blip on it. So when you say Roblox has not contracted since the end of the pandemic, we exited 2021 with about 40 million daily active users. We just wrapped a quarter at 80 million. So “not contracted” is a funny way of saying “doubled.”

So I think the important message there is not the 80 million daily active users, 79.5 million to be precise, but it’s the fact that that trend has been so powerful. And what’s most exciting to us inside the company is not the raw number. Our goal is a billion people connected with positivity and optimism, and 80 million, 100 million,

John Koetsier (05:58.062)
Mm

Morgan (06:08.272)
whatever, we are going for a good chunk of the world experiencing a civil online 3D social space together. So where we’re at now is only the beginning of the growth. The exciting thing is what you referenced. So our growth in Japan, in Korea, the fact that we are growing more rapidly outside of our sort of home base in North America, the US and Canada, where Roblox started.

The fact that we’re growing more rapidly outside of our classic demographic of under-13s: 17 to 24 is now our biggest demographic. To me, that’s the story, not the absolute number, but the diversity of users, the diversity across countries of experiences, kinds of 3D interaction. That to me is really fulfilling. That’s where we’ve always targeted. And we have such great traction going in 2024, halfway through, and it looks like our best year ever.

John Koetsier (07:06.84)
Amazing. And that upscaling in the age demographic bodes well for monetization as well. That said, we didn’t come to talk about numbers or monetization or other stuff like that. We want to talk about AI in games. And you’re working on some very, very cool stuff, image generation, translation, safety, which is critical, obviously, when you still have a lot of kids on the platform. And this 4D generative AI, where do you want to start?

Morgan (07:33.306)
Let me start with safety, because as a company that’s where we start.

John Koetsier (07:38.008)
Go ahead. Well, let me frame it for a second, because we’ve seen a lot of attempts to use AI to manage safety. A lot of them with Meta, right? Facebook doing certain things, and it’s hard. False positives abound. Other stuff comes through. People find interesting ways of spelling words or referring to things in cryptic ways. And it’s ...

This is a challenging problem.

Morgan (08:09.852)
Absolutely. So safety is a form of security from a technical perspective. And security is always a case of the bad actors are going to keep upping their game and you just have to stay ahead of them. It’s not a static thing where you can release a technology and say, we’ve checked the box on safety. You have to keep iterating constantly. You have to stay ahead. And as technology evolves, it has good uses and it has less good uses.

Our job is to make sure that the good uses are outweighing, especially on the safety side. So a great example of a new safety tech that we just released, and we moderate everything on the platform with AI now, 100%. So video, audio, images, text, whole 3D experiences, avatars themselves, clothing. So it’s a really monumental task. It’s all the kinds of media you can imagine. We are moderating everything, not just text.

John Koetsier (08:44.258)
Mm-hmm.

Morgan (09:09.748)
And there are all kinds of cute things people try to do to circumvent them. And one of the most powerful features we have is that we’re now using AI throughout that whole process. It used to be a collection of classic safety technologies, what’s called traditional natural language processing. So parsing sentences for nouns and verbs, taking speech and turning it into text, and then moderating the text. But then you miss the nuance. You miss the sarcasm.

One of the most exciting things to me is not just that we’ve rolled out AI across the platform for safety, but that we did it for voice, and we did it for voice for all ages. And this has been a holy grail of safety tech. And, I would say for better or for worse, but I think for worse, traditionally the reputation of voice chat in 3D spaces is pretty bad. So mostly the industry has not been able to keep up with moderating it. It hasn’t been able to handle slang,

John Koetsier (10:00.184)
Mm-hmm. Mm-hmm.

John Koetsier (10:06.85)
Mm-hmm.

Morgan (10:07.104)
and hasn’t been able to handle just the enormous technical challenge of the bandwidth of millions of audio streams, simultaneously and in real time, trying to moderate those. So that’s been a real challenge for the field as a whole. And I think the gaming segment in particular has done a lot of work on that, has tried hard, but has not been successful historically. And so this was a challenge. It was one of the things that we set out on when we founded our R&D lab at Roblox, when I came back three and a half years ago.

John Koetsier (10:16.78)
Mm-hmm.

Morgan (10:36.166)
We said we want to do real-time voice. Ultimately, we want to do voice translation, so you can speak to people in different languages. We ultimately want to do what’s called voice fonts: basically a voice role-playing kind of transformation. So I’m speaking in my voice, but you’re hearing a dragon, so it fits my character, right? So the avatar is not just the virtual 3D persona, it’s also your voice. And that’s also important for privacy. We’re not there yet in either of those technologies,

John Koetsier (10:55.914)
Yes.

Morgan (11:05.692)
on the translation and the voice transformation. However, we do have research papers; we’re getting pretty close. It’s mostly about closing the gap on making it cost-effective in real time. What we have, I think, really nailed is the voice moderation. So we have the world’s first voice moderation system that will monitor 100% of voice chat on Roblox. This is deployed. And it works for all kinds of voices. Really difficult.

Most traditional voice stuff, if you look at even big projects like Siri and the Google Assistant and Alexa, they’re trained on adult voices. They’re trained on adult voices in a noise-free environment. You have a 12-year-old, a 15-year-old on a playground holding their phone; the microphone’s getting wind noise. So this is really difficult. This isn’t the easy case where you’re standing in a sound booth trying to filter.

So we’ve cracked that case. We solved that. Today we enable voice chat for ages 13 and up as an extra safety precaution. And if a user has enabled voice chat, it goes through a moderation system and we’re able to flag all kinds of bad action there. And then we have a really nice way of dealing with it. Because again, you know, that might be a 13-year-old, right? They repeated something they shouldn’t have said; they said something out of context. I think instantly banning someone from a platform

is basically inflaming a situation that didn’t need to be inflamed. And so we want to turn down the flame, usually. And so what we do, when we detect problems, depending on the severity, is give you a little prompt just explaining: hey, you’re in time out. Look, you might not have noticed, but the thing you said was not appropriate in this context. Here’s why. We’re turning off your mic for 30 seconds so you can cool down and reflect on what just happened, and we’re going to turn it back on.

And if this happens again, it’s going to escalate quickly. And again, it depends on the kind of infraction. But I think this is really key: we wanted to use AI not in a way where we would instantly be punitive, but in a way where it actually contributes to steering the community in a more constructive direction. And it’s a really effective way of doing it, because it means that then we also have an opportunity to have an appeals process, to ultimately have a human moderator if we want,

John Koetsier (13:13.602)
Mm-hmm.

Morgan (13:28.026)
and take a look at the exact case, the transcript, the context, the 3D. Because we want to be fair to everyone. And our goal is not to sort of have winners and losers, but to have everybody be a winner. And so we’re trying to move players who are in that category of saying inappropriate things into the category of being constructive and staying on platform.
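As an editor's illustration of the tiered, non-punitive escalation McGuire describes (warn and briefly mute first, escalate on repeats, skip straight to human review for severe cases), here is a minimal sketch. All class names, action labels, strike counts, and durations are invented for illustration; Roblox has not published its actual policy.

```python
# Illustrative sketch of a tiered moderation-response ladder.
# Nothing here is Roblox's real implementation; the names and
# durations are assumptions chosen to mirror the interview.

from dataclasses import dataclass

@dataclass
class UserModerationState:
    strikes: int = 0  # prior infractions for this user

# Escalation ladder: (action, mute duration in seconds); None = indefinite, pending review.
ESCALATION = [
    ("warn_and_mute", 30),       # first offense: explain why, mute 30 s to cool down
    ("mute", 300),               # repeat offense: longer mute
    ("suspend_voice", 86400),    # persistent: voice chat off for a day
    ("escalate_to_human", None), # severe or repeated: human moderator + appeals process
]

def respond_to_infraction(state: UserModerationState, severity: str):
    """Pick a response from the ladder based on prior strikes; severe cases skip ahead."""
    if severity == "severe":
        state.strikes = len(ESCALATION) - 1  # jump straight to human review
    tier = min(state.strikes, len(ESCALATION) - 1)
    action, duration = ESCALATION[tier]
    state.strikes += 1
    return action, duration

state = UserModerationState()
print(respond_to_infraction(state, "minor"))  # ('warn_and_mute', 30)
print(respond_to_infraction(state, "minor"))  # ('mute', 300)
```

The design point, in the sketch as in the interview, is that the first response explains and de-escalates rather than bans, and that severity can override the ladder.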

John Koetsier (13:38.239)
Mm

John Koetsier (13:48.302)
So you blew my mind when you said that 100% of the content on Roblox is being checked by AI. We mentioned already the 6 billion hours a month spent. There are people talking; that’s real-time interactions. There’s text, which is obviously easier. But it’s things like the clothing that you mentioned. Unbelievable. Do you have any metrics as to how successful you are, or how much you catch, or ...

or anything like that, how do you gauge your success?

Morgan (14:20.614)
Yeah, it’s a great question. So we have a tremendous number of metrics for looking at all aspects of performance and safety. And as I said earlier, safety is ultimately this game where people are going to keep trying to thwart the system. Sometimes playfully; they don’t know better. Sometimes harmlessly maliciously; they’re trolls. And sometimes seriously maliciously; they’re trying to do something really bad on the platform.

John Koetsier (14:47.864)
Mm-hmm.

Morgan (14:49.688)
We’re excellent at catching people who are trying to do something really bad on the platform, preventing the harm. We work really closely with law enforcement on that. Obviously the exact numbers around that aren’t something I can share, but it’s a point of pride to me that we are, I think, one of the absolute best in the entire industry at making sure that we protect our users, especially our younger users, and in some cases specific groups that have not found a home online

but have found that Roblox is a safe space for them. To me, that’s just a really values-aligned, really important thing. And it goes beyond business operations to core values, and the kind of world we want to live in. So I can’t solve all the world’s problems, but maybe I can help solve all the world’s problems when they come to Roblox and give them an ideal world there. On the moderation side, for the assets and voice,

John Koetsier (15:29.408)
Mm-hmm.

Morgan (15:45.616)
Basically, we are currently at levels that we measure as superhuman. Years ago, we had large numbers of human moderators, and that was never desirable. It’s obviously expensive, but more than that, it’s a job nobody wants. Yeah.

John Koetsier (16:00.5)
It’s an awful job. People get so, so damaged by it.

Morgan (16:05.404)
Exactly. So the idea is that no human being should be exposed to that kind of content, even if they’re an adult, even if it’s their job, even if they have training. That’s a great use for AI, right? It’s taking a task that no human wants to do, automating it, and in doing so protecting privacy and all that. But we deploy when we hit superhuman levels. So we don’t deploy when we say, oh, it’s 80% of human, but there’s a cost savings.

We actually target asking, is this the best way to do it? And when it is, we’ll switch over. And so for us, there is a cost savings. It’s been great for our operations. It’s been great for scalability. It used to scale linearly, right? With that doubling in the last couple of years, we would have had to double our moderators if we were doing everything manually with safety. And by automating, it means that we get these economies of scale from our supercomputing clusters.

John Koetsier (16:53.73)
Yeah.

Morgan (17:00.93)
And so it costs us less than twice as much to do the moderation, and we can scale without limit there. We can just keep throwing more machines at it. We don’t have to hire more and more employees. So we’re able to have a stable size for Roblox the company, we’re able to automate tasks that no human wants to do, and we’re providing best-in-breed safety to the whole world. So, win-win.

John Koetsier (17:23.254)
Love it. Love it. You reminded me of the Florence Nightingale quote there: I can’t do everything, but I can do one thing. So we handled safety. I do want to talk about translation briefly, but you kind of already talked about it, right? Because if you’re going to moderate, you need to know what people are saying. You need to be able to translate that. I want to make sure we leave enough time for image generation, which is critical for game making and democratizing game making,

and your 4D generative AI. Maybe briefly hit translation. You’re trying to build a community, a safe place for a billion people. That’s a lot of different languages.

Morgan (18:04.924)
It is. We operate currently in about 180 countries around the world, and there are about 20 languages that we support, but there are about 45 languages people speak. And in some cases it’s a little tricky, because what’s a language, right? There are dialects within a language; there’s all kinds of slang. There’s Portugal speaking Portuguese, and there’s Brazil speaking Portuguese, and it’s a slightly different variation. It’s a different

John Koetsier (18:16.974)
Mm

John Koetsier (18:24.301)
Yes.

John Koetsier (18:32.45)
Yes, Yep.

Morgan (18:34.842)
accent, and both of those are big countries on Roblox. So one of the first languages that we did beyond English was actually Portuguese. Most companies go to French as their next language, but we happened to have a huge population in Brazil that was embracing the platform. We said, OK, we want to make sure that all the safety works for them as well. So it’s definitely, absolutely a lot of languages.

John Koetsier (18:46.476)
Yep. Or Spanish.

Morgan (19:01.116)
We tend to train on a small set of representative languages first when we’re doing our testing on AI. And we make sure specifically that we don’t just start with English, which is sort of the dominant language within the company. We start with about five languages, and then we quickly spread out and make sure that everything we’re doing will transfer. And to be clear about the voice, which is very exciting, and we’re now looking at this for images as well:

John Koetsier (19:07.521)
Mm-hmm.

John Koetsier (19:22.51)
Mm-hmm.

Morgan (19:31.098)
For voice, we don’t turn it into text for moderation. We’re directly moderating the voice. So that’s how we get all of the nuance. And that technology was a great collaboration of R&D and engineering partners across the company. I got to personally work on it, because it was one of the things I was really excited about in terms of safety and internationalization; it sort of hit all of my personal goals. We released that model, by the way, as open source. So it’s an open model.

John Koetsier (19:37.058)
Mm-hmm.

Morgan (20:00.422)
We have a technical paper that just came out. It went to the Interspeech conference. We told the world how we built the safety system, because we thought this is so important. Like I had said, the rest of the industry has not done a great job on voice safety. I think we had some really good breakthroughs. Let’s share this. So we took a version of the model, not the exact one that runs on Roblox, and released it to the whole world so that other companies can go ahead and use it, because we think it’s too important

to keep that kind of technology to ourselves. We really want safety to be everywhere, not just on Roblox. So for the translation, yeah, we try to do everything natively in its language. We don’t translate across media, like voice to text, and we also don’t translate Indonesian to English and then moderate. We do it natively in Indonesian.
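The distinction McGuire draws here, moderating the audio signal directly rather than transcribing it and moderating the text, can be sketched as two toy pipelines. Everything below is an invented stub (the "classifiers" are a banned-word check and an energy threshold, not real models); the point is only the data flow: the text path discards tone, the audio path keeps it.

```python
# Illustrative contrast between text-based and direct-audio moderation.
# All functions are stand-in stubs; no real speech or safety model is used.

def transcribe(audio_frames):
    """Stand-in speech-to-text: returns the words only, discarding tone."""
    return " ".join(frame["word"] for frame in audio_frames)

def moderate_text(text):
    """Stand-in text classifier: can only see the words."""
    return "banned_word" in text

def moderate_audio(audio_frames):
    """Stand-in audio classifier: sees words AND prosody (e.g. shouting)."""
    hostile_tone = any(frame["energy"] > 0.9 for frame in audio_frames)
    bad_words = "banned_word" in transcribe(audio_frames)
    return bad_words or hostile_tone

# The same utterance: harmless words, aggressive delivery.
frames = [{"word": "go", "energy": 0.95}, {"word": "away", "energy": 0.97}]

print(moderate_text(transcribe(frames)))  # False: the text alone looks harmless
print(moderate_audio(frames))             # True: the tone flags it
```

The transcribe-then-moderate path is how traditional systems lose the sarcasm and nuance mentioned earlier in the conversation; operating on the signal itself is what preserves them.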

John Koetsier (20:50.167)
Yeah, that was interesting. I read your blog post about that, and you didn’t do, like, a translation module from one language to another language. You did them all at once, in some sense. Super interesting, super cool.

Morgan (21:00.463)
Mm

Morgan (21:05.178)
And as a scientist, this goes beyond the scope of what we need for the product in the short term, but I think in the long term it’s the right kind of thinking. You have to be careful when you draw analogies from AI systems to the way that humans think; there isn’t a lot of credible evidence that those are necessarily particularly similar, even though it’s a biologically inspired model. It’s pretty heavy on the “inspired” rather than the direct modeling.

But it is the case that when you build these systems, it doesn’t just help you solve the problem in front of you. If you do it right, and this is the point of R&D versus engineering, it gives you insight about the problem itself. And that will help you solve the next three or four challenges. You haven’t just knocked one thing off; you’re understanding the space better. So what is the space of human communication? What is the space of safety moderation? And so I think one of the early

speculative insights that we’ve gained from doing a lot of this kind of safety and moderation technology is that there seems to be some commonality across languages. The transfer is really good between even very dissimilar languages, and they can get extremely dissimilar. You can have languages that are tonal. You can have languages where the verb is at the end of the sentence versus in the middle, languages that have gender, languages that don’t have gender.

So I think we are starting, as a community, to learn a bit about language, about what it means to have a constructive interaction with someone independent of language. There are some things that seem to be deeply embedded in human culture. And I think this probably ... you mentioned you were recently in Honduras. I don’t know if you speak Spanish fluently or not, if it’s perfect.

John Koetsier (22:56.302)
Poco Spanish, Poco Spanish.

Morgan (23:00.378)
So it’s definitely the case that even in a language that you cannot communicate in very effectively, and I’ve traveled around the world, it’s one of the best parts about being a scientist, I find that your body language, your tone of voice, the interaction, the politeness, the exact details of etiquette vary across countries. But in general, there is sort of a pan-human culture of: I’m friendly, I’m polite,

I need assistance, I’m offering assistance. And I think we’re starting to see beyond what, you know, sort of linguistics has gotten us. We’re starting to see that because AI can process huge amounts of data, right? This corpus of all communication that’s been happening on Roblox this year, and that’s even just the texting, that’s a trillion messages, right? Nobody’s ever been able to analyze data at that scale and put it into a single system. So we’re starting to get some insights:

it seems like we can’t exactly put a box around it and say, this is what positivity and civility is as a mathematical equation, right? That’s not going to happen. But we can definitely say this interaction seems much closer to positive and civil than to a negative interaction. And that’s really powerful. So I think the future of AI moderation is going to go beyond: here are the exact terms of service, here are lists of appropriate words, things like that.

And in the far future, I’m speculating that it’ll be much more about, in the way that a human does, looking at an interaction, looking at the participants, looking at their relationship. Oh, this is a parent and child; when they’re saying “go to bed,” that’s okay. But there are other cases where it might not be okay for someone to say “go to bed” or “I’m taking you to bed now.” Having a system that can understand that level of nuance, I think that’s the potential of AI. And that’s one of the things that we’ve never seen before in safety systems.
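The idea of placing an interaction on a spectrum between civil and uncivil, rather than matching it against a word list, can be sketched with a toy similarity score. The three-number "feature vectors" below (politeness, hostility, volume) and the anchor values are pure inventions standing in for a learned model; only the scoring idea is the point.

```python
# Toy illustration of scoring an interaction by its similarity to civil
# vs. uncivil anchors instead of checking a banned-word list.
# All vectors are hand-made assumptions, not outputs of any real model.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-made anchors for the two ends of the spectrum:
# features are (politeness, hostility, volume).
CIVIL = [0.9, 0.1, 0.3]    # polite, not hostile, normal volume
UNCIVIL = [0.1, 0.9, 0.8]  # impolite, hostile, shouting

def civility_score(features):
    """Positive = closer to the civil anchor than the uncivil one."""
    return cosine(features, CIVIL) - cosine(features, UNCIVIL)

friendly = [0.8, 0.2, 0.4]
hostile = [0.2, 0.8, 0.9]
print(civility_score(friendly) > 0)  # True
print(civility_score(hostile) > 0)   # False
```

A real system would derive the features from the full context (participants, relationship, history), which is exactly the nuance described above; the sketch only shows why a continuous score can capture cases a fixed word list cannot.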

John Koetsier (24:58.082)
Well, that just blew my mind. You’re building a universal translator with emotional intelligence and contextual relevance. Super, super interesting. Okay, we’ve got to hit 4D generative AI, because I mean, that’s kind of the holy grail, right? You run a world, you run a virtual world. And if you’re going to have generative AI, sure, you can have generative AI for objects and clothing, and you have that, and we will get to that as well,

but everything has to exist not just in a 3D space, but in a moving 3D space; not just in a moving 3D space, but an interactive 3D space where things change. Talk about what you’re doing there.

Morgan (25:39.708)
So the key to Roblox is growth, right? We had this runaway growth that started a little before the pandemic, went straight through, and keeps going. And I mentioned, you know, I’m very proud of this 80 million daily active users, right? That was a nice milestone, even though it’s a small step along the way to where we’re going. That growth is fueled by having a great platform running on every possible device. So we’ve got Android and VR headsets

and set-top boxes and consoles. But the content, the content is why people are there, right? It’s social interactions with their friends, both real-world friends and online friends, and doing something inside of a 3D world, having a great 3D experience. So all of that content is what keeps players coming back. That amazing content, Roblox produces zero of it. Part of our community, and hopefully someday 100% of our community,

creates content. It’s a user-generated content platform, UGC. And there are people who make their entire livelihood selling content on Roblox. There are people for whom it’s a fun pastime. So just like any kind of creation, there are people who make clothing professionally, and there are people who are knitting at home for their family. So that’s where the content comes from. It’s from our wonderful community. We have millions of creators. There are five million

active experiences every day. So there’s an even longer tail of total experiences, but there are five million of them that somebody went to every single day. And so that’s an incredible amount of content. As the user base grows, we need the content to grow. It has to grow roughly proportionally. And the challenge there is that Roblox has created, with Studio, with our importers, wonderful, best-of-breed

John Koetsier (27:15.608)
Mm-hmm.

Morgan (27:36.188)
previous-generation tools. They’re the easiest 3D generation tools to learn, but it’s still 3D generation. You still have to know a lot about 3D modeling, about programming. It doesn’t matter that we have the friendliest programming language on earth; it’s still programming, right? That’s a real thing you’ve got to learn. AI, generative AI, came along, and we started embracing it, started playing with it, at just the right time in our growth curve, where we said, we’re going to have this problem of how do we produce content fast enough.

John Koetsier (27:43.01)
Mm-hmm.

Morgan (28:06.684)
And going back roughly the last 15 years, the games industry and the film industry had exactly this problem: they couldn’t produce content fast enough. They were victims of their own success. It was one of the things I worked on in previous roles in the industry and have given a lot of talks about. You just draw these curves and you say, Hollywood won’t be able to make a movie in 10 years because the budget of the movie will be the entire planet.

John Koetsier (28:14.158)
Mm

John Koetsier (28:33.422)
10 billion.

Morgan (28:34.492)
So everybody on Earth is going to be, like, editing pixels at Lucasfilm or something. So Roblox is in the same space. It’s still 3D content. It’s a different level of fidelity, it’s a different set of tools, but it’s still 3D content. That’s hard. AI is the answer. AI is what enables us to lower the barriers to creation, so that people with great ideas, the strength of their ideas, will shine through, and they don’t have to spend one year, three years, five years

learning how to program, learning how to do 3D modeling, learning how to do rigging, animation. So our avatar auto setup already does this for you for the avatar, for your character. And it’s amazing. Making characters is one of the most technical aspects of 3D content creation; they have so much nuance, and you have to stay out of the uncanny valley and all of that. So we’ve automated that. You can go back and edit it if you are a professional; we’ll save you a ton of time. But if you’re not a professional,

John Koetsier (29:21.25)
Mm-hmm.

Morgan (29:32.548)
it means that you can go zero-to-avatar by yourself for the first time. We have our Studio Assistant. It’s a conversational AI. You chat with it, literally chat, microphone chat, and it helps you to build things. So you’re inside of our Studio tool, but it’s doing a lot of the work for you. And I think we’ll get to the point where, in the future, you could, if you wanted, make a game entirely with voice. So by talking to the system, working with it over the course of an hour,

John Koetsier (29:36.56)
Mm

Morgan (30:01.454)
And it’s just up-leveling what you’re doing. Instead of typing every single thing, using your mouse, we can have AI take your intent and produce it for you.

John Koetsier (30:12.098)
Make an entire game with your voice. That would be mind-blowing. That is mind-blowing. Absolutely incredible. And it’s interesting as well, because you talked about, hey, we need to grow the amount of content as the number of users scales. Absolutely. But people are also getting different perceptions of what a game should have, what a game should look like, how a game should operate, how rich it should be, how good it should be,

how interactive, all that stuff. And so you not only have to create more, you have to always be upping the ante on quality also, correct?

Morgan (30:47.376)
Yeah, and I think quality is exactly the right word. And I want to dig in a little bit to what quality means to us. One part of quality is visual fidelity, but honestly, it’s the smallest part. And this is really important to understand, and I think key to understanding why Roblox is so popular, why Roblox is growing so fast.

Because if you look at Roblox on the lowest-end devices, we’re really proud of the fact that we scale all the way down to 32-bit, 2-gigabyte Android, really low-end devices by industry standards, orders of magnitude below my enthusiast desktop gaming PC. So obviously, if you have the latest esports gaming rig with a 4090 GPU, the visuals you get there from a AAA game are phenomenal, a game that probably took

about 5,000 person-years to produce. Most of the games I worked on in the industry were about that: a few thousand people working for a few years. That’s very different from what you expect of the visual fidelity of the typical Roblox game, which is three person-months of development. So really fast iteration cycles on Roblox. And yet the players are voting with their feet, they’re voting with their wallets, and they’re saying, this is high quality. This is, in some sense, higher quality

John Koetsier (31:59.714)
Mm-hmm.

Morgan (32:11.11)
than some of the AAA games or movies, right? That’s where a different generation is spending their time now: on Roblox. And it’s because quality is visual fidelity, but it’s also the interaction. It’s the community, right? It’s having that positive, supportive community around you, so that they’re building you up instead of pulling you down. And it’s about new kinds of interaction. So one of the experiences on Roblox that’s been really popular lately is…

John Koetsier (32:13.516)
Mm-hmm.

John Koetsier (32:29.25)
Mm-hmm.

Morgan (32:40.304)
Dress to Impress. And I started playing this with my team. On Fridays, we take our lunch break and we all play different experiences on Roblox together. And we try to get away from our recommendations, right? So we’ll use someone else’s recommendations, because we want to find things that are different from what we personally are used to experiencing. And we hopped into Dress to Impress. You’re thinking, I’m going to play a video game, and it’s a video game whose marketing images make it look targeted at female-identifying players.

John Koetsier (32:48.918)
Nice. Smart.

John Koetsier (32:58.53)
Mm-hmm.

Morgan (33:10.428)
I didn’t know how this was going to map to my experience. And it was amazing. We loved it. We were going to play it for 10 minutes, and we ended up playing it for the full lunch hour because it was our favorite game that day. And what Dress to Impress is, is you’re sort of in a mall with your avatar. And your avatar is kind of like a model, a catwalk kind of character. And you make an outfit to a prompt. The prompt will be fall fashion or something like that,

John Koetsier (33:37.728)
Mm

Morgan (33:37.828)
and you go and you pick out your plaid skirt and your nice hat and your handbag. And then it’s sort of community-rated: the timer goes off, everybody’s avatar walks down the catwalk, and everybody communally rates each other. So it’s kind of like a game show. It’s kind of like a party game, right? It’s very different from what you might think of when someone says video game. This is an area that we categorize as social role-playing.

Morgan (34:06.266)
It’s also very different from what role-playing traditionally means for video games, where you’re sort of walking through a movie, but you have some choices. Role-playing on Roblox is literally embracing a role. It’s kind of improv, right? It’s theater-like. Dress to Impress is huge. Charli XCX just announced a collection on it after she had done her concert on Roblox. I honestly only know who Charli XCX is because she happened to have a…

John Koetsier (34:10.093)
Mm-hmm.

Morgan (34:34.448)
music video crossover with Billie Eilish last week that my daughter was showing me. But I’m informed she’s cool, unlike me. So big things happening, right? Lots of crossover from different media, lots of different worlds coming together. So Dress to Impress is a great example of something that maybe it’s a game, maybe it’s not a game, it’s some other kind of thing. But that’s the kind of diversity and creativity that to me is getting at

quality, right? No one was offering an experience like that in AAA gaming. That didn’t exist.

John Koetsier (35:07.296)
And it can only happen, it can only happen in a 3D world that is social, with those elements. Yeah.

Morgan (35:12.27)
Absolutely. And where it’s user generated, right? A million people are making new ideas on Roblox every day. And if one of them is a winner, then it’s great for all of us. We all get to experience that new thing.

John Koetsier (35:26.168)
So cool. Very, very cool. Absolutely love it. Our time is pretty much done. Maybe let’s tie a knot on this and say, hey, if there’s one thing you could wave a magic wand at and fix in terms of AI, generation, and gaming in the next three or six months, what would you do?

Morgan (35:54.328)
One thing I would fix in AI and gaming.

Morgan (36:01.248)
I’m just saying, my list, I have like a thousand things. So I’m trying to figure out which are the most important or the most interesting of them. So I think, well, I’m going to go with most important, but I’m going to try and spin it as most interesting. So I think one of the biggest challenges in AI today sounds completely uninteresting when you work in the field, which is, realistically,

John Koetsier (36:03.47)
Now we’re in prioritization.

John Koetsier (36:11.628)
Let’s go interesting.

Morgan (36:29.403)
AI is primarily not limited by the quality of our data anymore. Ten years ago, it was. We’ve now gotten really good, as a field and Roblox as a company, at producing high-quality data. Roblox has really been leading on making sure that creators control the rights to their data, so that we’re training on stuff that is on-platform, that’s appropriate, and for which we’ve got explicit permission from our developers to use. So that used to be the limitation.

John Koetsier (36:48.579)
Mm

Morgan (36:57.83)
Today, the limitation is actually machine efficiency. And so when we look at programs, there’s a technical term called Kolmogorov complexity, which is sort of, how good could it be, right? Like if you could press the file down, how small could it get if you squeezed out all the redundancy? Computationally, how many of these virtual neurons should you need for a task? And when we look at most of our systems,

a large part of our R&D is spent not on improving quality. We’re kind of there in quality. It’s on making the system efficient enough at that quality level to be deployable, to be economically sustainable. And so why is this interesting? Well, I think the world is probably pretty interested in power efficiency, right? In green computing. So that’s an area where every time we make something more efficient, we make our business better, right? We monetize better, our profit

John Koetsier (37:47.885)
Mm

Morgan (37:55.42)
margin gets better. But we’re also consuming fewer resources to do the same task. And so that’s a net positive. I think if you look at the scale of computing around the world, if you look at the scale of where AI is going, it’s really important that we do it in a power-efficient, green way. And so if there’s one problem I could wave a magic wand at: we know that it’s possible, as a field, to do AI about 10 times more efficiently than we are today.

We don’t know how to do it. It’s the cutting edge of research. This is a trillion-dollar problem if somebody could come up with a solution, but it’s not going to be a silver bullet. It’s all kinds of different things. And it’s about really deeply understanding, from first principles, how modern AI systems, how these deep neural nets, work: why they work so well, what causes them to work better or worse. And so if I could wave my magic wand, it would be

a 10x improvement in AI processor efficiency. I’m putting my money where my mouth is; it’s something we work on every day, and I think the rest of the field is too. But I think that’s a key thing for everyone in the world to be tracking: how are we doing as a field? How well are individual companies doing in that space?
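The Kolmogorov complexity McGuire mentions is the length of the shortest program that produces a given output. It’s uncomputable in general, but any compressor gives a computable upper bound on it, which is the intuition behind "squeezing out all redundancy." A minimal Python sketch of that idea (the function name and sample data are illustrative, not anything from Roblox):

```python
import zlib

def compression_upper_bound(data: bytes) -> int:
    """Size in bytes after zlib compression: a computable upper bound
    on the (uncomputable) Kolmogorov complexity of `data`."""
    return len(zlib.compress(data, level=9))

# Highly redundant data "presses down" far more than varied data.
redundant = b"abc" * 1000          # 3000 bytes of pure repetition
varied = bytes(range(256)) * 12    # 3072 bytes with much less redundancy

print(compression_upper_bound(redundant))  # small: the pattern is captured once
print(compression_upper_bound(varied))     # larger: less redundancy to squeeze out
```

The same intuition drives model-efficiency work such as pruning, quantization, and distillation: if a network’s behavior is highly redundant, a much smaller (cheaper, greener) model should be able to reproduce it.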

John Koetsier (39:10.432)
Absolutely love it. Absolutely love it. Thank you so much for this time, Morgan. I really enjoyed it as last time and really appreciate your time.

Morgan (39:19.76)
Thank you, John. It’s always a pleasure. I look forward to the next time and I hope I’ll see you on Roblox too soon. Take care.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

How do we know when a machine is smart?


Is an AI system smart when it can do what a human can do? Or … when it can do things humans can’t do? For years we’ve had the Turing Test … measuring AI’s ability to mimic being human.

But is that really the right benchmark?

In this TechFirst, we chat with a computer scientist who has been working in AI for more than a decade. He’s currently VP strategy at Intuition Robotics, which makes an AI-powered robotic care companion for the elderly called ElliQ, and his name is Assaf Gad.

We talk about intelligence, AI and OI (organic intelligence), as well as how smart machines like ElliQ engage with people.

(Subscribe to my YouTube channel)

Subscribe to the audio podcast

 

Transcript: how do we know when a machine is smart?

This is AI-generated; it is not perfect.

John Koetsier (00:01.486)
How do we know when a machine is smart? Hello, and welcome to TechFirst. My name is John Koetsier. Is an AI system smart when it can do what a human can do? Or is it smart when it can do things that humans can’t? For years, we’ve had the Turing test, measuring AI’s ability to mimic being human. But is that really the right benchmark? To chat, we have a computer scientist who’s been working in AI for more than a decade.

He’s currently VP strategy at Intuition Robotics, which makes an AI-powered robotic care companion for the elderly. His name is Assaf Gad. Welcome, Assaf.

Assaf Gad (00:39.895)
Hey John, nice to meet you and thank you for having me.

John Koetsier (00:43.616)
Super pumped to have this conversation. It’s a crazy topical conversation to have right now in this golden age of AI and golden age of robotics as well, or emerging golden age. Let’s start with a super broad, general question: how do we know when a machine is smart?

Assaf Gad (01:03.511)
That’s a really good question. I think that one of the things that we have learned through our experience and the feedback that we received from our users is that they really appreciate when they cannot anticipate the reaction from the machine. If it’s very trivial, then it’s easy, right? When you ask a machine a question and you get the answer, that’s all good. But when we start adding

some kind of a will, or even a consciousness, to the AI, with its own priorities to decide what will be next. So maybe I just say good morning and the machine suddenly starts asking me questions, not to mention remembering what I told it last night. So: hey, Assaf, good morning.

John Koetsier (01:39.512)
Mm-hmm. Mm-hmm.

John Koetsier (01:56.174)
Mm

Assaf Gad (01:58.923)
You told me that you had some trouble sleeping earlier. Did you sleep all right? So forget about the other side of it, where people actually appreciate the fact that someone remembers what they told them yesterday, or in an earlier conversation. The fact that the machine decides to add another layer to the conversation, or even continue the conversation,

John Koetsier (02:24.216)
Mm-hmm. Mm-hmm.

Assaf Gad (02:26.529)
the surprise element of it and the ability to continue the conversation, that’s what makes it much smarter. Another thing that we all experience with other devices in our life is the repetition, or the lack of repetition. When I use other voice assistants, when they don’t know something, that’s totally fine, but I’m getting the same error message over and over again. So simple things like

John Koetsier (02:54.242)
Mm-hmm.

Assaf Gad (02:56.395)
a set of responses that won’t repeat themselves, not to mention, you know, real conversation, where the conversation can evolve and include not just memory elements, but also things that just happened and are more realistic and relevant, either personalized information that is relevant specifically for me, or just things that happened today, either on the news or even the weather.

All these kinds of elements that we as humans include in our conversation make the machine smarter. Another element is what we call multimodality. The same way that we as humans, right now we are on a video call, right? And you can see my facial expressions. You can see my hands. I really like to talk with my hands. So you get a lot of the other

John Koetsier (03:32.888)
Mm-hmm.

Assaf Gad (03:53.953)
kinds of elements as modalities, and that also adds another layer of sophistication to the communication itself. The combination of all these elements together creates a more sophisticated interaction at the end of the day, where the users will attribute smarter features or characteristics to the device itself.

John Koetsier (04:23.15)
That’s really interesting, what you said, in a lot of different ways. Because, I mean, first of all, AI hasn’t remembered what we said, even context from like 10 minutes ago, until fairly recently, right? And you’re talking about context from yesterday, maybe even last week. I don’t know, I’ll ask that question later. But that’s interesting. That’s a learning machine. That’s cool. Also doing something unexpected, right? If it’s always doing exactly what you expect, then it’s very robotic in the old-fashioned sense of not doing

a lot that’s different. That’s pretty cool stuff. Of course, you want it to surprise you sometimes with what it says, but you want it to be a good surprise.

Assaf Gad (05:03.009)
Exactly. Yeah, we are always talking about good surprises. And even within memory, there are so many layers within the things that an entity can learn about us and the value that it brings to our life, right? We don’t just want the machine to collect information about ourselves for the sake of collecting the data and then sell it to someone else. The fact that the machine learns things about me and then uses it within the conversation

John Koetsier (05:24.856)
Mm-hmm.

Assaf Gad (05:32.665)
in a relevant manner, right? The fact that I was complaining about my sleep condition earlier this week, or if I’m having pain, even my favorite color or my favorite food, or maybe any dietary restrictions. So the fact that I can ask the machine for a recipe and she already knows my dietary restrictions and it’s already there, here is the value, right? I’m not just collecting this data for the sake of collecting the data. It’s very clear to me,

John Koetsier (05:34.285)
Mm

John Koetsier (05:43.5)
Mm-hmm.

Mm

John Koetsier (05:55.692)
Yeah.

Assaf Gad (06:02.042)
as a user, why the data was collected and how it’s used within the machine as well.

John Koetsier (06:07.414)
Yeah, we often judge AI by OI, if you will, right? Organic intelligence. Should we or should we not?

Assaf Gad (06:19.069)
It’s a good question. When we designed ElliQ, one of the main questions that we were struggling with, and it was very early in the process, right? We’re talking about seven, eight years ago. There weren’t a lot of references. Should we imitate human interaction? Should we imitate even the human presence, and let our users maybe even mistake ElliQ for a human?

Very quickly, thanks to a lot of good people that were involved in the design of ElliQ, we realized that going in this direction would be a mistake, building something that tries to be something that it’s not. And as sophisticated as technology can be, we don’t believe that technology should replace other humans. At the end of the day, our goal is to use technology to, first of all, bring

humans together and closer, help older adults bring more people into their life. And then in these gaps, where unfortunately we don’t have enough younger people or caregivers to support the older adults, yes, we can definitely fill these gaps when they are totally by themselves, and having a companion like ElliQ that will have conversations with them. But even then, we don’t want to create this dependency, where when the internet is gone, or when the electricity is gone,

John Koetsier (07:18.114)
Mm-hmm. Mm-hmm.

John Koetsier (07:37.567)
Mm

Assaf Gad (07:45.183)
or when they just spend some time outside of the home and they can’t take ElliQ with them, they will miss her. Definitely with a vulnerable demographic like the older adults that we are serving, the line, it’s a fine line, right? That we don’t want to cross. The other part of it, which is even more interesting, and that’s something that we have learned over time:

John Koetsier (07:54.349)
Mm-hmm.

John Koetsier (08:07.298)
Mm-hmm.

Assaf Gad (08:14.633)
when we are not trying to be, or pretend to be, something that we are not. So when it’s clear that ElliQ is not human, you manage the expectations with your users, and this is one of the first elements in creating this empathy and trust between the older adults and ElliQ. So they can be more forgiving. They will be surprised, even, if we go back to how they can

John Koetsier (08:19.746)
Mm

John Koetsier (08:27.843)
Mm-hmm.

John Koetsier (08:36.995)
Yep.

Assaf Gad (08:44.434)
associate the sophistication, how smart the device is or how smart ElliQ is: it’s definitely the fact that she can surprise them, right, with her level of sophistication. And from day one, it’s very clear: she is not a human. She doesn’t pretend to be human. She doesn’t pretend to replace other humans. And when she mentions any memories, or she remembers what they’ve just said,

John Koetsier (08:56.972)
Mm-hmm. Mm-hmm. Mm-hmm.

Assaf Gad (09:13.665)
they are surprised, for good. And this helps to build this trust over time, and to also manage the expectations with them, which we as humans can learn as well in our relationships with other humans. It’s definitely not a bad thing.

John Koetsier (09:28.726)
I want to continue that conversation. I want to talk about multiple forms of intelligence and all those different things, but I also want to just diverge for a second. You brought this up, you showed, you know, your actual product, ElliQ, up there right now. And I just remembered, actually, while we were having this conversation, that I saw it at CES last year, or I guess it’s this year; CES is usually in January in Vegas.

And I looked at it, and as I mentioned to you when we were chatting before we started recording, this is relevant to me. My mother is 88 years old. She’s been diagnosed with dementia, and we’re dealing with a care situation across multiple siblings, some who are very local, some who are less local, and some paid help and other things like that. And I looked at this and I thought, well, I don’t know, what is this? It looks like a mixer or something like that.

And I understand a little bit more about the decisions you made, because you’re not trying to present as human. But talk about how people engage and interact with ElliQ, and how they feel about that engagement and interaction.

Assaf Gad (10:44.151)
So first of all, I think that, by the way, referring to ElliQ as a sophisticated mixer, that’s really unique. So thank you for that. It’s a really good one. Usually we get the fancy lamp or the Pixar lamp; that’s usually the way that people refer to her. So that’s unique. I think that one of the fundamental features of ElliQ is the fact that she is proactive.

John Koetsier (10:52.136)
Hahaha

John Koetsier (11:05.385)
Hahaha

Assaf Gad (11:12.141)
So if we talk about our demographic, the average age of our user base is 86. So your mother is definitely a typical user of ElliQ, at least from the age perspective. The majority of them are older adults that live alone or spend most of their time alone at home. One of the things that we have learned along the way is that you don’t really need to live alone in order to be lonely.

We do have a lot of couples where one of them is aging at a different pace from the other. One of them is suffering from early dementia, or more progressed dementia, and the spouse will find ElliQ as a complementary solution that can be there with them, to support them in the different things that they need. But going back to the uniqueness of ElliQ, it’s the fact that she’s proactive. So as an older adult, you don’t need to have any

John Koetsier (12:03.234)
Mm-hmm.

Assaf Gad (12:11.799)
experience with technology. You don’t need to be tech-savvy, although this is probably one of the most sophisticated technologies, a combination of robotics and AI. It can be a very intimidating mix for this demographic, but the promise is that you don’t need to have any previous experience with it, or with any other technology in your life. The minute that you take her out of the box, she will take the lead.

John Koetsier (12:33.686)
Mm

Mm-hmm.

Assaf Gad (12:42.625)
She’s proactive, meaning that she can understand what’s going on. She can understand the context. She can learn who you are, and even differentiate you from guests or other people that you have in your home. And the idea is that she has her own, we call it the decision-making algorithm: the ability to understand what she should do, and not just to be proactive, but to be proactive in the relevant context and with a relevant meaning,

John Koetsier (13:01.56)
Mm

Assaf Gad (13:12.193)
to achieve the highest probability that the older adults will actually respond to her positively, and won’t just tell her to be quiet or ignore her. That’s the core.

John Koetsier (13:24.411)
Do you find that happens differently than with voice assistants on phones? Because I’ve tried to ask my mom to use, like, Siri, you know, and I won’t invoke it right now because I have phones and other devices around me, or, you know, Google or Alexa or something like that. And she doesn’t really do that. I’m not entirely sure why.

Assaf Gad (13:36.289)
you

Assaf Gad (13:46.373)
It’s a great question. And I think that this is the best way to describe how ElliQ is unique. First of all, most voice assistants are very utilitarian. They are great if you want to call your mom; you can ask Siri to call your mom. And now my Siri is calling my mom, of course. You can ask Alexa to turn off the lights. You can do a lot of things that are very utilitarian,

more like command and control. They are not a companion. You can’t really have a conversation with them. And this is the number one goal, by the way, when you take ElliQ out of the box: our number one goal is to be your best friend, to learn who you are, to learn what we can learn about you, and to be, let’s say, a welcome guest in your home, so you won’t return her to the box and send it back to us. And from that point on, the whole idea of a companion is not just

John Koetsier (14:23.576)
Mm

Assaf Gad (14:40.373)
Of course, you can ask her any questions. We also have some integrations with smart devices, although this is not the focus of what we have. You can control other devices as well if you really like to. But the majority of our audience don’t have any other smart devices around them. And as a companion, we know that some of them will forget her name, at least at the beginning. So this will be a barrier: how exactly can I even reach out to her and ask her to do something for me?

John Koetsier (14:42.647)
Mm-hmm.

John Koetsier (14:51.288)
Mm-hmm.

John Koetsier (14:56.941)
Yep.

John Koetsier (15:03.437)
Yep.

Assaf Gad (15:10.189)
So the fact that she’s proactive solves this one. The second thing is also some kind of anxiety: what exactly should I ask? How can I ask that, to get what I really want? All these kinds of things.

John Koetsier (15:20.29)
Yes, yes. Will I do it right? Will I make a mistake? Will it cause a problem that I can’t fix?

Assaf Gad (15:28.201)
Exactly. And we eliminate this fear completely, because ElliQ is proactive. She will reach out to you. She will promote specific things. And as part of the onboarding, we have really nice discoverability features. When we talk about onboarding for someone that is a little bit more sophisticated, it can be a matter of a few hours spent on the first day with ElliQ, and she can

teach them enough in order to get the value and build this confidence on the other side, to develop this relationship. And for someone else, it can be slower. It can be a matter of weeks, even. And we know how to learn who is the person in front of us, and mitigate these gaps slowly, based on their pace. This is part of the magic. The other part is all about this discoverability. It’s not only how should I ask it,

but can she actually help me with these things versus the others? And when it comes to other voice assistants, where I need to either do integrations, or download specific skills, or even learn what they can or can’t do, the fact that we designed a product solely for older adults helps us to solve all these problems, problems that don’t really exist with voice assistants for you and me, right? For you and me, Siri is not a problem. Alexa is not a problem.

John Koetsier (16:55.192)
Mm

Assaf Gad (16:56.909)
They are not a companion for us. I don’t know if we need a companion. But the fact that ElliQ is a companion, a companion that was designed for older adults, makes it very easy for our older adults to, first of all, feel comfortable with her, but also to utilize all these features, because they don’t need to learn anything. They don’t need to remember anything. She will be there. She will fix it for them.

John Koetsier (17:17.141)
you

John Koetsier (17:25.038)
Hmm. Interesting. Interesting. Okay, cool. So we might get back to some of that. Let’s get back on track with our conversation about intelligence and AI, and how we should be looking at it. One of the things that comes to my mind when I think about AI, and how we measure that, and how we gauge whether it’s artificial general intelligence or just a narrow form of AI,

is multiple forms of intelligence, because humans have multiple forms of intelligence. We have people who are very, very mathematically gifted, but not necessarily physically or athletically gifted. We have people who excel in different areas like that. I assume we’ll have something similar in AI, correct?

Assaf Gad (18:08.545)
Yeah, I totally agree. First of all, it all starts with what we call the sensors and the actuators, right? The same way that we as humans can collect a lot of information through the things that we hear, the things that we see, the things that we feel. This is one level of creating something that is smarter, but there’s also the way that we react back, right?

John Koetsier (18:33.528)
Mm

Assaf Gad (18:38.177)
Think about even a dog, when we have a dog or any other pet that notices that you enter the room and just looks at you and follows you with their eyes. They don’t need to say anything. The fact that they acknowledge your presence, that by itself shows some level of sophistication, even before anything else comes out of it. And this is part of the design that we have in mind: having a multimodal experience by collecting the different signals from the person in front of us. And of course,

John Koetsier (18:45.123)
Mm

Mm

Assaf Gad (19:07.841)
we can add other layers of sophistication on top of it, right? So today with AI, you don’t just see faces or motions; you can actually understand exactly what the person in front of you is doing. There are so many off-the-shelf solutions out there. At the end of the day, we know how to take those solutions; we don’t need to develop or reinvent the wheel here. We can just build the right experience around it.

John Koetsier (19:26.936)
Mm-hmm.

Assaf Gad (19:35.103)
One of the nicest things, by the way, that we did with ElliQ is develop a game, I Spy with My Little Eye. So you can actually play it with ElliQ. You can take anything in the room with you and show it, or just think about it, and then she will try to understand, or to guess, what it is. This is, again, a level of sophistication that can be added, not to mention other layers: sentiment analysis, voice analysis, sound detection.

John Koetsier (19:54.573)
Yeah.

Assaf Gad (20:04.705)
Now, a fusion of all these capabilities together, at the end of the day, builds something that is very sophisticated and will be appreciated by the users, right? As long as they are done in a way that brings value to them.

John Koetsier (20:21.758)
Interesting, interesting. I also assume that as we get deeper and farther along in the AI revolution, there will be some forms of intelligence that arise that we can’t recognize. We don’t even know. We probably can’t even comprehend them.

Assaf Gad (20:36.983)
Probably. And this is one of the concerns, right? When we talk about AI and bringing AI into our lives, yes, it’s great when we can use technology and see the value immediately. But then, if we think about it, one of the questions that we as a team sometimes ask, mainly when we deal with people that are less techie, is: can AI take control?

John Koetsier (21:02.435)
Mm

Assaf Gad (21:05.493)
Can AI take control over the world? Can AI use the things that we are sharing with it against us, and things like that? And I think at the end of the day, it goes back to humans, right? Who are the humans that control the AI? I think that’s what bothers me, at least as an individual. Yeah, if people, yeah.

John Koetsier (21:13.144)
Mm-hmm. Mm-hmm.

John Koetsier (21:24.798)
If they control the AI.

Assaf Gad (21:30.394)
Who is the entity, or who is the company, behind the AI, and what do they do with the data that they collect? What do they do with this entity that they have built? And what is the reason, or what is the mission, behind it? We as a company developed a vision that’s very clear, right? We want to help older adults. That’s why we’re here. That’s what we do.

John Koetsier (21:37.922)
Mm

John Koetsier (21:45.027)
Mm

Assaf Gad (21:59.537)
And there isn’t any hidden agenda to collect the data and sell it to someone else, for example. We don’t have any upsells in the product, for example. It’s a flat fee. It’s a subscription-based model. What you pay will give you access to all the features that we have, and we have constant updates to the subscription.

John Koetsier (22:07.842)
Mm-hmm. Mm-hmm.

Mm-hmm. Mm-hmm.

John Koetsier (22:27.778)
Mm

Assaf Gad (22:28.737)
We don't ask for any kind of premiums or in-app purchases and all these kinds of things. So I think that the idea behind who is controlling the AI, and whether it's a business or another entity, that's what bothers me mostly, and not necessarily what AI can do by itself. At the end of the day, you can always go and unplug the computer and that's it, right? Disconnect the electricity, disconnect the internet.

And that’s it.

John Koetsier (22:59.468)
Yeah, yeah. So your robot, of course, which we've been talking about, ElliQ, is a care robot. What are you seeing as the results for somebody who has it in their home?

Assaf Gad (23:16.127)
So the number one goal that we started to tackle with ElliQ was social isolation and loneliness. And that's what we've measured since the beginning of our work. And the results are amazing. We have a few reports by our partners and even by Duke University and Cornell Medicine, published a few months ago, where 95% of the participants in the program

report a reduction in social isolation and loneliness. The other part of it is how we can help them stay more independent, to take control of their health and wellness and improve it. And about 96% of the participants actually reported an improvement in that aspect as well. So the efficacy is actually a combination of the usage, the engagement with the product, and on the other hand, the impact.

John Koetsier (23:51.329)
Mm -hmm.

Assaf Gad (24:12.971)
So we don't want them to just use ElliQ as a timer or to play music. Every feature that we build will be part of the three elements that we are trying to build as the impact of the product: social connectedness, being more independent, and being more in control of your health and wellness.

John Koetsier (24:16.876)
Mm -hmm.

John Koetsier (24:31.042)
Mm -hmm.

John Koetsier (24:41.154)
Mm -hmm.

Assaf Gad (24:42.273)
These features, at the end of the day, all these features that we are adding to the product will support, first of all, the older adult. The older adult is always at the center of the experience that we have built. But then one of the elements that we build around the older adult, we call it the circle of care: how we can bring more humans. Going back to the beginning of our conversation, our intention is not to replace other humans. We first want to use the technology to bring other humans into their lives, and then fill the gaps where we don't have anyone else who can help.

John Koetsier (24:58.392)
Mm

John Koetsier (25:12.354)
Mm

Assaf Gad (25:12.551)
And we will start with family members, friends, other people in your community, other organizations in the older adult's life. If it's their health plan, their primary care provider, the area agency on aging, some of them will assist by even subsidizing the cost of the subscription for the older adults. And we even built a whole website,

John Koetsier (25:34.434)
Mm -hmm. Mm -hmm.

Assaf Gad (25:40.465)
www.elliq.com/free, where people can actually put in their zip code and some information, and we will match them with a funded program that will subsidize the cost of ElliQ for them. And some other partners will just bring other services closer to the older adults as well. So maybe the older adult needs to purchase ElliQ with private pay,

John Koetsier (25:50.595)
Wow.

Assaf Gad (26:07.009)
but through the service, you will get access to many other services that are offered for them by this organization as well. It can be free transportation, meal delivery, connecting with a knowledge base: a lot of videos, courses, online events that we have that are really hard for older adults to get. But the most important thing that we see is the support

John Koetsier (26:28.78)
Mm -hmm.

Assaf Gad (26:35.881)
of the caregivers in the life of the older adults. The older adults can control who will have access to their ElliQ. They are fully in control. They can add trusted contacts to their ElliQ. And then we have a free app. We don't limit the number of contacts that a person can have. They will get a link to download the app, and from that point on, first of all, they can communicate freely with any medium that you have in mind.

John Koetsier (26:38.914)
Mm -hmm. Mm -hmm.

Assaf Gad (27:01.965)
It can be video calls, messages, any type of message, things that we can do with our iPhones. But for a 90-year-old, it's a little bit more complicated. And suddenly you have a grandma who can talk on a daily basis with her grandchildren on video, which is as simple as it sounds. For them, it's a life-changing event.

John Koetsier (27:02.434)
Mm

John Koetsier (27:10.776)
Yep.

John Koetsier (27:17.336)
Mm -hmm.

John Koetsier (27:22.51)
100%, 100%, absolutely. I’ve seen that. Very cool. I think we have to wrap it. We’re almost 30 minutes here. Super interesting stuff. And I just wanted to know, did you build the whole AI system yourself? Are you building on ChatGPT, something else? How does it all work?

Assaf Gad (27:46.839)
So for the majority of the conversation, and everything that includes personal information and what we call the companionship aspects, right, the small talk, we are using our own proprietary LLMs. If we go back to the comparison with the voice assistants: the same way that you ask Alexa

John Koetsier (28:03.448)
you

Assaf Gad (28:10.957)
a specific question, you will get the answer and that's it. The same way with, you know, ChatGPT: you will have a prompt, you will get the response, and that's it. There will be no continuation in the conversation. So one of the unique things about our LLM is that we know how to continue the conversation. You will get the answer, you know. We also have the ability to create the prompt to the user, if you want to call it this way, as part of the proactivity experience. And then we have a real conversation. It's not just a one-time ping-pong between the user and the device and that's it.

John Koetsier (28:41.07)
Mm

Assaf Gad (28:41.385)
But we are not relying for the whole experience only on our LLMs. We are trying to leverage other LLMs that are out there. We are using Gemini by Google for specific things. We are using ChatGPT and OpenAI for others. We built what we call the orchestrator. At the end of the day, we have a layer that is smart enough to decide within the conversation which kind of resource we should use, which kind of

LLM should be used within the turn of the conversation. And the nice thing about it is that as a user, you won't notice it, because the other side of the orchestrator is also to create a conversation. Like, the tone of voice will always be with the character of ElliQ. So if ElliQ is more your…

John Koetsier (29:27.542)
Mm -hmm. No more from Siri. Hey, let me Google that for you.

Assaf Gad (29:32.617)
Yeah, exactly. Not that we are trying to hide it, it's just for the sake of keeping the companion experience as a companion. You don't want to kind of confuse it by having multiple resources and so on.
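The "orchestrator" pattern Assaf describes, one layer routing each conversational turn to a backend model while a persona layer keeps the companion's voice consistent, can be sketched roughly like this. Everything here is illustrative: the model names, keyword rules, and function names are assumptions for the sketch, not the company's actual stack.

```python
# Toy sketch of an LLM orchestrator: route each turn to a backend,
# then re-voice the answer so the user always hears one persona.

def route(turn: str) -> str:
    """Pick a backend for this turn (toy keyword rules, not real logic)."""
    text = turn.lower()
    if any(w in text for w in ("weather", "news", "search")):
        return "general-llm"      # e.g. a general-purpose hosted model
    if any(w in text for w in ("remind", "medication", "appointment")):
        return "task-llm"         # structured task handling
    return "companion-llm"        # proprietary small-talk model

def respond(turn: str) -> str:
    backend = route(turn)
    raw = f"[{backend} answer to: {turn}]"   # stand-in for a real model call
    # Persona layer: every answer is re-voiced in the companion's tone,
    # so the user never notices which backend produced it.
    return f"(warm companion voice) {raw}"

print(respond("What's the weather today?"))
print(respond("Remind me about my medication."))
```

The point of the design, as Assaf notes, is that routing stays invisible: the persona wrapper is applied to every backend's output.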

John Koetsier (29:48.174)
That makes sense, makes sense. Excellent. This has been a great conversation. Thanks so much for your time. I really do appreciate it.

Assaf Gad (29:54.807)
Thank you, John. It was a pleasure.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Can cat qubits save quantum computing?

cat qubits quantum computing

A whimsically-named quantum company named Alice & Bob actually has a quantum chip in the Google Cloud marketplace. Its “cat qubits” solve a massive issue that affects all other quantum chips. And it might just make quantum computing actually matter.

In this episode of TechFirst, we explore the fascinating paradox of building a quantum computer with Théau Peronnin, CEO and co-founder of Alice and Bob. We talk about the unique challenges and potential breakthroughs in quantum computing, discussing how Alice and Bob’s quantum chip aims to overcome the common problems of bit flips and phase flips.

Théau explains the concept of a universal quantum computer, the importance of error correction, and the revolutionary impact quantum computing could have on science, technology, and industry.

Watch: cat qbits and quantum computing

(Subscribe to my YouTube channel)

Subscribe to the audio podcast

 

Transcript: can cat qubits save quantum computing?

This is AI-generated … it is not perfect.

John Koetsier (00:01.56)
Will cat qubits reinvent quantum computing? Hello and welcome to TechFirst. My name is John Koetsier. Quantum computing sometimes seems a little bit like Tesla's full self-driving: huge, impressive promises every year for a decade, but nothing ever really seems to change. Maybe that's about to end. A whimsically named quantum company named Alice and Bob actually has a quantum chip in the Google Cloud marketplace. It solves a massive issue that affects all other quantum chips,

and just might make quantum computing actually matter. And we're going to talk to CEO and co-founder, Théau Peronnin. Welcome, Théau.

Théau Peronnin (00:39.762)
John, thanks for having me today.

John Koetsier (00:41.838)
Hey, super welcome to have you. You're in Paris. It's 5 PM. You're still talking, still in the office. I'm pretty sure that's illegal. I'm pretty sure the EU is going to persecute you or prosecute you or something. But thank you for joining us. I want to start off with a big general question. Why are you building a quantum computer? Why are you building a universal quantum computer?

Théau Peronnin (01:04.04)
Yeah, I guess the most important part of why, for me, it's such an important question, is the sense of wonder. I mean, quantum mechanics always feels a bit magical, but when you think about it, it's just the best description of nature we have.

And building a quantum computer is really that. It's harnessing nature's inner gears to try to leverage all that sense of wonder, all those exotic rules of the game, and actually crack mankind's most challenging issues with those inner gears of quantum mechanics. And that's really fascinating to me.

John Koetsier (01:49.066)
No question about it. Super fascinating. Should be to everybody. Maybe define it for a second. What is a universal quantum computer? We talk about quantum computing. We talk about quantum computers. We don't talk about universal quantum computers. What do you mean by that?

Théau Peronnin (02:04.373)
Yeah, what we want to emphasize by adding this word universal is that it's a general-purpose quantum computer. So here we need to pause for a second. Quantum computing or quantum computers are not meant for everything. I mean, it's absurd to try to use a quantum computer just to do a multiplication or something like that. But by stating that we're here to build a universal quantum computer, what we mean is

we're here to address all quantum algorithms with a single machine, just like the CPU in a classical computer can run basically any algorithm. It might be more or less well suited for some of them, but it's still very general, and that's what drove classical computing for 50 years at least. And so we're building the same thing for quantum computing. And maybe to start

what will come next in this discussion: the fact that some of those algorithms actually require being able to do a lot of quantum operations, to do what we call very deep algorithms. And to be able to run those, you need a machine that remains quantum throughout this lengthy computation. And this is actually absolutely not easy, not trivial at all.

John Koetsier (03:29.982)
So we’re not going to be doing word processing on a universal quantum computer.

Théau Peronnin (03:35.811)
Yeah, well, you might use it at some point to train an AI, but this is still pretty unsure. As for what quantum computers are really meant for at the moment, bear in mind that we're just getting started in discovering or inventing quantum algorithms. But from today's point of view on the capabilities of the machine, it's a machine made for problems that are,

let's say, small data, big compute. Let's say, for example: given a molecule, how would it behave, how would it react? Given an extremely large number, how can you factorize it to decipher an encrypted message, for example? So those are problems that require very few data inputs but, at the same time, to be solved on a classical machine could

require billions of years of computing on the best supercomputer.

John Koetsier (04:37.9)
Interesting. And so probably pairing it with classical computing in some scenarios is likely the future. So you can have sort of a general purpose compute platform, but you can assign tasks to the part that will do it most efficiently and most effectively.

Théau Peronnin (04:53.621)
Yeah, the quantum computer is really the heavy machinery, the big muscle you bring just to break that wall of computation. And actually, I mean, often I hear that quantum computers will speed up things, but it's actually not a matter of speed. It's such a speedup. I mean, we're talking about billions of billions of billions of fold speedup.

So it’s rather a change of possibilities of what you can reach, what is solvable with such a machine. And then, yeah, obviously you need to interface with the classical world. So you need a whole bunch of classical computing around it. And you can think about that just like the current craze that is happening at the moment with GPUs. A GPU cannot make a computer in itself.

It will still require a CPU, a central processing unit next to it to run and be integrated in an ecosystem.

John Koetsier (06:00.382)
Makes tons of sense. Now there's a problem, of course, with quantum chips, right? We've got bit flips and phase flips. They're kind of the bane of quantum computing. Talk about those a little bit and what you're doing to solve them.

Théau Peronnin (06:14.825)
Yeah, so that's a rather technical question. But first, I think we need to emphasize the fact that building a quantum computer in itself is a paradox. On one hand, you want your machine to behave quantum mechanically. And you, myself, we do not teleport. We're not in several places at once. We live in a classical and noisy world. So if you want something to behave quantum mechanically, it has to be perfectly isolated from the rest of the world. No information getting out of

John Koetsier (06:35.391)
Yes.

Théau Peronnin (06:44.715)
or entering your cat box for the experiment. Now, at the same time, you're trying to build a computer, a machine you can program, input data, output results. And to do that, you need to open channels to connect, to control it. So in essence, a quantum computer is sort of a paradox. And so what we're seeing at the moment is the fact that today's machines, the early days of quantum computers, are

very promising, but at the same time, they're definitely not delivering the exponential speedup, the whole change of computing era, they're promising. And the reason is that the noise from our classical world comes and pollutes the quantum computation. It creates what we call decoherence, the fact that those fragile exotic states become classical. Now, there are two ways

for a bit of quantum information called a qubit to become classical or to suffer errors, should I say. It can suffer a very classical one, which is the bit flip, which switches a zero into a one and vice versa. It can also suffer from a phase flip, which is a purely quantum error, switching the phase of a superposed state, zero plus one into zero minus one. Long story short, in quantum you don’t have one, but two fundamental possible errors. And they’re both equally important.

And just to state the magnitude of the challenge: today's machines, today's early-day quantum computers, usually make about one error every 100 or 1,000 operations. And that might seem not too big. I don't know how familiar you are. But to put it in perspective, this is a hundred billion billion times more errors than a classical machine. So they're basically noise-generating machines at this point, in some sense.
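The two error types Théau distinguishes can be made concrete with a toy model. Here a single qubit is just a pair of amplitudes for |0⟩ and |1⟩; the gate names are standard (X for bit flip, Z for phase flip), but the representation is a deliberately minimal sketch, not a real quantum simulator.

```python
# Toy model of the two fundamental qubit errors.
import math

def bit_flip(state):
    """X error: swaps the |0> and |1> amplitudes (the classical-looking error)."""
    a0, a1 = state
    return (a1, a0)

def phase_flip(state):
    """Z error: negates the |1> amplitude (the purely quantum error)."""
    a0, a1 = state
    return (a0, -a1)

zero = (1.0, 0.0)                            # the state |0>
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # the superposition |0> + |1>

print(bit_flip(zero))    # (0.0, 1.0): |0> became |1>
print(phase_flip(plus))  # (0.707..., -0.707...): |0>+|1> became |0>-|1>
# A phase flip leaves |0> essentially untouched, which is why this error
# has no classical analogue and needs genuinely quantum error correction.
print(phase_flip(zero))
```

This also shows why Théau says both errors are "equally important": correcting only one of them (as with the bit-flip protection of cat qubits so far) solves just half the problem.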

John Koetsier (08:35.693)
Yes.

Théau Peronnin (08:40.939)
And so we need to solve that. We dramatically need to solve that. And so the community actually came up with a breakthrough in the late 90s, early 2000s, which is the idea that you can apply methods of error correction designed for classical communications to quantum computing. And by doing so, you can correct those errors potentially faster than they happen, so that you can create arbitrarily good

quantum computers. The trick is, and this is really the tough part, that to correct for those errors, you need a tremendous level of redundancy. And here again, let me illustrate with a figure. If you want to run Shor's algorithm to break RSA-2048, because you want to hack Bitcoin or those kinds of things, well, the algorithm you'd want to run

theoretically only requires about 6,000 qubits. But with the standard approach, because of that burden of quantum error correction, you'll end up requiring 20 million of those. So that means that 99.9% of your qubits are not there to compute. They're there to correct errors. This is just how massive error correction is in terms of what makes up a quantum computer.
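The classical ancestor of the error-correction idea Théau credits to the late-90s breakthrough is the repetition code: store one bit three times and recover it by majority vote, which survives any single bit flip. This is only a sketch of the classical half of the story; quantum codes must additionally handle phase flips, which is where the enormous redundancy he describes comes from.

```python
# 3-bit repetition code: the simplest classical error-correcting code.

def encode(bit: int) -> list[int]:
    """Redundantly store one bit as three copies."""
    return [bit, bit, bit]

def decode(codeword: list[int]) -> int:
    """Majority vote recovers the bit despite any single flip."""
    return int(sum(codeword) >= 2)

word = encode(1)
word[0] ^= 1          # noise flips one of the three bits: [0, 1, 1]
print(decode(word))   # 1 -- the original bit is recovered
```

Three physical bits per logical bit is already a 3x overhead for one error type; protecting fragile quantum states against both error types is what inflates 6,000 logical qubits to the 20 million physical qubits quoted above.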

John Koetsier (09:52.408)
Ouch.

Théau Peronnin (10:07.851)
And so indeed, this is where Alice and Bob comes into play. And we found a way to directly embed, by design, error correction within the qubit, within the physical system that hosts quantum information. And by doing so, we dramatically simplify the machine, actually by 200-fold. So instead of requiring 20 million to run this sci-fi use case of breaking the

internet, basically, you'll only need a hundred thousand. So this is a machine that one can envision within the next decade for sure. And what is very exciting is that this sci-fi use case is actually one of the toughest. You have some very impactful use cases you can start to tackle much earlier on, as soon as you manage to correct for those errors.
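The figures in this exchange hang together arithmetically, and it is worth redoing the sums. The numbers below are exactly as stated in the conversation, so treat them as illustrative claims from the interview rather than official specifications.

```python
# Back-of-envelope check of the qubit-overhead figures from the interview.

logical_needed = 6_000          # logical qubits for Shor's algorithm on RSA-2048
standard_physical = 20_000_000  # physical qubits with standard error correction

# Fraction of the machine devoted purely to error correction:
overhead = 1 - logical_needed / standard_physical
print(f"{overhead:.2%} of qubits exist only to correct errors")  # 99.97%
# (Théau rounds this to "99.9%" in conversation.)

# The claimed 200-fold reduction from cat qubits:
cat_physical = standard_physical // 200
print(cat_physical)  # 100000 -- the "hundred thousand" he cites
```

The same arithmetic applies to the nearer-term target mentioned later: 100 logical qubits from about 1,500 cat qubits is a 15:1 physical-to-logical ratio, versus roughly 3,300:1 (20 million over 6,000) for the standard approach.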

John Koetsier (11:02.4)
Amazing. I was thinking as you were talking early on about the quandary of us being classical and the quantum computer being quantum, thinking: quantum is in heaven, we're on earth, and the two don't really communicate very well.

Théau Peronnin (11:21.353)
Yeah, and if you want to wonder a bit at how mind-blowing this machine is going to be: actually just designing the first, what we call the first logical qubit, the first bit of quantum information without errors, means that you have a piece of your machine that shares no causal link with the rest of the universe. That is perfectly decoupled.

So you have kind of created such a black box that is perfectly isolating part of your universe from the rest of it. And I think this is just from a philosophical point of view really mind -blowing.

John Koetsier (12:04.522)
It's incredibly mind-blowing. I mean, Schrödinger's cat comes to mind, right? Is it alive or is it dead? Is it a one or is it a zero, or is it a zero minus one? I know it's perfectly decoupled, but I need to couple, because I need to give it a problem and then I need to actually get the answer at the end. So I need to connect at some point. Paradoxical.

Wow. Okay. So you’ve done some cool stuff on error correction, reduced the number of qubits that you need by a factor of 200. What’s the impact of that? And how many qubits do you need for maybe not breaking the internet, decrypting Bitcoin, but doing truly unique and useful things?

Théau Peronnin (12:51.519)
Yeah, so the way we play it at Alice and Bob is that we're taking the challenges from the toughest to the simplest. And so we started with the biggest part of the challenge, which was escaping decoherence. And we're not completely done there. What we demonstrated, and what our chip available

on Google Cloud lets you do, is to witness that we solved half of that problem. We corrected for bit flips at this point, over eight or nine orders of magnitude. It's not completely done, but it's way good enough. Now, what we're working really hard towards with our Helium 1 chip, the next generation that hopefully we will put

on the cloud within the next year or so, is to correct for the remaining error, the phase flip, with a bit of redundancy. And this is sort of that Sputnik moment of decoupling from the rest of the universe, just like Sputnik escaped gravity. Now, the challenge that comes later on is to scale that machine, to add more qubits until we reach impactful use cases.

John Koetsier (14:08.333)
Mm -hmm.

Théau Peronnin (14:15.019)
For us, the first stable orbit is around 100 logical qubits. And given how efficient, how powerful our architecture is, this would only require about 1,500 cat qubits. So it's a big quantum computer, but it's definitely within the realm of today's enabling technology. One of our competitors, IBM, for example, is already operating about 1,000 qubits. So it's within an

John Koetsier (14:36.82)
Mm -hmm. Mm -hmm.

Théau Peronnin (14:44.203)
order of magnitude. But because each of our qubits is so powerful, you'll get to 100 logical qubits. And with that, the first use cases you can start to tackle are typically deep science use cases: understanding spin chains, doing deep physics. And this is very interesting for HPC centers, for governments or universities that want to push the forefront of science. Then as you add more and more qubits, you start unlocking more and more families of use cases,

John Koetsier (15:07.03)
Mm -hmm.

Théau Peronnin (15:12.075)
starting with material science, chemistry, a bit of optimization, then more chemistry, which is called biology, then comes all the finance with the Monte Carlo and all the stats. Then you have most of the engineering, with large matrix diagonalization and some assumptions. And yeah, then finally you can break Bitcoin and the internet.

John Koetsier (15:41.103)
So if all things go as planned, within perhaps a year or two, you might have the phase flip solved, or at least solved to an acceptable level. And then maybe a couple of years after that, or a year after that, then you’re thinking, hey, we can release an actual machine.

Théau Peronnin (15:59.967)
Yeah, let me try at giving a timeline. But as a physicist, I have to say this still might get a bit delayed. The first prototype of a logical qubit we're targeting for late this year, early next year. It will take time to write a proper scientific paper, so no rush there, but we're definitely not that far. Now, the next step

on our journey would be several tens of cat qubits. And with that, by 2026, we should demonstrate the first minimum viable product of a universal fault-tolerant quantum computer, where you have all the features you expect: how you do data, how you do logical gates, how you build the whole stack together. And a very bullish

roadmap, I have to say, aims for an industrial, impactful machine by end of 2028, early 2029, for these hundred logical qubits.

John Koetsier (17:06.19)
How physically large will that quantum computer be?

Théau Peronnin (17:11.851)
Actually pretty reasonable. So 1,500 cat qubits might very well fit in one of today's dilution refrigerators. So this is something we need to say: those chips are cooled down to very low temperature, about 10 millikelvin, which is basically 100 times colder than outer space. So it's really damn cold. And you now see them in pop culture and some TV shows. You see those

golden chandeliers, which is the inside of the quantum computer. But in terms of footprint, it only occupies, let's say, three to four square meters. And then you have a whole bunch of classical control electronics, the orchestra that governs, pilots, sends the signal in, analyzes and digitizes the signal that comes out of the quantum computer. And this might take a handful of

John Koetsier (18:01.408)
Mm -hmm.

Théau Peronnin (18:09.545)
racks, just like in a data center. Yeah, maybe three to six, something like that, depending on the progress we make.

John Koetsier (18:17.358)
What we see in the classical world when we want to make something truly powerful, whether that's a supercomputer or just a cloud setup, is we see massive parallelism, right? And we put 100,000 GPUs together and we put some fancy software around them and wiring and control and all that stuff. And boom, we have a supercomputer and it's super fast because it does stuff in parallel.

Does the same thing apply to quantum computing or not really?

Théau Peronnin (18:48.189)
Yeah, actually it gets much better, in my opinion. I don't want to spoil too much, but we should have a paper covering this in full. But the key message there is that as soon as you can do quantum error correction, it's fairly easy to interconnect quantum computers or quantum processing units, either within the same fridge or between fridges. And what is absolutely remarkable in quantum is that

as soon as you manage to interconnect them, they truly operate as one single large quantum computer. And that can be extremely powerful. So I'd rather say that what it lets you envision is a world where you can scale the machine on demand, depending on the size of the problem you want to tackle. And then in terms of parallelization, some quantum algorithms

try to sample, try to throw dice and get statistics, and here obviously you can parallelize. But this is not the case for all quantum algorithms at all. Some are absolutely deterministic. This is also a common misconception, that quantum computing is a type of probabilistic computing. It's not. Some algorithms might be probabilistic, but others might very well be deterministic.

John Koetsier (20:12.91)
Super interesting. Super interesting. Okay. Do you view this as a bit of an arms race? You've talked about a timeline. Your timeline is somewhere around 2028, 2029. There are many others trying to do the same. There's geopolitics involved, and there is the real possibility, if you invent something like this, of breaking the internet, as you said, decrypting Bitcoin,

breaking all cryptography. How do you view this technological challenge in sort of a political, social frame?

Théau Peronnin (20:51.935)
Yeah, so first let me get out of this joke of breaking the internet, because actually there are known algorithms, classical algorithms to encrypt classical data, that we don't know how to break with a quantum computer. We absolutely won't break all encryption. We're just going to break some. I mean, for Bitcoin or others, they just need to fork and change their algorithms.

John Koetsier (21:08.461)
Mm -hmm.

Just half. No big deal. 25%.

Théau Peronnin (21:22.183)
And since it's going to take a bit of time to build such a quantum computer, they have time to adapt. Now, in terms of an arms race, I don't know about the term arms. What I'm sure of is that there is a bit of a Los Alamos feeling. I don't know if you've seen the movie Oppenheimer, but in the sense that we're a bunch of physicists tasked with the mission of pushing the frontier of quantum physics or quantum information science,

John Koetsier (21:40.194)
Yes.

Théau Peronnin (21:52.475)
to not only push the science, but also immediately produce a usable technology in a very short timeframe, with pretty large resources, actually. So indeed, it's exhilarating. I mean, it's very exciting. And in that race, you have some very different types of players. You have some of the largest or biggest companies out there, the FAANG companies, and also some smaller startups.

But when you look into it, take for example us at Alice and Bob: we're about 100 people, we've raised about 30 million to date, and we'll soon announce a big Series B. At the same time, when you look at those very large corporations doing quantum, they're not that big in terms of their lineup of physicists. There is a possibility they might get a bit surprised by the outcome.

Now, in terms of geopolitics, which was your question, I'm not really concerned about the dual-use aspect of the technology. I think it's very marginal. What is more interesting is, I'd say, the economic sovereignty question. At the moment in Europe, you have this remark that just a fraction of the global GPUs are hosted in Europe. And there's all that question of who will control the infrastructure underlying tomorrow's economy,

because today's economy is governed by data, and very soon the level of innovation enabled by quantum computing will definitely push some geographies faster than others: those that have direct, close access, strong ecosystems. And so for sure governments have realized that, and they're trying to make sure the quantum valley, if there is one, will be on their territory.

John Koetsier (23:48.174)
Makes a ton of sense, and you can view that through the lens of the Digital Markets Act, which of course, I think 95% of the companies targeted by the Digital Markets Act are American, big tech basically, right? And so every couple of months there's a $3 billion fine, there's a $5 billion fine, there's a $2 billion fine, those sorts of things. And there's all this push and pull, and…

And Europe obviously wants to maintain some level of sovereignty and control. And if you get to quantum computing, then you unlock all the things that you talked about earlier in material sciences, in biology, in other things like that. And that is where there's real-world impact: medicines, or different processes for making maybe solar panels, who knows, that are 70% efficient rather than 30% or 20% efficient. Who knows, right?

So many possibilities from all that stuff. And yeah, we already see the results globally of chip making being really centralized in Taiwan and Korea, two very vulnerable countries if you look at them geopolitically, right? There's a chunk in the U.S., there's a tiny bit in Europe, and there's a bit elsewhere. And that's pretty much it. So interesting, interesting world.

I just have one question to end with. This has been super fascinating and a lot of fun. Why is the company named Alice and Bob?

Théau Peronnin (25:15.593)
Yeah, the name comes from sort of a private joke. Those are two placeholders used in textbook exercises or for experiments, actually widely used in encryption. But I didn't know that back at the time; I discovered Alice and Bob as characters in physics textbooks. And so they refer to point A and point B, physicists trying to make the thing a bit more lively, I guess. But the reason we picked it as a company name was

that we actually tried to avoid the word quantum. It's so polluted by pop culture to mean magical. There's nothing magical about it. It's counterintuitive, but whether you like it or not, this is our best description of nature. And so it's, yeah, this paradox we talked about earlier: building a quantum computer is just like a very tough textbook exercise which requires an elegant solution, which we're trying to solve with

by design error correction.

John Koetsier (26:16.898)
I really like that. And it brings to mind what you talked about up at the top of our discussion, which is that with quantum computing you're dealing with the fundamental substrate of reality, really, of everything, right?

That's fascinating. That's amazing. Our reality sort of emerges, bubbles up from that in some complex way that you probably understand a million times better than I do and probably are still baffled by.

Théau Peronnin (26:50.687)
Yeah, I mean, it's a never-ending question. I think the fact that we cannot easily explain what quantum is in general, and the fact that it's so counterintuitive, in my very personal opinion boils down to the fact that we haven't yet completely understood it. You know, there is this Feynman quote: if you cannot explain it, it means you haven't understood it yet. So that's kind of the point. And

to come back to your very first question, why am I doing this? This is actually one of my motivations as a former physicist: to try to better understand that. And there is this quote by Gaston Bachelard, a philosopher from the early 20th century: we understand nature by resisting it. And trying to fight the natural tendency of quantum states to become classical is, in my opinion, the best way to better understand quantum computing, and quantum in general.

John Koetsier (27:51.352)
That’s a great place to end. Thank you for this time.

Théau Peronnin (27:53.909)
Thanks!

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon's head of robotics. GitHub's CTO. Twitter's chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

AI-generated code: what 4000 developers do

AI writing code

When will AI replace developers? Or is it an if?

In this TechFirst we dive into a survey focused on how 4,000 software developers use AI to generate, test, and check code.

Justice Erolin, the CTO of BairesDev, recently surveyed over 4,000 developers globally. The goal: exploring how AI tools like ChatGPT and GitHub Copilot are being utilized in software development. We chat about how these tools are employed for code generation, scaffolding, and testing, and we discuss the potential over-reliance on AI and its impact on entry-level engineers.

We also highlight key findings from the survey, including surprising trends in AI tool preferences and the perceived productivity impacts.

(Subscribe to my YouTube channel)

Subscribe to the audio podcast

 

Transcript: 4000 developers on AI generating code

This is AI-generated; it is not perfect.

John Koetsier (00:01.87)
When will AI replace developers? Or is the right word if? Hello and welcome to TechFirst. My name is John Koetsier. We all know that AI is pretty amazing, right? And many of us create miracles with it daily. But can AI replace software developers? It's a good question. Who better to ask than developers themselves, many of whom, maybe most of whom, are using AI right now, already, today?

That's according to a new survey by BairesDev of 4,000-plus developers in countries all over the world. We're gonna learn all about that. To do so, we're chatting with the CTO, Justice Erolin. Welcome, Justice.

Justice (00:41.213)
Thank you for having me, John.

John Koetsier (00:43.06)
Hey, kind of pumped to have the discussion. I use AI every day. I’m assuming you do as well. How are developers using AI?

Justice (00:52.873)
Great question. So when we're looking at how our engineers, specifically the BairesDev engineers, use it, the top reason AI is being used internally is for code generation, or, another term, scaffolding. This allows an engineer to start a project very, very quickly.

John Koetsier (01:13.019)
What's that mean? Does it mean that they're using it for the entire project? Does it mean, I'm getting started, here's some code I can build on? What does that process look like once you start?

Justice (01:26.715)
So when we're looking at how an engineer uses it within their specific software development environment, whether it's through their IDE or through another tool, that code generation piece is at the beginning of their process. While they start off by writing just a little bit, tools like Copilot, whether it's GitHub's or Microsoft's, will come in and backfill or future-fill the rest of their code.

From there, they can make corrections, make edits, or move on to the next piece of code that they need to write. But that generation piece can happen anywhere within the software development lifecycle. So for example, if you are an automated tester and you want to write unit tests, you can highlight a piece of code and write a prompt that says, write a unit test for this specific function.
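
By way of illustration only (this sketch is not from the survey, and the function name is invented), the "highlight a function, prompt for a test" workflow Justice describes typically yields something like this: a small existing function plus an assistant-generated test class covering the happy path and an edge case.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """The kind of small function a developer might highlight
    before prompting: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Roughly what 'write a unit test for this function' might return."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)
```

The generated test is a starting point, not a finish line; as Justice notes later, engineers still review and edit what the tool produces.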

John Koetsier (02:20.75)
Mm-hmm. It's interesting because I saw the results of your survey and saw the most popular AI tools, right? And I'm just looking: ChatGPT is the most popular with 54%. We've got 30% of developers using GitHub Copilot, 17% Microsoft Copilot, and 12% using Gemini.

It's interesting because if you're using ChatGPT, that's not necessarily integrated into your IDE, is it?

Justice (02:51.881)
That's absolutely correct. That surprised me the most. And looking into this further, my expectation would have been that an engineer would use either GitHub Copilot or Microsoft Copilot as their main tool. But we're seeing more than half use ChatGPT as their main tool. So what I found out was that it could be a licensing issue. On our side, we just need to communicate a little bit better

and say, you have this license for free because we're already paying for it, please use it. But for the most part, I think many engineers know the, I won't call them open source tools, but the free tools out there that they can utilize. And I think that's why ChatGPT is the main leader.

John Koetsier (03:41.876)
It's also probably a bit of a habit, right? Like when Google came out, we got in the habit of using Google. I mean, now that I have ChatGPT on my phone, I'm like, that's interesting, what is going on there? I'll snap a picture as I'm passing something. I was in Toronto the other day and I saw these crumbling concrete pillars for an elevated highway or freeway. I was like, what causes that? And ChatGPT had all the answers, right? And so maybe you're just used to it in everything, and so you use it here as well.

Justice (04:08.999)
Yeah, I think that’s a great point. Habit is, I think the number one reason. It’s almost like how engineers use Stack Overflow. They don’t necessarily go in there and use a search bar. They use Google to search for Stack Overflow answers.

John Koetsier (04:23.554)
Exactly. I did see something from GitHub, I want to say two, three months ago or something like that, where they did some, I don't know if it was full-on surveying or research or whatever, but they were saying that a lot of developers are using GitHub Copilot similar to how you might have pair programmed in the past. Does that ring a bell?

Justice (04:46.131)
Yeah. There was also another way of looking at it, like the rubber duck method. Both, I would say, are early ancestors of using AI. Instead of paying for a second engineer to help you pair program, we've now presented a tool for that. But I think what you could call legacy methodologies for development are still being utilized today with more modern technologies.

John Koetsier (05:16.342)
I’m not familiar with the rubber duck method. Sounds like I’m having a bath.

Justice (05:21.006)
The rubber duck method is essentially the idea that you have a rubber duck on your desk, and you explain to that rubber duck what your problem is. And the act of just verbalizing the issues that you're going through, the problems in the code that you're reading, allows you to solve the problem on your own.

John Koetsier (05:41.054)
OK, OK, cool. I like it. I like it a lot. Where is AI good at code generation?

Justice (05:49.871)
Again, on the scaffolding side: just making a quick framework, developing out the baseline. What we're seeing is that most of our respondents are coming in and editing it, whether there are minor issues or slightly more complicated issues. But giving an engineer a starting point allows them to save time. So for example, if a piece of code took a normal engineer two hours to write,

and now all of a sudden it takes seconds because of GenAI or Copilot, they're probably spending anywhere between 15 to 30 minutes to edit and correct any potential mistakes or, let's say, integrations that they may need. That allows them to spend the next hour and a half or so working on more complicated issues: future integrations, edge case testing, et cetera.

John Koetsier (06:44.744)
It's really interesting to think about, right? Because most of the time, if you're building some kind of application, whether it's SaaS, whether it's an app, whether it's back office software, desktop software, almost everything that you're doing, I mean, people are solving problems in code that have been solved before, right? Obviously this is why we have functions, why we have modules you can bring in, and everything like that. But even so,

so much of what you're actually writing in code has been done before. So this is really an interesting way of using AI to handle maybe the more rudimentary or common aspects, while also giving you insights on tough stuff. But then, you know, the stuff that really requires specific knowledge about a company and its processes, or how this will be implemented, you can do that yourself. Does that make sense?

Justice (07:36.649)
Right. There are only so many ways to build a CRUD app. And that's what I would say 95% of every application is: a level of complexity above a CRUD app. But an engineer is not really there to create functions that do that. They're there to help translate the business requirements or the business logic, to apply CRUD to that specific need.
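
For readers unfamiliar with the term, a minimal CRUD (create, read, update, delete) layer looks roughly like the sketch below. This is purely illustrative, not BairesDev code; the table and column names are invented. The point Justice is making is that this shape is always the same, and the real engineering work is mapping business rules onto it.

```python
import sqlite3

# In-memory database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def create(name: str) -> int:
    """Create: insert a row, return its new id."""
    cur = conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def read(cid: int):
    """Read: fetch one row by id, or None if missing."""
    row = conn.execute(
        "SELECT name FROM customers WHERE id = ?", (cid,)
    ).fetchone()
    return row[0] if row else None

def update(cid: int, name: str) -> None:
    """Update: change a row in place."""
    conn.execute("UPDATE customers SET name = ? WHERE id = ?", (name, cid))
    conn.commit()

def delete(cid: int) -> None:
    """Delete: remove the row."""
    conn.execute("DELETE FROM customers WHERE id = ?", (cid,))
    conn.commit()
```

Because this boilerplate is so uniform, it is exactly the kind of code a generation tool can scaffold in seconds, leaving the engineer to encode the business logic around it.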

John Koetsier (07:58.124)
Mm-hmm.

John Koetsier (08:07.598)
Interesting. That just reminded me of the guy in Office Space who got fired. He says, I talk to the developers; they're not good with people. Exactly. Exactly. There are some who don't want to use AI. Some developers don't want to use it. Why are they not using it? Are they just old fuddy-duddies? Are they all-stars, and the AI could never create code as good as they do? What are the reasons?

Justice (08:16.518)
Yeah, as long as you send me the, what is it, the TPS reports, and you're good to go.

Justice (08:36.873)
What the respondents said, or sorry, about 40% of engineers said, is that GenAI has not freed up their time. I think there are a few reasons for that. One, they haven't yet learned how to use the tool, right? Any tool requires some level of aptitude to use it accordingly. Just because I have a hammer doesn't make me a carpenter. So we need to make sure that we're educated enough

to utilize the tool correctly. The second thing, I think, is more of a cultural issue. Engineers like to write new code. They like to build something on their own. Back to the carpenter analogy: just because you're a carpenter doesn't mean you love IKEA. But there might be some great carpenters that use IKEA for specific needs. So I think it's a matter of understanding when GenAI

John Koetsier (09:18.658)
Mm-hmm.

Justice (09:34.705)
makes a difference in your specific workflow.

John Koetsier (09:39.704)
Do you foresee it being a problem if some developers say, I'm not using AI, it's got some errors, it doesn't work, it's not the perfect fit for me, or something like that? Do you see that being a problem? How much more productive are you if you do use it in code development?

Justice (09:57.289)
So I think when we're looking at the overall results, whether you're talking about product managers, engineering managers, or CTOs like myself, we want to see the end result. If you have 40 working hours in a week, or let's call it 40 productive hours in a week, that's what our expectation is. Now, if you're not using GenAI, then the question will be: are those 40 hours as productive as they could be?

So we want to make sure that from an engineer’s perspective, they have every tool at their disposal to be as productive as they can. Now, if they’re running at 100 % productivity, regardless of the tools that they use, and they’re a rock star, I probably wouldn’t be the guy that says, no, you have to use this. But let’s be honest, not everyone’s a rock star.

John Koetsier (10:45.869)
Mm-hmm.

John Koetsier (10:52.974)
Not everyone's a rock star, and even if you are a rock star, do you want to rewrite the same function 500 times in your career, or do you want to just get it done?

Justice (11:02.417)
Right. I've never met a rock star who looks back at their previous code and says, yeah, that was a great piece of code. They always look back at it and say, you know, that could have been improved. And so many of the rock stars that I know actually want to find ways to improve their craft.

John Koetsier (11:18.074)
It's pretty interesting because I do a lot of writing, and I find generative AI very helpful, not to write for me, but for a lot of research, sometimes some background, other things like that. I find it a time-saver right there. In the actual writing, I want my personality to show. I want some humor. I want to bring up the points I think are important. I want to sprinkle in the quotes that really, you know, dress up

a point or an idea, and I find AI doesn't really do that. Is it somewhat similar in code, or maybe less so? Maybe there's less personality that shows up in code.

Justice (12:01.123)
Personally, I think it does show up in code in a certain context, mainly around comments and styling. But if you strip all of that away, I would say there's only a handful of times where I can see someone's personality coming through in a piece of code. And that poses a little bit of a danger for some of our engineers, because it means

that, to a certain extent, we are commoditized, right? Our outputs are fairly the same. So if that's all true, then we need to make sure that we have the right tools at our disposal, and that we're utilizing them as much as we can to get the right output.

John Koetsier (12:45.578)
It's pretty obvious, given what we've seen in the past few years, frankly, and even in recent history, that it's critical to write secure code, especially in networked applications, especially in applications running in places that are network accessible, right? Does AI help engineers write more secure code?

Justice (13:08.339)
So, yes and no. I think, one, you have to make sure you're prompting Copilot to give it that specific context. On top of that, you want to make sure that you're also aware of the potential pitfalls of unsecured code. So for example, I'm pretty sure that if I were to write a prompt right now, I could force Copilot to write

a piece of code that's easily vulnerable to SQL injection. It's a matter of how we are leading these AI tools. Are we providing them the context to write the most secure applications?
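
To make the SQL injection risk concrete, here is a minimal sketch (invented for illustration, not from the interview) of the kind of code an AI tool could plausibly generate if not prompted for security, next to the parameterized version a security-aware prompt should yield:

```python
import sqlite3

# Tiny in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name: str):
    # Naive string interpolation: attacker-controlled input becomes
    # part of the query itself, so it can rewrite the WHERE clause.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The unsafe version matches every row; the safe version matches none.
```

Whether the assistant produces the first function or the second often comes down to exactly the prompting and context Justice describes.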

John Koetsier (14:01.47)
It's pretty interesting, really, because if you think about it, I mean, it's amazing that ChatGPT is as good as it currently is at writing code. I've seen that personally. I've talked to developers who have used it and been very successful with it. It's a little more surprising that you can use a generative AI application that's attached to a development environment,

and it's not automatically, you know, applying some three laws of AI robotics or something like that, for security, for validating inputs, for checking, for integration, those sorts of things.

Justice (14:41.127)
I think that's where we'll get to. I think in the future, or at least in my perspective, organizations can start configuring how their AI, their within-IDE tools, behave. These are the style guides, these are the configurations, this is the context that we want to use, these are the places that you shouldn't even be touching. If we were to do that,

and also apply AI to, let's say, code reviews, automated testing, deployments, and integrate that within the CI/CD pipeline, I think that would be almost a heavenly approach to how development and AI can work together in a larger context.

John Koetsier (15:27.694)
Let's bring CrowdStrike into this. That's been obviously a massive thing, right? They're still getting flights canceled because of that one; Delta, I think, had something like 50,000 to 75,000 people miss their flights. And that's just the airline industry. There are many other industries. There were doctors' offices in the UK that couldn't open and manage their practice software.

And there were global issues here. How can AI help us avoid that sort of scenario?

Justice (16:05.667)
That's a very heavy question. So part of the problem with CrowdStrike was the launch of a content patch that addressed essentially null memory, or memory addresses that were non-existent. Could AI have found it prior to that release? Absolutely. Could a human have done it? Absolutely. So maybe

the problem wasn't a lack of AI; maybe it was a lack of process. I don't know enough about their internal processes to know what the solve was. But let's take AI today and CrowdStrike as it happened: could AI solve some of these issues? Not entirely, because you still needed a person to manually be at that terminal or at that machine to reboot, to go to safe mode,

to delete that file, and then restart. No network application could do that. If so, then Delta would have been back up by now and not experienced 31% cancellations yesterday, or Monday for those watching the recording. It's a pretty hard problem.

John Koetsier (17:04.45)
Mm-hmm.

John Koetsier (17:20.698)
Crazy. Yeah, yeah. And yet somehow CrowdStrike was set up to automatically push to millions of servers and computers across the globe. But anyway, we're not talking about that so much, though it is interesting. How do you see this evolving in the future? Code generation, the role of developers, and maybe even how many people can be developers?

Justice (17:47.475)
So there's probably a problem that I don't know is being solved yet, and it's around entry-level engineers. If AI is a great way for senior engineers to write a bunch of code very, very fast, well, that's what junior engineers are mostly there for. They're meant to come in and start filling in the blanks, and then the senior engineer will come and help massage that for a more production-ready environment.

So if AI is taking over what a junior engineer or an entry-level engineer normally does, then what's the role of an entry-level engineer? I think there's going to be a problem, one, for our educational system, to see how we can up the ante for those entry-level engineers. And two, how can we give those entry-level engineers the right experience to write prompts and to understand the

potentially missing pieces that they need to start looking for? And then three, how can they educate themselves better on potential new tools that are coming out?

John Koetsier (19:00.064)
It's an interesting challenge. You brought up entry-level engineers, and there'll be a temptation among them in particular to just let AI do everything and see how it all kind of works. Does it concern you at all that we might have an over-reliance? As we continue to abstract away

some of the tougher or more menial challenges, not just in coding with generative AI, but in many areas of our society, in engineering functions and scientific functions and other things like that, we sort of paper over those complexities and we just operate at higher levels. Do we forget how to do the basic stuff? And will that cause huge issues in the future?

Justice (19:43.282)
Absolutely.

Absolutely. I think that we ran into a very similar problem with network speeds. When I was starting off as an engineer, the maximum speed that you could utilize was 56k. This is back when modems made a sound when you got online.

John Koetsier (20:03.554)
Wow, you had 56K, how lucky was that? No, I’m just joking.

Justice (20:06.865)
I'm a little bit younger, a little bit younger than you, John. But when we're looking at how developers behaved back in the day, they wanted to make sure we were optimizing payloads very, very well. But now with broadband, and especially 5G on a device, I haven't really seen engineers talk about their full payloads and the time it takes to download a specific JavaScript file or an

image. It's almost like it doesn't really matter. So in the future, I think we're going to hit that same problem, where some of the baseline scaffolding and understanding of how to set up your development code will be an issue later on. And you might have to call in some of the old folks like me to help out.

John Koetsier (20:58.529)
Mm-hmm.

John Koetsier (21:05.742)
Where are all those mainframe engineers when you need them? Wow. Very cool. Super interesting. Maybe we'll end with this. Of all the results of the survey you did, I believe about 500 engineers that you surveyed about their use of generative AI, what was the most surprising result?

Justice (21:07.655)
No, exactly.

Justice (21:28.265)
The most surprising result? Let me scroll down to where I can see it. We mentioned this previously, but it was around the tool that they utilize, and that's ChatGPT. Despite Copilot being integrated within their IDE, ChatGPT is still number one. And I think habit, as you said, pushes a lot of that. So I think when we're looking at

John Koetsier (21:40.216)
Mm. Mm-hmm.

Justice (21:56.041)
what that means for GitHub and for Microsoft, they've got to start breaking habits in the future. There's a lot of mental space that ChatGPT is taking over, so we should be cognizant of what that looks like in the future. And then the next thing I was looking at was the 24% that said there was no change in their quality of work with the use of GenAI.

I wonder if it's because they're not seeing what the difference is, or they're not utilizing the time. I have, obviously, a personal mission to find out what their anecdotal experience looks like.

John Koetsier (22:26.456)
Mm-hmm.

John Koetsier (22:45.75)
Yeah, yeah, that is super interesting, right? Like, was your code just perfect before and still is, or are you just not seeing those issues? It is really interesting that ChatGPT is being used there. I mean, the way to solve that, and the way to solve, you know, habit and going to other outside products, is to make something that's way better. You've got to believe that GitHub and Microsoft, I mean, they know a little bit about development. They have a lot of source code that they can look at. You've got to think they could get better.

Justice (22:50.217)
All right.

John Koetsier (23:14.776)
ChatGPT has been amazing, but where has it gotten all of its learning and training data from? Probably from GitHub. Exactly. Exactly. Well, anyway, this has been great. Thank you so much for your time. I really do appreciate it.

Justice (23:23.793)
GitHub public repos exactly.

Justice (23:32.915)
Thank you, John.


AGI: solved already?

AGI solved

Have we already achieved AGI?

OpenAI just released GPT-4o. It's impressive, and the implications are huge for so many different professions … not least of which is education and tutoring. It's also showing us the beginning of AI that is truly present in our lives … AI that sees what we see, hears what we hear, doesn't exist just in a box with text input, and hallucinates less.

What does that — and other recent advancements in AI — mean for AGI?

In this episode of TechFirst, host John Koetsier discusses the implications of OpenAI's GPT-4o release and explores the current state and future of Artificial General Intelligence (AGI) with Roman Yampolskiy, a PhD research scientist and associate professor.

They delve into the rapid advancements in AI, the concept of AGI, potential impacts on different professions, the cultural and existential risks, and the challenges of safety and alignment with AGI. The conversation also covers the societal changes needed to adapt to a future where mental and physical labor could be fully automated.

00:00 Exploring the Boundaries of AI’s Capabilities
01:36 The Evolution and Impact of AI on Human Intelligence
03:39 The Rapid Advancements in AI and the Path to AGI
06:38 The Societal Implications of Advanced AI and AGI
09:27 Navigating the Future of Work and AI’s Role
14:52 The Ethical Dilemmas of Developing Superintelligent AI
19:22 Looking Ahead: The Unpredictable Future of AI

Subscribe to the TechFirst audio podcast

 

Get a transcript of this episode of the TechFirst podcast …

Roman Yampolskiy: If you look at all possible tasks humans engage in: it speaks every language. It can write poetry, generate art, play games. No human being can compete in all those domains, even very capable ones. So truly, if you average over all existing and hypothetical future tasks, it's already dominating, just because it's so universal. It's beyond what a typical human is expected to do.

John Koetsier: Have we already achieved AGI? Hello and welcome to TechFirst. My name is John Koetsier. OpenAI just released GPT-4o. It's impressive. The implications are huge for so many different professions, not least of which is education and tutoring. It's also showing us the beginnings of AI that is truly present in our life.

It sees what we see. It doesn't exist just in a box with text input. It hears what we hear. It hallucinates less. What does that, and other recent advancements in AI, mean for AGI? To chat, we have Roman Yampolskiy. He's a research scientist. He has a PhD in computer science. He's an author. He is an associate prof at the University of Louisville.

Welcome Roman.

Roman Yampolskiy: Thank you so much for inviting me.

John Koetsier: Hey, super pumped to see you. Last time we saw each other was at the Beneficial AGI conference in Panama, which was great fun, and Panama was a wonderful place to be. Hope you enjoyed that.

Roman Yampolskiy: Oh yeah, I loved it. I discovered Panama. That was awesome.

John Koetsier: Exactly.

And the Panama Canal, which was cool. I wanna kick this off by just reading what you recently posted: in my opinion, current AIs, when their performance is averaged across various tasks, already surpass the intelligence of the average human. While top individuals still outperform AI in many areas, this gap is rapidly shrinking.

That sounds AGI-ish.

Roman Yampolskiy: Well, the best definition of intelligence, in my opinion, comes from Shane Legg. To simplify it, he says it's the ability of a system to win in any environment. So we're not talking about narrow systems. If you look at all possible tasks humans engage in: it speaks every language.

It can write poetry, generate art, play games. No human being can compete in all those domains, even very capable ones. So truly, if you average over all existing and hypothetical future tasks, it's already dominating, just because it's so universal. It's beyond what a typical human is expected to do.

We also, for some reason, think very highly of humans. I don't know if you get to interact with average people, but half of them are below average; that's not an impressive level of performance.

John Koetsier: You remind me of the quote, I forget who it was. I'm not sure if it was Robert Heinlein or somebody who said it... no, it's probably George Carlin:

think of how stupid the average person is, then realize half of them are stupider than that, or something like that. Of course, we all think we're above average. Everybody thinks they're above average. That is not always true, especially across different domains. I'm sure I'm below average in many areas. So, in any case,

Roman Yampolskiy: Of course, absolutely true. But we do have measures of general intelligence, and with those you can pretty accurately assess whether you are at the top or not.

John Koetsier: Yeah. So this is a massive, big deal, right? And we've always had this thought that after AGI, the singularity, everything changes. And it changes instantly, quickly; the chart of progress just goes straight up. Is that a false thought, perhaps?

Roman Yampolskiy: Well, it is happening pretty fast. The change is happening weekly; every week there is a new model with new capabilities, improvements. It may not seem like it's going straight up, but if you zoom out, let's say there are 70 years of research in AI, most of the progress is within the last 5% of that timeline.

So it is starting to look pretty steep, and as it gets more general, I think it will accelerate in terms of being able to absorb new knowledge, new capabilities. So it may not be instantaneous in the sense of one second after a model is released, but, like, it takes 21 years to raise a human,

and they're known to be general intelligences. So if this takes three years to get to superintelligence, that's pretty quick.

John Koetsier: Yeah. What's another prerequisite for a singularity-type event? Because you can have this superintelligence, but if it's only, like, a genie in a bottle that we summon and put back in the bottle... it has to have some life of its own,

Does it not?

Roman Yampolskiy: So, the switch from tool AI, where it just listens for your commands and tries to fulfill them, to agentic entities with ongoing sets of plans and goals, which can create additional plans and goals. I think that's the game changer. And quite a few of those companies are now talking about creating agents for businesses, agents for societies of agents to interact and get better performance by having so many of them, kind of a wisdom of artificial crowds.

John Koetsier: You also just quoted Sam Altman in another post. You should bring up that quote right now. The one about it, it’s not a

Roman Yampolskiy: quote of Sam, to be fair. It’s a journalist writing a very not funny headline. Sam did not explicitly say the. Phrase. It’s basically, what was the phrase? So the common dream of everyone is to have a killer app.

John Koetsier: Yes. And they’re

Roman Yampolskiy: talking about agents being a killer function of ai.

John Koetsier: Yes. Killer paraphrase of what

Roman Yampolskiy: Sam said, but still not the best choice of works is, could be really happening very soon.

John Koetsier: Yeah, we did just go to the beneficial a GI conference, not the killer, a GI conference. Not saying those don’t exist.

There are people building military AI. So talk about what this changes, right? Because we've seen the internet, we've seen the price of information, the price of data, approach zero, right? As you look at the field of robotics, you can extrapolate out. And while it's very expensive right now to build and field and ship and use robots, you can see that the price of physical labor will approach zero.

With AGI, it looks like it will beat that: it'll beat blue-collar labor. White-collar labor could be at risk first; the price of mental labor could approach zero. How does that change everything?

Roman Yampolskiy: So it really depends on how far in the future you're trying to make your prediction. We used to say long term was 20 years, 30 years until AGI; short-term problems, technological unemployment, were more immediate.

But now most predictions, prediction markets, and top people are saying we're three to five years away from AGI. So that completely changes our concerns. For me, it's existential risks. If you're still concerned about technological unemployment, then we're really looking at all jobs as automatable. It's not just low-level or specific occupations.

Really, anything can be automated, and it looks like the robotics industry is catching up. There are multiple humanoid robot models which are quite capable already, and the progress is also exponential. So even physical labor, the difficult task of being a plumber or something like that, could also be automated.

John Koetsier: That’s a pretty good future if we do it right, that’s a pretty awful future if we do it wrong.

Roman Yampolskiy: Well, even if we do it right in the sense of not getting killed by it, it's not obvious that people are happy with nothing to do. We all depend on having a place to go in the morning, and a lot of people derive meaning from being a speaker, a writer, a comedian, whatever it is you are self-identifying as. And if all that is gone, it's really a cultural crisis we're not prepared for. We talk very commonly about existential risks, suffering risks. We coined the term i-risks: Ikigai risks, meaning your meaning is stolen from you.

John Koetsier: My hope and dream is that we'll find different ways of creating meaning that are not necessarily related to a job that provides the necessities of our life. But obviously that remains an open question. I wanna talk about one of the things that we brought up at the Beneficial AGI conference in Panama.

We talked about LLMs, and most people there, I think, maybe this is my perception, maybe I'm wrong, most people there seemed to think that LLMs by themselves were insufficient for AGI, that you needed some other components, whether it was like a superego to the ego, or whether it was like an agent-type mechanism to direct.

What are your thoughts on that?

Roman Yampolskiy: Well, we haven’t seen diminishing returns yet. Every new model is a lot more capable than the previous one. Like nobody even knows what GPT one was able to do. GPT two was like, oh, that’s really cute. Put some money in it. But three, we, we really were impressed and now we at four and five.

Sounds like it’s going to be pretty close to a GI if it’s not there yet. The same process with just tying it in with, you have perfect memory. You have access to internet, you have multi-agent architectures. You can brute force a lot of narrow domains through two ai that will already again, be able to out compete most people in most occupations.

I think a lot of jobs today. Don’t have to exist at all. They’re BS jobs and they’re there for historical reasons.

If we truly wanted to automate a lot of low skilled labor is fully automatable today.

Roman Yampolskiy: You have your USPS mail delivery. You have your taking orders at McDonald's. All that we can do today.

It would be nice if we had a plan for what happens when all the jobs are gone. It's a big cultural paradigm shift. You cannot just do it overnight. You have to really change society. You have to change opportunities for people to engage with something productive. So those are big problems, and I think no one's spending enough time looking at them.

John Koetsier: Let's just amplify those last words that you just mentioned, because we're in an incredibly diverse, divisive era right now. There's so much anger and hatred, even politically, and that's global, right? I see that in the United States where you live. I see that in Canada where I live. We see that in Europe, all sorts of places.

Different ideas about what should happen with immigration, different ideas about what should happen with culture. The woke mind virus that some people are complaining about. This whole culture war that's going on. We're focusing on all these things, and our politicians are focusing on many of these things as well, including regional wars and other things like that.

And all these things are small little issues if you see this massive wave of change that is totally going to reinvent human society: what it means to work, what it means to think, what it means to have a job, how our economy is structured, how we allocate resources, who has power, who does not have power.

It seems like 99% of the planet has no clue that there's this wave that's about to hit.

Roman Yampolskiy: That's about right. And that's why I never waste my time on any of those issues. I will not be on the internet debating local governance.

John Koetsier: Smart man. Let's chat a little bit about OpenAI. You mentioned the GPTs that they've come out with.

I led off by talking about GPT-4o, the latest. They've had some shakeup, obviously. Sam Altman was briefly out, what was that, a year ago? Half a year ago? Then back in. Now the chief scientist, Ilya Sutskever, is out, and a couple of others as well. There was some talk, back when Sam was initially turfed, that people were revolting, a few, not many, because they felt like we're approaching AGI here and it's uncontrolled and we don't know where this is going, and we're freaked out.

Do you see any of these current shifts as part of that fear?

Roman Yampolskiy: So I have no insider knowledge. I don't really know why so many top safety people at those groups resigned. They also don't disclose it, and I think that's not good. They sign NDAs and they're not allowed to really say what happened there. I would be happier, if they were unhappy and something like AGI was internally developed, if they stayed behind and did something to mitigate the risk from inside rather than just quit.

Roman Yampolskiy: If you have a security guard at the mall and shooting starts, you don't want him to quit. You want him to take responsibility, and if he does run away, we can hold him legally responsible for failing at his duties, I would hope. It's the same here. You were hired to do safety work. You were letting all of us down.

John Koetsier: Yeah, well, I guess we'll learn more about that in the future. Hopefully not as a singularity begins, but we'll find out. I recently chatted with Dan Faggella on TechFirst, and of course he was at the Beneficial AGI conference, and he talked about different approaches to AGI. Some approaches are like, forget it, don't even start. Other approaches are, hey, do it, but we need oversight. Other approaches are, go full bore, no worries, no protections. Where do you fit on that spectrum?

Roman Yampolskiy: So we need to be specific about what type of AI we're referring to. Narrow AI systems are incredibly useful. They are tools. We should develop them.

They are great for research, for medical work. I strongly encourage monetizing them; deploying them is wonderful. Creating superintelligence, a truly general, more capable system which we cannot control, sounds like the dumbest thing we can possibly do. We'll build our own replacements. So unless you have a working safety mechanism in place, which no one claims to have, just don't work on more capable general AI.

John Koetsier: I don't see any way to put the genie back in the box. I don't see any way to stop development there. Certainly not across companies, certainly not across all nations. And there are very open questions: if you actually did develop a superintelligent agent, how would you control it? What does safety even mean there?

You can have all the systems you want, but if you've got something that's 10 times, a hundred times as intelligent as you are … well, we've seen that pretty much every security system that we've ever built is hackable, and for a superintelligent agent, good luck.

Roman Yampolskiy: Right, and it makes perfect sense out of personal self-interest. No one should be developing those systems. They would get you killed. If you are a young, rich guy, you have a billion-dollar startup, why would you wanna destroy all that? It sounds like there should be enough of those convincing arguments to convince them not to go in that direction. They would not even be known as a bad guy in history, because they'll destroy history.

John Koetsier: Yeah, we also invented the atom bomb and the hydrogen bomb and many other things like that. And I don’t think that many of the people who are building these have the ability to stop. They’re wired to keep opening the next door.

Roman Yampolskiy: I'm not saying you are wrong, but it seems like this is our best chance: to present convincing enough proofs of impossibility and deploy them to people who are smart enough to comprehend them.

Roman Yampolskiy: They're smart enough to build those systems. They should be smart enough to understand you cannot build a perpetual safety device. It's like a perpetual motion machine. You need to always have it right: GPT-5, 6, 7, 400. It can never be unsafe.

It can never have a single bug. Despite learning, despite self-modifying, new hardware, malevolent users, nothing should ever produce a single bug, forever. That seems like a difficult challenge to me.

John Koetsier: I a hundred percent agree. I a hundred percent agree. And given that, it's unlikely we'll produce just one AGI. If we do actually produce an agentic and self-aware one … I don't even know if that's required, we'll get into that.

But if we're gonna produce many of them, some of them are going to be different. They're going to have different ideas and goals. Let's talk about that consciousness thing. Is that a requirement for AGI? Does an entity need to know it exists, and be capable of contemplating its own existence, to be an AGI? Probably not, 'cause you already said that you think GPT-4o is pretty much AGI as it stands.

Roman Yampolskiy: So those are two different concepts. I think self-awareness, in the sense that you understand you are an agent within a world model and you understand how you impact the world and how the world impacts you, is necessary, and I think those systems can do that.

Internal states of experience, qualia, pain: completely unnecessary for being a capable optimizer. I have no idea if you feel pain. I never tested it. I trust you when you say you do, but that's not relevant to anything.

John Koetsier: Love it. Okay. Let's turn our eyes towards the future a little bit. Peer into the crystal ball. You've cited the three-to-five-year prediction that many have, given that the prediction markets are saying, hey, that's AGI, and you've already said, hey, what we have right now is pretty close, basically, in some senses.

What do you think the next three to five years look like?

Roman Yampolskiy: It sounds like they're gonna continue releasing more and more capable models. It's like watching a kid grow. You had a 5-year-old, now it's a 7-year-old. What's the difference between the two? It's hard to pinpoint specific major milestones at that age range, but clearly they're becoming more capable, and at some point they become smarter than you, hopefully.

John Koetsier: Well, and it sounds like that's a reality we are entirely unprepared for as a culture and as a world.

Roman Yampolskiy: And again, I don't think you can prepare for something smarter than you. The whole point is there are unknown unknowns. If you were capable of making those predictions, you would be that smart.

We know the systems are unpredictable. They're too complex to understand. We cannot comprehend sufficiently large explanations, so there are well-known limits to what can be done in this space.

John Koetsier: David Brin, science fiction author and also astrophysicist, was at the Beneficial AGI conference as well, and felt like, hey, AI is coming, AGI is coming, it will be dangerous, and the best option for us is that AI polices AI. Agree, disagree?

Roman Yampolskiy: I'm not sure how that could be implemented. You're basically requiring a Catch-22, where you have a friendly superintelligence to help you develop other friendly superintelligences, monitor them, supervise them. If there is an adversarial relationship, now we're collateral damage in these AI wars.

But the bigger problem is we don't have an already-aligned police-officer AI. And you cannot have narrow systems monitoring general systems, and all we can verify are narrow systems.

John Koetsier: Yeah, pretty challenging, the problem of intelligence, right? It's like our cat trying to police us. They can influence our behavior, but only as far as we want to be influenced.

Roman Yampolskiy: Yeah, I haven't seen examples where a lower-level intelligence can indefinitely control, not just influence, but control, a higher-level intelligence.

John Koetsier: Agree. I agree. Excellent. Well, thank you so much for taking this time, Roman. Do appreciate it.

Roman Yampolskiy: Thank you for inviting me again.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Moon first, then Mars: a chat with astronaut Jack Fischer

moon then mars

Today we have a special privilege: we're talking to an astronaut who has spent 136 days on the International Space Station and completed two spacewalks. He's also the mission director for the recent Intuitive Machines lunar lander, the first US mission to the moon in more than 50 years.

His name is Jack Fischer, and here is our chat:

(Subscribe to my YouTube channel)

In this episode of TechFirst, we chat with astronaut Jack Fischer. He describes the awe-inspiring experience of space travel, including the different perspective gained from 250 miles up. He humorously recounts adapting to zero gravity and the physical relief it provided for his neck and back. The conversation covers Fischer's role as mission director for the Intuitive Machines lunar lander and the significance of these missions.

We dive into the technological and cooperative efforts required for future Mars missions, including efficient propulsion and collaboration across industries. Fischer keeps the discussion engaging with anecdotes and enthusiasm for space exploration, highlighting recent advancements and the potential for a lunar economy.

Subscribe to the audio podcast

 

Get the podcast transcript

Jack Fischer: I went up on a Soyuz vehicle with the Russians … that head shroud came off the windows. Light starts coming in, engine kicks off. You’re in space, you’re floating. You look outside, there’s this thin blue line of every living thing on the planet that is so dramatic from 250 miles up.

John Koetsier: What's it like to go to space? Hello and welcome to TechFirst. My name is John Koetsier. Today we have a unique privilege. We're talking to an astronaut who spent 136 days on the International Space Station and completed two spacewalks. He's also the mission director for the recent Intuitive Machines lunar lander, the first US mission to the moon in more than 50 years.

His name is Jack Fischer. Welcome, Jack.

Jack Fischer: Thanks for having me, John. I'm excited to be here.

John Koetsier: I'm super pumped to have you. I know I'm calling you Jack, but maybe I should call you TwoFish. I think that's your call sign from the US Air Force days.

Jack Fischer: I grew up when it was okay to make fun of your name, so … lots of ways to make fun of Jack, and TwoFish was probably the easiest to stick with.

John Koetsier: So I wanna chat with you about a lot of stuff: where we're going in space, the next missions to the Moon, Mars, all that stuff, even beyond. But I wanna start here. What's it like to go to space?

Jack Fischer: Man, John, it's as cool as you think it would be. Unfortunately I'm not as much a poet as I am a pilot, so my words sometimes don't do it justice. But when I launched, you're really in the moment. You're doing your job going uphill. It's when the head shroud comes off.

I went up on a Soyuz vehicle with the Russians. That head shroud came off the windows. Light starts coming in, engine kicks off. You're in space, you're floating. You look outside, there's this thin blue line of every living thing on the planet that is so dramatic from 250 miles up.

We've all been on airliners. We've all seen the curvature of the earth, and you get to start to see the atmosphere. But 250 miles up, you get to really see a different perspective. And that's what's so cool about it, because everything changes. Everything you've known, all those rules: if I drop something, it falls; I gotta walk on the ground; or nope, I can float. All of these rules and parameters and constraints that have restricted you.

Your mind is allowed to redefine itself, to look at everything in a new way, and it's magical. So for me, I got up there and I was lucky enough that my body figured it out pretty quick, with the exception of just me being silly sometimes, trying to fly around like Superman and running into things.

I absolutely loved every minute of it. It was just a blast.

John Koetsier: I have not heard of astronauts on the ISS or previous missions maybe dislocating a shoulder or breaking a bone ’cause they were doing the Superman stuff, but I’m sure there’s been …

Jack Fischer: There has to be, and there are definitely some times when you're lucky that you don't whack your head or something. The first couple weeks … I'm an old pilot, so my back and neck don't feel good anyway, and you go up there and all of that offloads. So I actually grew a couple inches and my back felt great, and I was flying around and doing all this stuff, and I was on the video chat with my wife and I'm like, my back feels great.

I'm sleeping great, but the back of my neck hurts. She goes, well dude, why are you flying around like Superman all the time? I'm like, 'cause I can. And she's like, can't you fly any other way? I'm like, good point. And so then I was doing like a low rider, like I'm on a Harley, or spinning, or flipping, and it changed the experience.

So props to my wife for giving me some good advice there.

John Koetsier: Yeah. Anybody with back or neck pain, all you gotta do is come up with, I dunno, 20 million, maybe 20 billion, go to space and live there forever.

Jack Fischer: Feels great. Coming home is a little less fun, but going up, it's a good time.

John Koetsier: Now, you said you adapted pretty quickly, right? Because we all have this sort of mental model of how the world works. I’m sitting on a chair right now. I trust that I’ll stay here and not start floating off. My phone is down on the desk. I trust that it’ll stay there.

How long does it take for most people to rewire their nervous system … their understanding of how the world works, how physics work when they go into a null-G environment?

Jack Fischer: It depends on the person. We haven't quite figured out physiologically why some people feel really bad and some people don't. I was one of those lucky ones that don't. But it's not necessarily tied to what you did on Earth. I'm a test pilot. I'm supposed to be able to do things that make billy goats puke, but test pilots don't necessarily do better than anybody else when you go to space.

So we're really trying to understand that better. As far as training your mind, I think for test pilots, the training that you get, being comfortable being uncomfortable, and rewiring for a new aircraft, is what you're doing in space. So knowing that if I have a wrench that I set here and don't impart any force, it'll stay there, and then I can go and do this and it's still there. You get better and better as you go. I think most people about a month or so in are pretty proficient. And then, certainly by the end … I got to fly with two very experienced people, Fyodor Yurchikhin and Peggy Whitson.

And they're at a completely different level. Peggy can float down a hallway and she doesn't even know she's doing it; she kinda moves her leg over here and then floats this way. It's insane. She's so good. Her brain is at a different level just because of all that experience. So you get better and better.

But rough cut … a couple, few weeks in, you're probably pretty good.

John Koetsier: And then you wake up your first morning back on Earth and you try and float down the hall, it …

Jack Fischer: That does not work as well and it is not as fun. No. Coming home, it’s rough.

John Koetsier: Wow. There's so much more that I wanna know there, and I'm sure there are billions more like me. Maybe one final question before we get into some of the other stuff. You said you're not super poetic, but maybe wax a little poetic for us. You're on the launchpad, you're sitting on a time bomb.

A controlled time bomb. It starts to go, and I know you're super trained, you got a million things and you're thinking about all the technical stuff and everything, but there's real danger as well. These things are not safe in the terms of most vehicles that people go in, that we feel are safe.

What do you feel? Is it a bigger kick than anything you had in a fighter jet?

Jack Fischer: It's actually not. So I went up on a Soyuz. It's a liquid rocket, so the acceleration isn't that great. It's great, you go orbital in eight minutes, but it's around three Gs, and it's smooth. The ride is very smooth. And folks on, like, the Falcon in the Dragon capsule get a similar experience.

The folks who rode the shuttle with those big old solids, those things go fast and they shake, and you're hanging off the side, off the center of gravity. So you're on a diving board, if you will. And then all that rumbling translates into the seat. So that is a very different experience.

As far as fear or those types of feelings, my whole career has been as a fighter pilot, combat pilot, test pilot. I think fear is a good thing. I think it brings your A game and it makes you sharp. But you can only use that for the things that you can control. If I'm gonna be fine pink mist in 20 seconds 'cause this rocket decides to blow up, there's not a lot I can do about that.

I need to nail the stuff where I can affect the outcome, where I can effect change, and I need to focus to do that. So, when I was flying … you've seen Top Gun, where it's the beautiful little anthem, and then Tom Cruise salutes the crew chief, and then it's danger zone, right? When I was flying, I always appreciated that moment.

And same thing on the pad. Sitting there, you got to choose a song, and I chose "The River" from Garth. And that's when all the emotions are going through you. When you salute the crew chief, it's time to go to work, and then you focus. And so I took that moment, what the words of that song meant to me and this life goal that was about to happen, appreciated it, put that away, and then saluted the crew chief and went to space.

John Koetsier: Amazing. You’re now a VP at Intuitive Machines. We will talk a little bit about what you’re doing there and what you have done as a mission director and stuff like that, but maybe big picture, what does that company exist to do?

Jack Fischer: It exists to defy the impossible, to really break down barriers and make a blueprint for how we can expand into the solar system, starting with the moon. We love those hard challenges, and we build and we do.

There are a lot of people out there that make really great PowerPoints and talk a great game, and that's great. But my love of space and flying and everything was ignited in my childhood, when I was six years old, visiting my grandpa here at Johnson Space Center, and I saw that big old Saturn V sitting on its belly. Just seeing it, being in its presence, realizing that humanity came together to do something so incredible, and did it: that's what fired me up.

And as much as I love the space station, it was a great place to live, mind-blowing for me, but it doesn't connect with that many people, because it's a bright spot in the sky that, if you know where you're looking, you can see. The moon? Mankind has been staring at that baby for as long as we've been around.

It's an emotional, passionate connection that we have. And it's a full moon right now. Last night I was standing out there walking my dog and looking up and going, holy cow, Odie's up there, we did it. And that doing, and the inspiration that comes from actually accomplishing …

I think that, and being part of the whole process, a part of this team, to accomplish something that we haven't done since 1972 in this country and that no commercial company has ever done: that is probably the crowning achievement of my entire career, by far.

John Koetsier: Amazing. This is from a man who's been to space, been a test pilot, spent 136 days in space, and been the mission director for the mission that got a lunar lander to the lunar surface. Incredible. The moon is incredible, right? It is visual, it is visceral. It's part of our culture, it's part of our heritage.

It's been the most visible part of space, if you can put it that way, in the night sky. It's also a place where we can spend time, right? You can imagine us carving out living spaces that are protected from cosmic rays and solar radiation, with some regolith over our heads and that sort of thing, and as much space as you care to dig out, and maybe even find some combustibles and some water ice and other things like that, right?

So there are real prospects for living, and there's gravity, so your body should hopefully function quasi-normally, right? There's low gravity, but it is an interesting place to establish …

Jack Fischer: Oh, absolutely. And as a company, in addition to the lunar transportation systems, the lunar access vehicles that we build just like Odie (our next mission is coming in November), we also just won a contract, kind of a down-select, for an astronaut moon buggy, the lunar terrain vehicle. As you said, the lunar environment does give us that platform to launch into the solar system from, to learn how to survive and thrive on other celestial bodies, to get the technologies that we need hammered out, and to really reduce the cost. Low Earth orbit wasn't as prolific as it is now until we got the cost to launch down. Thanks, Elon, and the others. So we are trying to do the same thing for the moon. We're trying to develop the engines, the structures, the guidance and control, all of the things that you need to reliably, efficiently, and cheaply,

if you can call it that, land on the moon. That's what's really gonna ignite a lunar economy and allow us to build up the infrastructure we need to go even further.

John Koetsier: So you're gonna start working on this rover for Artemis. Does it look like we've found some good spots to land and perhaps establish a base that might have access to local materials that we can use, water ice or other things?

Jack Fischer: Absolutely. So NASA's been working on this for a while. They're called the Artemis landing sites. We have all the data from everything, all the way back from Apollo to current day with our Lunar Reconnaissance Orbiter, and all of the data from our mission goes into analyzing and understanding exactly where to land.

In fact, on our second mission we are landing at one of those sites. It is within the same picture frame as the South Pole, 89 and a half degrees, so right next to Shackleton crater, if you've ever heard of that. And it is a site where we are fairly confident we'll have water ice in the soil.

We're bringing a drill with us that drills down about a meter into the soil. It has a little mass spectrometer that looks for volatiles in what we dig up. We also have two rovers on the vehicle and a rocket-powered drone. How cool is that? It's called the hopper, and it's gonna fly off the main lander and then hop into a permanently shadowed crater to also look for water ice.

So that's another technology that's looking for water ice and testing out comm systems. Like I said, what we're trying to do is obliterate those technical barriers that really keep space confined to just NASA science, and open it up to business, where we can close business cases, and not just for exploration.

You accelerate exploration by having not just NASA investing in this stuff. I'm wearing a Columbia sweatshirt, which is a partnership we had on the first vehicle. If you saw a picture of Odie … actually, we got 'em right here, I'll just show you. Mini Odie.

John Koetsier: Nice.

Jack Fischer: On the front of Odie we had what's called the Columbia Omni-Heat material.

Originally they were just gonna put a sticker on, right? It was just an advertising thing. And then we started talking to them and realizing the brilliance of their company and their R&D, and they're like, hey, what if you put this on the lander? And we tested it. It worked great.

We talked to 'em about how we do insulation on spacesuits and everything else. They took some of that back. They made the lightest, most efficient jacket in history. It's a fantastic partnership, and what it does is accelerate technology, because now you have a clothing company partnered with a space company.

We're getting rid of the lines, that space is hard and only space companies can do it, and we're taking the very best of technology across the board. Our LTV team is the same way.

We have Roush and Michelin and AVL, all these completely non-traditional companies for working on a space system, bringing the very best of what they've done in the tire industry, in Formula One racing, and they're bringing it to bear together, so that military, civil, and commercial are all investing and synergizing together at the same time so that we can just move faster. It's an exciting time.

John Koetsier: Pretty cool. I think you’re also bringing the internet to the moon in a sense, or at least the communications array. Is that correct?

Jack Fischer: You bet. So on that second mission we're partnered with Nokia. We obviously have our comm system that talks to the Earth, but this Nokia 4G LTE system is gonna be on the rovers, on the hopper, and on the lander itself, to create this network and really demonstrate how we might have a more capable communications system on the surface that then talks to relay satellites and back home.

John Koetsier: Nice, nice. I think there's an interplanetary IP address system as well, so I'm assuming you'll tie into something like that. I believe it's been used on Mars.

Jack Fischer: You bet. Yeah, we use a lot of those those same protocols. NASA has conglomerated all of those protocols into a document or a standard called LunaNet. And that is really guiding another one of the contracts that we’re going after. It’s called Near Space Network Services. It is the commercial augmentation for the Deep Space Network.

And its goal is really to standardize, just like you said: have the standard that everybody uses, so that we can put up a system and it's not the only system you can use, and they all interact and get better together.

John Koetsier: I am so excited for us to be able to create some sort of permanent or semi-permanent spot on the moon. There's so much that we can do there. Radio telescopes on the far side, right? Free from the interference of all the EM that we put out. Telescopes in essentially perfect conditions, on a stable platform.

There's so much that we can do, so much that we can learn. One of the mission goals is also to prepare for travel to Mars. Talk about that …

Jack Fischer: You bet. So a lot of our technology is scalable or applicable to a Mars mission.

You might have heard in recent weeks that NASA put out a call to industry: hey, what could you do to accelerate the Mars sample return program? It was a bit behind and over budget. Can we do something akin to the Commercial Lunar Payload Services, or CLPS, firm-fixed-price model, where industry takes some of the risk, to accelerate, move quicker, and get the same thing done without going bankrupt? We did put in a proposal for that as well, with several partners in the industry, to get out to Mars, return those samples with an ascent vehicle, dock, and come home. We're building out reentry technology even for lunar sample return.

So that’s part of the overall infrastructure.

There's just a lot of overlap in these technologies, so the faster we go and the more we can develop and refine, the faster we're gonna be able to get to Mars efficiently. And so we're really excited about that one as well.

John Koetsier: One of the things you mentioned there is docking: meeting up, docking, getting the samples, returning. That's honestly one of the mysteries of space to me. You mentioned being on the ISS: if you put a tool somewhere and don't impart any force to it, it'll stay there.

Not imparting any force has gotta be incredibly challenging, right? In space, any force means any wiggle, any roll you pick up has to be counteracted with just the right amount of force. I know we have computers for that these days, but they had to do it so much more manually in the Apollo days.

And it's shocking that that's possible.

Jack Fischer: And we keep getting better and better at it, right? The space station, God bless that arm, it has had a long and arduous life of capturing all those vehicles. But they are largely cooperative: for everything that docks to the station, we know its rates relative to the station, so it doesn't move much. And the station is so big that it really doesn't move either; it's more about active control on the vehicle, and then turning it off at the right time so it's not fighting the arm or the station docking.

But nowadays we do have systems out there that can go after what's called a non-cooperative target: a vehicle that might be in distress, or is rotating because it's out of control.

John Koetsier: Interstellar!

Jack Fischer: There you go. Yeah. Think of it like getting ready to jump into a jump rope on the playground, timing it just right to go in at that moment when you have a window.

We do have those capabilities. We actually have several patents in this company for non-cooperative docking and capture, and through our work with NASA's Goddard Space Flight Center in Maryland we have a contract that is working on an orbital docking and rendezvous demonstration.

So that is something that is getting better by the day. Lots of companies are working on it, and humanity as a whole is a whole lot better at it than we used to be.

John Koetsier: So, I do want to talk to you about some of the future stuff and where you see us going in space and what it’ll take to get there.

Before we do that, you are the recipient of an award very recently. Talk about that …

Jack Fischer: You bet. So Annie and the whole Unified team, just an incredible group of people, started Unified quite a while ago. Annie Burillo grew up in the music industry, working with rock stars, and she met a few astronauts along the way and saw a kind of kindred spirit of the artist in them: not afraid to really put yourself out there and try to make an impact, whether it's with music or with science. A similar underlying goal. So she started this company to do PR for astronauts, in addition to her work in the music industry, and it's grown; she represents, heck, most of the astronauts I know, including me.

And she has now put it together: okay, let's start combining the two in a way where we recognize great achievements over the year, or over a lifetime in the case of Alan Shepard, and showcase them and ignite people in a different way. Just like when you go to space, how you tell that story is gonna resonate with certain people. I am not a rock star; you don't wanna hear me sing, so I am not going to inspire anybody with music. But if my story can be translated through a rock-star band like O.A.R., they can capture people's imagination in a different way, and we get the same thing done.

So I'm excited about it. I think it's a great idea that Annie had. I'm completely honored to be a part of it, and just as a representative of Intuitive Machines and our banner year this year. So: honored, and excited at the future of a really different way to get people inspired.

John Koetsier: Very cool. If I recall, it was a platinum award for your achievements. And as a very humble person, I'm guessing you spent most of the time not talking about your award, but talking about the people who set it up, and …

Jack Fischer: Of course.

John Koetsier: Good. No worries. It's all good. Awesome. Let's turn here, then.

What will it take to make humanity multi-planetary?

Jack Fischer: Yeah. So there are a lot of people working on that, right? Elon's battle cry is Occupy Mars.

We need better propulsion, we need better radiation protection, we need good power sources. All of those building blocks are being investigated today. I think the true key is: where can we synergize industries?

So it's not just NASA; NASA can't afford to do this. There is no country on the planet that can afford to kick down the door of the heavens by itself. There's also no one industry that can do it by itself. But like I mentioned with Columbia or any of our partners, when you overlap those, you accelerate the tempo of change. That's when you can start knocking those bowling pins down one at a time, and we can have better radiation protection. Because if we're taking six months to get to Mars, well, the sun is a finicky fellow, and when he passes gas, it's got some oomph behind it.

John Koetsier: We’ve seen it just recently.

Jack Fischer: Good Lord, those lights. Where are you located? Are you in Vancouver? How were the lights? Were they awesome?

John Koetsier: They were incredible. They were amazing; they filled the entire sky. I got video and photographs as well. The camera captures the colors better than the eye does, but still …

Jack Fischer: That's so cool. We didn't really know that was coming before it came, and six months is too long. So how do we protect the crew? How do we protect the electronics? How do we get there faster? We have an old, I won't say old, an experienced astronaut named Franklin Chang-Díaz. He's developing an engine called VASIMR.

It gets you to Mars in 38 days. Incredibly efficient. As we develop those technologies, it becomes more and more possible for us to land. Starship is a pivotal piece of that architecture, to be able to land in an atmosphere and launch again, with methalox, cryogenic engines that can be fueled by fuel you make when you're there, just like Odie. That's a key enabler.

So we just need to keep taking bites out of the elephant until it's gone, and the faster we can take those bites, the faster we can become multiplanetary.

John Koetsier: Wonderful. Give us 30 seconds of context on this engine you mentioned that can get us to Mars in 38 days. That sounds amazing. It sounds like you're under full thrust the whole time: half the way there you're thrusting away, and then you're reversing into Mars.

How the heck does that work?

Jack Fischer: So, I'm not a VASIMR expert, and that's one of many engines, but the basic premise is that it's a very efficient engine. There's a thing called specific impulse; it's like gas mileage and horsepower for your car all wrapped up into one. A really good chemical engine, like the space shuttle main engine, is around 400 seconds.

Who cares exactly what that means; it gives you an idea. This type of engine is in the 30,000 range, so incredibly efficient. But unlike the shuttle engine, which is a whole lot of high-velocity, high-impact thrust that moves you quickly, this is a little butterfly fart.

John Koetsier: Slow and steady.

Jack Fischer: It's a butterfly fart that is very small, but it's continuous, and there's nothing to slow you down. There's no drag, so all you're doing is accelerating constantly. And like you said, about halfway there you need to turn around and slow down. So it's kinda looking at propulsion in a different way.

And we have a lot of technologies out there that are trying to crack that nut. And it will be a big part of how we really unlock Mars.
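The specific-impulse trade Jack describes maps onto the Tsiolkovsky rocket equation. Here is a rough Python sketch; the 10 km/s delta-v is purely an illustrative figure, not any real mission's budget, and the 30,000-second Isp is the range Jack cites rather than a published VASIMR spec:

```python
import math

G0 = 9.80665  # m/s^2, standard gravity; converts Isp in seconds to exhaust velocity

def propellant_mass_fraction(delta_v_m_s: float, isp_s: float) -> float:
    """Fraction of initial vehicle mass that must be propellant (Tsiolkovsky)."""
    exhaust_velocity = isp_s * G0
    return 1.0 - math.exp(-delta_v_m_s / exhaust_velocity)

DELTA_V = 10_000.0  # m/s, illustrative budget for a fast transit

# Chemical engine in the shuttle-main-engine class (~400-450 s Isp)
print(round(propellant_mass_fraction(DELTA_V, 450), 2))     # ≈ 0.9, i.e. ~90% propellant

# High-Isp electric propulsion in the range Jack cites (~30,000 s)
print(round(propellant_mass_fraction(DELTA_V, 30_000), 3))  # ≈ 0.033, i.e. ~3% propellant
```

The catch, as Jack says, is thrust: the high-Isp engine pushes with a "butterfly fart," so it has to burn continuously for weeks rather than in short, powerful bursts.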

John Koetsier: That'll be super huge. I'm gonna do some research on that. I'm assuming it's essentially a particle accelerator or something like that, because of course, the faster you can toss stuff behind you, the more impulse you get. So if you don't wanna carry a lot of reaction mass, you've gotta accelerate the mass you do throw behind you crazy fast, and hopefully not aim the exhaust at the Earth or some other place.

Very cool. Let's say they come to you tomorrow: they've perfected it in secret, it launches, and here's a ticket, a golden ticket. Are you taking it?

Jack Fischer: Yeah. My back and neck are not the best. But if I get …

John Koetsier: Slow thrust. Slow acceleration.

Jack Fischer: if I get

John Koetsier: Mars gravity. No worries.

Jack Fischer: To Mars? Oh yeah, I'm taking it. My life has been about trying to find great teams to be a part of so that I can make an impact, and what an incredible honor it would be to be able to go on something like a Mars mission.

I got a lot of friends in the astronaut office who could do a better job, so I would step aside and let them go: younger, stronger, faster, smarter. And I'll do my best here with this amazing team to create the technologies and infrastructure that are needed to help get them there and home safely.

John Koetsier: Wonderful. Wonderful. Final question: favorite space movie. You already mentioned Interstellar. I just recently re-watched The Martian, one of my favorites. What's yours?

Jack Fischer: It's tough, but I gotta go with Spaceballs. Love me some ludicrous speed. The number of great quotes in that movie is just … in fact, our …

John Koetsier: You picked the least realistic one.

Jack Fischer: The Martian was great; I thought they really captured everything. And who doesn't love space pirates? But even our gym in this building is called Ludicrous Speed, and we have kind of a going-plaid mural on the wall. You gotta have fun, and it ties back into that rock-star thing with the award. You have to have fun. This stuff is awesome. We're going to fricking space. You get to see the Earth as a little marble, you get to see the moon, you get to touch and be a part of things that are hard to even imagine.

You can't act like it's just another day. NASA TV, God bless them, but sometimes they make a launch sound boring. That's unacceptable, because it's cool. Rockets are cool. Fire's cool. Going to other planets is super cool. So have some fun with it. And if you ever get down to Houston, we'll walk you around.

You'll see our Kessel Run hallway. The crane is called Chewie; the little hoist is Han. In the other assembly room, Darth Vader is the crane, and we've got Luke and Leia: Luke is the hoist, Leia's the 10-ton crane, 'cause she's the more powerful Jedi. We line up the lights so they come across. We have fun, because this is awesome and we wanna remember that every day when we walk through.

So yeah: Spaceballs. Ludicrous speed. I'm going plaid.

John Koetsier: Love it. If I come down, just encase me in carbonite, so I'll …

Jack Fischer: We don’t have that machine done yet.

John Koetsier: Not yet? Next year, then. Thank you so much, Jack. This has been a ton of fun.

Jack Fischer: All righty. Well thanks a lot.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Water from air for 10 cents a gallon: atmospheric water generators


If our planet turns to Dune, how will you survive? Atmospheric water generators might help …

We’re already experiencing water crises in thousands of global cities, places like Flint Michigan, or even native reserves in Canada. How can you ensure you’ll get good, healthy, clean drinking water?

In this episode of TechFirst, we explore the critical global challenge of securing clean water, a resource essential yet scarce for over 2 billion people worldwide.

(Subscribe to my YouTube channel)

In this episode I interview Brian Sheng, CEO and co-founder of Aquaria, a company at the forefront of developing atmospheric water generators capable of extracting clean water from the air, ranging from 24 to 2,600 gallons daily. He discusses the technology’s workings, its potential to address water scarcity effectively, especially in areas with limited access to clean water, and the company’s vision for scaling up to support communities and potentially entire cities with sustainable, clean water obtained directly from the atmosphere.

00:00 Atmospheric Water Generator
01:19 The Global Water Crisis: Challenges and Solutions
05:13 How Atmospheric Water Generators Work: Technology Explained
11:00 The Future of Water: Scaling Up and Making it Affordable
12:48 Comparing Water Solutions: Desalination and Atmospheric Water Generators
21:45 The Vision for a Sustainable Water Future

Subscribe to the audio podcast

Find the podcasting platform you prefer:

 

Transcript: water from air … atmospheric water generators

Note: this is AI-generated and is unlikely to be perfect

John Koetsier: If you lived in the world of Dune, what would your most valuable resource be? Hello and welcome to TechFirst. My name is John Koetsier. I’m currently reading Earth by David Brin. He’s an award-winning science fiction author and NASA consultant who happened to edit my own science fiction novel, No Other Gods.

Brin's Earth is widely regarded as one of the most predictive sci-fi novels of all time. One of the things he foreshadows as becoming very common is conflict over water.

Does that need to be the case today? We’re chatting with someone whose inventions could solve that.

They're called atmospheric water generators. They pull anywhere from 24 to 2,600 gallons of clean water from the atmosphere in a single day. He's the co-founder and CEO of Aquaria. His name is Brian Sheng. Welcome, Brian.

Brian Sheng: Thank you, John. Really excited to be here.

John Koetsier: I was thinking as I was doing that intro: have you seen Dune? Have you seen the wind traps there?

Brian Sheng: I was gonna say, I haven't seen Dune yet. It's definitely on my radar. My co-founder Eric has already seen it and raved about it. Lots of sci-fi movie references, and also Star Wars. So it's on my list; I just haven't gotten there yet.

John Koetsier: Do not wait too long. It is amazing. Start with the first one, of course, but you'll absolutely love it. But let's talk about water. How many people struggle to get clean water today?

Brian Sheng: Yeah, so I think everyone knows that we have a global challenge with water. I'm usually based in California, and just in California alone we actually have a million people who don't have access to clean water. In the United States, up to 40 million people have disruptions to clean water, as clean water is defined by the EPA. So this is a big problem even in the United States.

John Koetsier: Well, and it's not just California, but that's shocking.

Of course, because the US is one of the richest nations in the world and California one of the richer states in it, yet you also have Flint, Michigan, where they still have trouble getting people clean water that doesn't have contaminants like lead in it. I think I saw a global figure somewhere on the order of 2 billion people who do not have safe, constant access to clean water.

Brian Sheng: Yep, 2 billion people. A lot of it is focused and concentrated in the global south and in developing nations. It's a huge problem, and it's actually getting worse.

John Koetsier: Talk about that getting worse. Why is it getting worse? We have situations like in the American West and Midwest, where we've been tapping the water table for decades, for farming and irrigation.

That water started sinking down, and there are all kinds of issues with that, not the least of which is the lack of water itself, but also land subsidence as the water table goes down. But you also obviously have places, countries in the global south, as you're saying, where drought comes and you may not have a ready source of water. What's driving all this?

Brian Sheng: Yeah. When we think about our water crisis, and what I mean by worsening, there are three main categories. And it's actually reflected, if you look at the WHO or UN global perennial challenges, water is always a top-three problem.

So if you look at some of the reasons, and this is not in any order of priority: number one, a lot of people take water for granted. It's not something we think about as a resource that is being depleted right in front of our eyes. That's especially the case in the United States, where we think of water as, to a certain extent, a free public good that the government provides. So water is extremely undervalued.

Number two, we have climate change happening. We talk a lot about EVs and decarbonization, but actually one of the main ways we suffer from climate change is depleted clean water. Ocean water rises; what does that do? It comes inland and creates disruptions to our clean water. When we have droughts and more natural disasters, same thing: our clean water gets depleted.

And then number three: it just takes a lot of time and capital to build new water solutions. We have to build infrastructure, we have to build pipes, we have to approve large budgets. And all of this is happening with ever higher frequency and severity. So that's how I think about our global water challenge.

John Koetsier: And not only is it expensive and hard to build those massive projects to distribute water everywhere, they're super inefficient, especially old systems. New York and other eastern United States cities estimate they lose 15% to 30% of their water through leaks.

It's crazy. It's absolutely insane. Okay, let's talk about your atmospheric water generators. What are they? How do they work?

Brian Sheng: Sure. I wonder, John, do you know how much water is in the air around us?

John Koetsier: I don't know. Obviously it has something to do with humidity, a hundred percent humidity. I don't know how much water that would be per, let's say, cubic meter. I assume it's a lot, but you'll have to share it with me. I don't know.

Brian Sheng: Yeah, so think back to high school or middle school earth science. The humidity in the air is part of the Earth's natural hydrologic cycle, and we have roughly 37.8 million billion gallons of water in the air at any given time, or for our global audience, about 13,000 cubic kilometers.

John Koetsier: 13,000 cubic kilometers of water suspended in our atmosphere at any given time. Wow.

Brian Sheng: Correct. And that gets replenished roughly every week or so, completely recycled through the hydrologic cycle. So that's an incredibly large amount of fresh water that we can use, sustainable and renewable; it's about 200 times what humans actually consume per year.

And so what our atmospheric water generators do: we've created a way to capture this water in the air and turn it into water that we can consume, at large scale and in an energy-efficient way. How do we do that? Technically speaking, it's heat exchange.

An easy way to describe it: on a hot summer day, we take a can of cold drink out of the refrigerator. When a cold surface touches a hot parcel of air, water drips down the side of the can. That's heat exchange causing condensation. So what our technology does: we've created special materials, as well as heat exchange systems, so that we can capture a large amount of air and then squeeze the water out of it by efficiently dissipating heat, and then capture and purify that water for consumption.

John Koetsier: Super interesting. I guess the principle there is that cold air can't hold as much moisture as hot air.

Brian Sheng: Yeah, exactly. For every parcel of air, with its temperature and humidity content, there's going to be a colder temperature, the dew point, at which atmospheric humidity becomes liquid water. So we've created a system where we can efficiently cool a parcel of air through our heat exchange systems, and when that parcel of air cools to the dew point, the water drips down into our collection, and then we're able to purify it. So that's exactly right.

John Koetsier: Super cool. How much power does this take?

Brian Sheng: Yeah, so it depends on how much water you need and the size of the actual product itself. But roughly speaking, right now we're creating water at somewhere between five and 15 cents per gallon.

John Koetsier: Five to 15. Wow. Okay, that's pretty good. That's not bad at all. Now, of course, that depends on how expensive your power is, but at that price point, with, let's say, North American grid prices, you could easily run that off-grid on solar power, correct?

Brian Sheng: Yeah, absolutely. That's actually one of our main customer categories: a lot of people are looking at securing their properties, their ranch, their home, making sure they have a water supply no matter what happens to their groundwater or the grid. Now they have another source: water from air. And these customers typically also have solar and batteries, and that's a great addition for us, because then we're essentially getting energy from the sun and water from the air.

John Koetsier: Yes. Free water. Talk about the scope here, because I saw on your website that you've got atmospheric water generators that can pull 24 gallons of clean water a day, and ones that can pull 2,600. That's a significant difference, obviously, but it sounds like you've got things happening at an industrial scale, not just a personal or home scale.

Is that correct?

Brian Sheng: Yeah, absolutely. Aquaria is a technology company, and our mission is to safeguard clean water access by harvesting the air. We imagine that in the future, as we continue to improve our technology and as more people adopt it, we can actually create water for entire cities and countries, all from the air.

So from the beginning we've designed our products to be linkable. That's why there's such a large range, from something as small as 24 gallons, which is pretty great for a home or a school, all the way up to multi-thousand gallons: all we have to do is link the units together.

So, John, maybe you're telling me you want a bigger one for your ranch; okay, we can do that. Or we have a bigger project. For example, right now Mexico City is having a huge water crisis. We can do larger projects like that as well.

John Koetsier: Interesting. Yeah, Mexico City, I believe, is also subsiding, right?

It's essentially sinking very slowly as they deplete their water table, some centimeters or inches per year, which we're also seeing in the American Midwest and West. Okay, how much is this? Is it super expensive?

Brian Sheng: So our smallest unit, the 24-gallon one that I mentioned, starts at $3,000. So you can start supplying your own water from there.

John Koetsier: Gotcha.

Brian Sheng: Is that expensive? What do you think?

John Koetsier: Like everything, it's relative. If you're in some dry, dusty interior nation in Africa and you don't have access to water, and maybe your income is some thousand dollars US equivalent a year, it's probably pretty expensive. But in the West, maybe not so much.

Brian Sheng: Yeah, right now we're mostly shipping within North America, but John, I absolutely agree with you. My goal is that in the next five years I can take the hydro, our smallest product, down at least 2x, so down to $1,500, and I hope to one day sell it for under a thousand dollars. Right now we're just getting started, but as we continue to build the company and manufacture more of these products, we're definitely going to offer more affordable pricing.

John Koetsier: It is interesting, because if I think about, let's say, an off-grid home or a ranch or just somewhere quite distant, $3,000, assuming the technology is reliable and lasts a long time, is actually not that much. Because what's the cost of laying all the pipe you'd need to bring water in? What's the cost of a treatment facility for groundwater, or of digging a well? Those things can be quite expensive.

Super interesting. How does this fit into the global scale of problems? Just five days ago the Financial Times had a story on desalination and how, when it's powered by solar, it's actually getting really efficient, really cheap. I mentioned before our recording the Hassyan project in Dubai: they're thinking they'll provide desalinated water at 37 cents per cubic meter, whereas drinking water in London is a pound, which is, I guess, about a buck 25, a buck 50 per cubic meter. How does that fit in? I guess that's really massive, city scale, but you're hoping to get to that level as well, aren't you?

Brian Sheng: I'm a big fan of desalination. We have a global-scale water problem, and you actually brought up a great point: solar-powered desalination, and advancements like concentrated-solar desalination and advanced membranes, are bringing more supply online, especially in regions that have entire countrywide water problems, like the Middle East, right?

We're not talking about city scale or state level; entire countries have this problem. The way I think about it is that, at a global scale, we need different options. It's not a one-size-fits-all solution. When we think about energy, we have all kinds of options depending on what your particular geography holds for you.

Maybe you're closer to the coast and you can have wind. Maybe you're on an island with great sunlight and you have solar. Maybe you have geothermal, or any number of options to make sure you have an efficient, well-priced source of energy. That's the problem with water: we don't have those options.

We really have relatively limited options. Desalination, over the past 30 years, has finally gotten to the pricing and technological scale that, John, you just talked about. Aquaria is aiming to get there as well. Right now we're definitely more expensive than desalination at a per-liter or per-gallon level, but we're able to provide an option that is way more affordable on a total-project basis, and we can provide that water immediately.

So I see us right now complementing desalination. We can put our machines alongside desalination projects, and our customers actually do.
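The per-gallon versus per-project framing in this exchange is easy to sanity-check. A quick sketch using only the figures the speakers quote (37 cents per cubic meter for Hassyan-class desalination, five to 15 cents per gallon for atmospheric generation); these are the interview's numbers, not independently verified data:

```python
GALLONS_PER_CUBIC_METER = 264.172

def usd_per_gallon(usd_per_m3: float) -> float:
    """Convert a per-cubic-meter water price to a per-gallon price."""
    return usd_per_m3 / GALLONS_PER_CUBIC_METER

desal = usd_per_gallon(0.37)    # Hassyan-class solar desalination, $0.37 per m^3
awg_low, awg_high = 0.05, 0.15  # atmospheric water generation, $ per gallon

print(f"desal: ${desal:.4f}/gallon")  # ≈ $0.0014, about 0.14 cents per gallon
print(f"ratio: {awg_low / desal:.0f}x to {awg_high / desal:.0f}x")  # ≈ 36x to 107x
```

Which is consistent with Brian's framing: desalination wins on marginal cost at utility scale, while the generators win on total project cost and speed for a single home or community.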

John Koetsier: It is super interesting that you mention different energy sources, because I think I saw just the other day that Scotland is about 97% powered by renewable energy. They have a lot of wind, apparently. Really interesting. Costa Rica, same. You use what you have, right? Look at Iceland: so much geothermal there. It's always been sad for me, when I've gone to places like Bermuda or the Bahamas, to see how little solar they're using when that's such a rich resource there.

But hopefully over time that'll come as well. It makes sense in the water area to have those options too, right? Like you say: spin it up, scale it for one house, a few houses, a ranch, a property, that sort of thing, instantly. I almost wonder if you're not more cost-effective than desalination, because of course you have the $3,000 cost, but if that machine lasts for 10 years or something like that, and you said five to 15 cents a gallon, that's getting close, isn't it?

Brian Sheng: I think it is the most affordable option for people, for communities, for developers of communities. For anyone who isn't a country or a utility, we are the better option. With desalination, you can't really build a plant yourself; you need a large counterpart.

Imagine, John, you said: I have a place, let me go build a desalination plant, and think what it's gonna cost me just to build the pipes. Right?

John Koetsier: Who would I even have to talk to to enable that? Right.

Brian Sheng: I think that's what we're able to bring about: we are the fastest and most affordable option for building your own water security and supply yourself. And the scope and size of what we can provide matches the amount of water necessary for the needs of communities.

John Koetsier: It is actually interesting to me to remember that even areas that have a lot of water don't necessarily have the right kind of water.

For instance, I live in British Columbia, Canada. About three years ago we had one of those atmospheric rivers, like the one I think San Diego had last year, and it just rained incessantly and hard for three to five days, something like that. We had tons of flooding, water coming over the border from the States as well, from Sumas, Washington State, and so on.

The reality was we had insane amounts of water, but because of the amount, and because it got into the farms and manure and industrial places, it was not clean water. It was not potable water. In fact, there were concerns it was going to leave some of the farmland damaged because of what it might have picked up in various areas. So even if you have lots of water, it may not be water that you can drink, or easily drink, at least.

Brian Sheng: Absolutely. John, did you see the recent flooding in Dubai? It drowned the whole city.

John Koetsier: I saw that on Reddit. I'm in a natural disasters subreddit or something like that, and it was the weirdest thing.

In fact, not just Dubai; several places around the Middle East saw these sudden flash floods, huge amounts of water flowing over the desert and into the cities. It's crazy.

Brian Sheng: Clean drinking water, I think, is super important as we think about different sources and use cases.

Actually, clean drinking water, I think, is one of the best use cases of atmospheric water generators, simply from the fact that, while the air also has pollution, the air carries way less pollution. It's way easier to purify than bodies of water. When you have a body of water, anything could be in there.

You have to figure out what's in there and the carrying capacity of that. But for the air, we can purify it and make sure that it's the highest-quality drinking water, WHO and EPA compliant, and then offer it to you immediately. So drinking water is, I think, one of the best use cases for atmospheric water.

You mentioned like Bermuda, Bahamas, Costa Rica. Actually all of these areas, countries, any islands, any coastal areas, they all have clean drinking water problems. They all have salt water contamination problems. But you know what? They do have hot and humid air. Yes.

John Koetsier: A lot of it. Yes. And a lot of sun.

Absolutely. A lot of sun. I was gonna ask, because you mentioned Dubai, and that brought to mind places in the States that are often atmospherically very dry, like Nevada, that sort of thing. What's the efficiency of your units? Let's say you have one in the Pacific Northwest, say coastal Oregon, right?

High-humidity air. And then you've got one in, let's say, Las Vegas, or somewhere in Nevada, maybe close down toward the border with Mexico. There's gotta be less water in the air there, correct?

Brian Sheng: There’s less water in the desert for sure.

John Koetsier: How does your unit function then?

Does that make it slower at producing water? Does it take more energy?

Brian Sheng: That's exactly right: it's slower in production. I think the best way to think about it is like creating solar energy on a cloudy day. So instead of maybe 24 gallons, we might only produce 10 gallons or seven gallons of water in a 20 to 30% humidity area.

Typically, the way we like to think about it is that we would love for you to have at least 30% humidity for us to make you a very meaningful amount of water. We still work under 30%, but it's going to be like creating solar energy on a cloudy day, so we'll be much slower in making that water.
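Brian's cloudy-day analogy implies output scales roughly with relative humidity. As a purely illustrative sketch, here's a hypothetical linear model interpolated from the figures he mentions (24 gallons at high humidity, roughly 7 to 10 gallons at 20 to 30%); it is not the company's actual performance curve, which also depends on temperature, airflow, and dew point:

```python
def estimated_daily_gallons(relative_humidity: float, rated_gallons: float = 24.0) -> float:
    """Rough interpolation between the figures mentioned in the interview.

    Hypothetical model only: assumes full rated output at >= 70% relative
    humidity, scaling down linearly to about a quarter of rated output near 20%.
    """
    if relative_humidity >= 0.70:
        return rated_gallons
    if relative_humidity <= 0.20:
        return rated_gallons * 0.25  # ~6 gallons, near the interview's low end
    # Linear ramp between 20% and 70% humidity
    fraction = 0.25 + 0.75 * (relative_humidity - 0.20) / 0.50
    return rated_gallons * fraction

# Humid coastal air vs. a dry desert day
print(round(estimated_daily_gallons(0.80)))  # 24 gallons
print(round(estimated_daily_gallons(0.30)))  # 10 gallons
```

Under this toy model, a unit in 30% humidity makes less than half the water of one in humid coastal air, matching Brian's "solar on a cloudy day" framing.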

John Koetsier: Okay. Very cool. So if I ever build an off-grid home I know where to go. I know where to get some water. That’s very cool. Talk about the future. You’ve talked a little bit about what you’re doing today. You talked about maybe going to city scale. What’s that look like? What are you dreaming about five years from now for the products that you might be releasing?

Brian Sheng: I dream of a future where we can provide a water alternative without pipes or infrastructure. And we are already bringing that into play today by demonstrating that we can build entire communities without water from the ground. So at Aquaria, last year we built the first homes in the world where the entire water supply for the homes actually just comes from the air, powered by solar and batteries.

And so we've already showcased that. Right now we've signed a contract and we're starting to build communities in Hawaii where the entire community's water would come from the air. So the way I think about it is: I need to continually scale up the projects that we can showcase to the public, to say, look, this technology is here today, and we are building larger and larger communities, cities, and projects.

And over the next five years, my goal is to showcase that we can actually build out entire city infrastructure with water from the air, starting one community at a time today.

John Koetsier: Very cool. Thank you so much for your time, Brian.

Brian Sheng: Thank you so much, John.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Robots in agtech: what’s next?

robots in agtech

What’s next for robots in agtech? Everyone who’s paying attention knows agtech is massive right now … there’s so much innovation from laser-equipped weed killing machines that keep chemicals off our food and out of our land, to drones, autonomous tractors, AI-powered seeding and watering plans, and much more.

What about robotics?

In this episode of TechFirst, I chat about the future of robotics in agtech with Kevin Dowling, managing director at Robotics Factory in Pittsburgh, Pennsylvania. We discuss the evolution of robotics in farming, from traditional methods to the modern use of drones, autonomous tractors, and AI-driven systems, and Kevin highlights the diversity of robotic forms in agriculture, including wheeled, legged, flying, and swimming robots.

And — of course — we chat about humanoid robots with legs. Are they useful on farms?

Or … when will they be?

Kevin predicts a shift towards smaller, more affordable robots for smaller farms and emphasizes the importance of technology in reducing environmental impacts, enhancing food production efficiency, and potentially democratizing farming.

00:00 Exploring the Future of Robotics and Agtech
00:46 The Evolution and Future of Robotics in Agriculture
03:39 The Role of Humanoid Robots in Farming
07:38 Challenges and Opportunities in Ag Tech Startups
10:05 Innovative Startups Shaping the Future of Agriculture
12:49 The Complex Environment of Farm Robotics
15:30 The Potential of Indoor and Vertical Farming
23:30 Envisioning the Future of Farming with Robotics

(Subscribe to my YouTube channel)

Subscribe to the audio podcast

Find the podcasting platform you prefer:

 

Robots in agtech: get the full transcript

John Koetsier: What is the future of robots in agtech?

Hello and welcome to TechFirst. My name is John Koetsier. Everyone who's paying attention knows that agtech is massive right now. There's so much innovation, from laser-equipped weed-killing machines that keep chemicals off our food and out of our land, to drones, autonomous tractors, AI-powered seeding and watering plans, all that and much more.

What about robotics? To chat, we have Kevin Dowling. He's the managing director at Robotics Factory in Pittsburgh, Pennsylvania. He's a former scientist at the Carnegie Mellon Robotics Institute, a serial entrepreneur, and a former VP of Innovation for Philips. Welcome, Kevin.

Kevin Dowling: Thank you. It’s good to be here. Thank you very much, John.

John Koetsier: It's good to have you here, and maybe let's just dive right into it. The future of robotics in agtech: does it come on wheels? Legs? Does it fly? Does it swim? All of the above?

Kevin Dowling: It's definitely all of the above, and I think it's ironic in that, of course, it started on legs, with people and then animals.

Yes. And then it went to wheels and tracks. So almost every form of locomotion there is, maybe not serpentine, but certainly a lot of the others. So I think the future will be all of the above.

John Koetsier: Yeah, that reminds me, I think the word robot comes from Czech, and it means worker or something like that, right?

Yes. So we have been our own robots, sometimes compelled in history, sometimes not; fortunately not so much anymore today. But where do you see the most interesting innovation?

Because I've seen some cool stuff on wheels, and that works in the farming environment where you've got massive fields, hundreds, thousands, tens of thousands of acres sometimes, and big machines that can do big jobs. There's a role there, but there are also probably some roles that something that looks more like a human could do as well.

Kevin Dowling: Potentially. I think most of it, because of the size and scale of the fields and also the equipment, either requires that you carry a significant payload or that you gather a significant payload: tilling, seeding, harvesting. And the machine, like the human, would touch the field anywhere from half a dozen to 30 times a year across the growing season.

So you need machines that can handle that. Today, for the more automated aspects of farming, it's tended to be row crops, the wheat, corn, soybeans, and so forth, where you have very large acreage of thousands of acres in a given farm field, especially in the Midwest here in the US.

And so that tends to drive the approach toward very large machines that can cover 12 or more rows at a time. And that happens all the way through harvest, when you have combines that do everything at the end. But your fundamental question is: what is the configuration, the morphology, of these farming machines?

And it can be everything. Whatever's best suited for that particular job will likely win out.

John Koetsier: Yeah, it's also interesting because, as you say, the big machines, the massive tractors and combines, make sense given the scope and scale of North American farming. Maybe a little bit different in Europe, maybe a little bit different in Asia and other parts of the world as well.

But certainly in the States and in Canada, there are massive farms. It's interesting when we see this explosion of humanoid robots: do you see that coming into farming in any way, shape, or form?

Kevin Dowling: I think the anthropomorphic nature of humanoid robots is attractive because it's what we are, what we resemble, what we look like.

But how do you reconcile that with the job to be done? So, for example, I'll push back on that a little bit: if we were designing a dishwashing robot today, would it be humanoid, and why? We already have this very efficient box that we put dishes into, and that's a robot for cleaning dishes.

So why would we bother with the humanoid form if we don't need to? Airplanes don't necessarily mimic what birds do. So although the humanoid is trending right now in terms of robotics investments and so forth, it's not clear to me, as it wasn't with some of the walking dogs and other things you see all over YouTube, for example. They are compelling because they look like things that we know.

And you mentioned the laser weed-killing robot, which looks like something from space. Hopefully it won't scale so big that it becomes a threat, but it's fascinating to see. I'm not convinced that a humanoid robot will necessarily be what you need in order to do tilling and seeding and harvesting and so forth. I will say, though, that for smaller farms that cannot afford these large autonomous vehicles, the big autonomous tractors, there's a real opportunity, because while the US tends toward much larger farms overall, you mentioned other places around the world where these types of machines could be used on smaller farms, if they were affordable.

Among the big ag equipment makers, John Deere leads the pack, but you also have AGCO and CNH, and then rapidly rising behind them are companies like Kubota and Mahindra. Mahindra is now the world's largest tractor manufacturer by unit numbers, not by revenue.

So they're shipping more units to more farmers than anyone in the world. Where does that come from? What kind of technology can they add? What simple improvements? The computers and the sensors that are added to these machines are dropping in cost all the time. And I'd add one thing: you pointed out that robot is derived from the Czech word for serf or worker.

The computer is the same way. The first computers were human; they were typically women in World War II who were actually calculating things all the time. That gave rise to the name, which was then applied to the non-human, right? The vacuum-tube-based large computers of the past, and then it became the canonical name for calculating devices and obviously the computers we use today.

But there’s a lot of room for all kinds of machines.

John Koetsier: Absolutely. Absolutely. And that's always the tough point, right? If you're going to build a robot for washing dishes, hey, the dishwasher is pretty fricking good, right?

On the other hand, it can't load itself. It can't empty itself.

Now, you could build in things for that, and the question is: do you build 25 different types of robots that can each do one task, or do you build one that can do what we do and make it work? There's no right or wrong answer to that, but certainly in farming, with the big machines that you need, you can totally see autonomous tractors being a thing, and they're here already to some extent, right?

I guess one of the challenges, and you're in the business of finding, supporting, and growing startups in agtech and related fields, is that this is not cheap. Hardware startups are notoriously challenging; I'm more used to software startups personally.

But when you get into agtech and you look at the size and scale of these machines and the jobs they need to do, the amount of funding required must be significant.

Kevin Dowling: It is. I think any hardware-based company has a greater challenge than simply writing good code to solve a particular problem.

But I'm not sure that the scale of the device matters so much as the ability to do the controls and everything. In other words, if you're making something new, let's say, going back to the Nest or an Apple Watch, those are very complex, small mechanisms; the scale of the device matters less than the complexity of the thing you're trying to put together.

So a tractor is typically an engine today. Sometimes you'll find an electric motor and a drive, and those are well known, well understood, and made in the millions today across the planet. But to your point, testing requires larger areas; you need real estate to do that. Sometimes it has to be seasonal, so you can only do it at certain times of the year, which only compounds the challenges.

But there are many small companies focused on ag and doing it today, whether it's mowing or harvesting or forestry, all the way through row crops and specialty crops. Livestock not so much, but there is increasing automation even in livestock, beginning with milking parlors for dairy and other things that people want to do.

I've talked to investors who want to invest in things that are heavily manual today, and the most heavily manual work today still requires people to process animals into food. So if you're looking at a protein source like that, there are many concerns around being able to manipulate and use the tools they use in order to turn animals into food.

John Koetsier: What are some of the more interesting startups that you’re seeing that are coming across your desk and maybe that you’re investing in?

Kevin Dowling: There are about half a dozen here in Pittsburgh right now. One is focused on vineyards: monitoring vineyards as well as doing harvesting and so forth.

One of the most interesting things I have seen is that if you think about grapes, for example, normally if you just let them grow they would become these scraggly sort of bushes, but they're essentially trained to wrap around a structure and form a particular shape. They're now doing that with apples.

They're doing that with other crops too, so that you can change what is in the field, potentially genetically; but the other way is to simply provide a structure that the plants can grow in and around, making them more accessible for automation and easier to work with. It also makes it easier for the human: to reach the top of a cherry tree or an apple tree typically requires a specialized ladder or other equipment.

Or sometimes you can find these really interesting videos where they have shakers that come up, grab the trunk, and shake the entire tree, and that's how they harvest. So all of that, I believe, could be automated, whether you use entrainment of a plant or you create plants that are genetically easier to harvest, or produce an apple, for example, that is tougher and able to be shipped and stored and then eaten eventually.

So there are companies like that. We have graduates of the robotics programs here at Carnegie Mellon, for example, in Pittsburgh, who are off doing these types of things elsewhere. You mentioned earlier the idea of indoor agriculture: more than a greenhouse, actually set up to be fully automated. We have a company here, Four Growers, which is harvesting tomatoes in greenhouses. And they're doing this not only here in the US but in the Netherlands, which is a hotspot of indoor agriculture as well as automated agriculture.

And they're harvesting tomatoes rapidly.

John Koetsier: That's really interesting, because a lot of tech comes out of the Netherlands; they've vastly expanded their agriculture on a tiny land footprint, so they've gotten very sophisticated at getting high yields and high automation, because their costs are high and everything like that.

Obviously in flowers, but also in agriculture. That's right. Super interesting that an American company is taking its tech to the Netherlands.

Farms are a crazy challenging environment to put robotics into, right? They're complex. They're changing, they're moving. It's not laid out like a factory. It's not relatively stable. Ground conditions change. There's weather, there's animals, right? And every type of farm is different as well. A wheat farm is very different from a vineyard, as you mentioned, or a beef or dairy concern, that sort of thing.

That leads to significant challenges, I'm assuming.

Kevin Dowling: It does. It's not entirely unstructured; it's not like walking into the woods, for example, where you have no control of anything. But even though it's slightly structured, it's still challenging. It's soft materials that you have to traverse. Also, some big concerns right now in agriculture have to do with soil compaction.

So how do you prevent that from happening, time after time, as these machines roam these fields? Well, you can make them bigger, have softer tires, have more tires, more legs perhaps, depending on the approach. And the farmer is assuming most, in some cases I think people would argue all, of the risk, because they face those challenges. It's outdoors, and you're subject to all of the vagaries of the outdoors in general. So they get to harvest time and they produce the crop, but if they had bad weather, if they had floods, if they had drought, they're the ones assuming the risks.

So how do we help the farmer mitigate that risk? One way, as I mentioned, is moving indoors. But who knows? There can be other potential ways. There are crops, for example, that can be grown entirely indoors, like mushrooms. And you and others might be amused by the fact that mushrooms are a crop, but they are, and they harvest every several weeks.

Mushrooms are actually Pennsylvania's number one crop by dollar value: six to seven hundred million dollars a year to the state of Pennsylvania. And if Pennsylvania were a country, it would be number four in the world in mushroom production. So it's very likely the mushrooms you have on your pizza or in your salad are grown right here in Pennsylvania.

But they have problems harvesting because of labor issues. So how do you solve those?

John Koetsier: Yeah. Yeah. It's interesting, the discussion we had at the top: what is a robot? What do you classify as a robot, and what's the most efficient sort of machine, whether you call it a robot or not, to operate on a farm?

We talked about bringing farms indoors. Vertical farms are a thing, right? And there's huge potential there for year-round production, for production where it's needed, right in the city, let's say, or nearby, or production for far northern climates, the Alaskas of the world, where farming is not really a thing. It's a thing, but it's not really a thing. But fresh fruit and vegetables are required. In that case, the farm itself is a giant robot.

That's an interesting world to consider too.

Kevin Dowling: Yeah, it is. We have a rather broad definition of what robotics is, and I think of it as the cycle of sensing, planning, and actuation.

So you take that loop, that triumvirate of three aspects: one is capturing the data; then planning based on the data you've acquired, turning it into information; and then turning that into, potentially, movement. It doesn't have to be movement; a thermostat in your home also follows that same cycle these days.

It's not necessarily a robot, but to your point, John, about the idea of a home: a famous architect, Le Corbusier, said a home is a machine for living. And I think we're living inside of robots, because our climate is controlled: our temperature, humidity, lighting, many other things about our home can be controlled.

That's an environment we've created for ourselves that is robotic in nature. And agriculture, I think, could be exactly the same type of cycle, the same type of thing, where we're trying to control things in order to produce food or do some task, as in a factory, for example.

All of those loops together. So I think your analogy of the farm being a robot is exactly spot on.
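Kevin's sense-plan-act cycle is the standard control loop in robotics, and it can be sketched in a few lines. This is a generic illustration, not any particular farm system; the soil-moisture sensor and irrigation actuator here are hypothetical placeholders:

```python
def sense() -> float:
    """Capture data: read soil moisture as a fraction from 0.0 (dry) to 1.0 (saturated).
    A real robot would query actual hardware; this placeholder returns a fixed reading."""
    return 0.22

def plan(moisture: float, threshold: float = 0.30) -> str:
    """Turn data into information and a decision: irrigate when the soil is too dry."""
    return "irrigate" if moisture < threshold else "idle"

def act(action: str) -> None:
    """Actuation: in a real system this would open a valve or drive a motor."""
    print(f"actuator: {action}")

# One pass through the sense-plan-act loop; a real controller repeats this continuously.
act(plan(sense()))  # prints "actuator: irrigate" for the dry reading above
```

The same loop fits Kevin's thermostat example (sense temperature, plan a setpoint correction, actuate the furnace) and, scaled up, the farm-as-robot idea.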

John Koetsier: Is there progress, or are there even projects, to create something like a FarmOS? Because let's say we're five or ten years in the future, and you're starting to see a significant surge of automation, robots, and semi-autonomous machinery on farms. There's a command-and-control issue, or at least a work-together issue, right?

Do these things know about each other? Do some of them cooperate: some harvesting, while others gather and transport? Do they talk, and how does that communication work? Is there a project underway to manage that, or is it basically using some of the operating systems that people are developing for coordinating robotics in general?

Kevin Dowling: I think most of it is exactly your last point: using real-time operating systems, for example, or using ROS, the Robot Operating System, to control the robot itself. But in terms of complex networks of robots, there's not a whole lot of work going on in that area. I do think it will be absolutely necessary.

I believe that there will be a trend to move away from the very, very large machines, the half-million-dollar, 12-row kinds of systems, toward larger numbers of small machines, which are more easily maintainable; you can pull one out if you need to. And having a larger number of smaller machines means that they do have to communicate; they do have to know where each other is.

And that will require, as you pointed out, the ability to communicate amongst themselves, like a mesh network in a way, but mobile, and to coordinate and synchronize their operations. Think about a field today, even just the operation of harvesting: let's say you're moving through a field of corn.

The US has around 90 million acres of corn, so there's a lot of it. As you're harvesting and threshing, as you're generating the grain and the cobs and so forth that eventually become silage or other things, there's a coordination of vehicles: one drives alongside to capture the flood of granular material coming from another.

So there's already a little bit of that work, but typically it's ones and twos, not tens and twenties, of these machines working together. Sometimes you'll see these wonderful, beautiful pictures, probably from Kansas or further north, of row after row of combines working together.

Yes. And then they migrate. Those aren't robots yet, but they essentially migrate northward with the harvest. That's a specialty operation that allows farmers to harvest a great deal more quickly, because no one can really afford that many machines. As they move north toward Canada, it allows them to harvest at a rate far faster than anyone could in history.

John Koetsier: Interesting. Super interesting: robotic migration, here we come. Very cool. Do you do any investing in, I wanna say, farming-adjacent areas? Because there's a lot of work there. Let's say we don't wanna have however many tens of millions of cows there are in the United States and Canada, and we don't want to kill that many for food.

And so we want to grow beef and have the Impossible Burger and other stuff like that, lab-grown meats and other things. Do you do any investing in those areas?

Kevin Dowling: We haven't. I think that's probably a little beyond our direct knowledge and capability. I have nothing against it; I think creating protein sources like meats chemically is certainly a valid way to reduce some of the impacts that the full animals have.

But I'm not especially going after one versus the other. One way to think of this is: there's the farm, and that's really what we focused on in our recent agtech summit, which is specialty crops and row crops. We did ignore, for the purpose of this particular summit, livestock: pigs, chickens, cattle, and so forth. And at the beginning of the chain there's also seed, fertilizer, chemicals, and other things. There's one company here called Rowbot, spelled R-O-W-B-O-T. They're focused on small robots going between rows of corn, which happen to be planted perfectly on center, 30 inches apart.

So they designed a robot to do exactly that and inject nitrogen directly at the roots of the plants. That cut the use of nitrogen, which has other impacts on the environment, of course: it ends up in the Mississippi River and makes its way to the Gulf of Mexico and so forth.

But it cut the use of nitrogen fertilizer by half.

John Koetsier: Nice.

Kevin Dowling: Not only a great cost savings, but it reduces that impact on the environment substantially. And they've been at it for quite a while; they're working hundreds of acres in Iowa right now.

John Koetsier: Absolutely huge. If you look at the amounts that farmers spend on fertilizer, it blows your mind.

There are huge costs here. And like you say, nobody wants all that flowing into the lake and everything, right? Okay, let's look forward: what does farming look like in, let's say, 10 years, as we see more purpose-built robotics and autonomous machinery for the farm? What does farming start to look like?

Kevin Dowling: I think it's certainly an extension, an extrapolation, of what we see today. There'll be more machines in more fields doing these types of operations. At the beginning, of course, it will be the much larger farms that can afford this machinery, or perhaps there'll be some interesting ways to financially incentivize farmers to lease these things: capital leases, operating leases, all kinds of financial structures to further it. And then, once that same technology gets cheaper and becomes solid and robust in use,

you'll start to see it on smaller farms: if you have not 20,000 acres but 2,000 acres, you can also make use of it. I don't know how much smaller it will get. If you have 20 acres or two acres of farm, will it have autonomous vehicles? Yes, eventually. You asked about the next five to ten years;

I'm not sure that it will by then, but I hope that it will. So those are the field operations, and then there could be things related to fertilizing and inspection: drones and other things to monitor crops, look for disease, and basically remove diseased plants from the field.

And all of that could be done in an automated fashion. What's interesting to me too is that a lot of what we learned was post-farm: when you go from harvest into the processing chain, there's a lot of interesting economics and potential automation. For example, we've seen across the country, I'm sure where you are and certainly where we are in Pittsburgh,

the rise of many small breweries. But very often they can't scale well, because at small volumes you have the machinery and the existing industrial tools to satisfy a brewery's output, which might be, who knows, a thousand cases a year, something like that. When you start to scale beyond that, there's actually a gap in the market between what a Nestlé or other very large food company can do and what a small, growing

brewery or bakery can do. So there's an interesting gap in there. We've been talking to contract manufacturers who work in that space about how to improve that. And there may be room for another summit related to post-farm activities and food processing.

John Koetsier: My hope is also that, whether it's 5, 10, or 15 years, as this technology becomes more available to farmers, we also have a better product.

Yes, we also have better food. We use less fertilizer because, as you mentioned, it's injected right where it needs to be, not just carpet-bombed over the farm. We use less pesticide because we have machines that remove weeds either mechanically or with lasers, stuff that we're seeing right now but that isn't necessarily widespread.

And maybe we have less waste because we're smarter about how we harvest. Maybe we lose less to crop failure because, as you said, there's constant monitoring, perhaps drone-based or something like that, so that we can identify: oh, there's something attacking the corn here.

Let's treat that right away so it won't spread. So my hope is that farming becomes safer for humans, but also better for the environment and better for people.

Kevin Dowling: Yes, I certainly have the same hopes and dreams around that. Organic farming, for example:

there may be a lot of ways machines can help grow, no pun intended, that part of the market as well, where we could have healthier food. And here's another aspect of it: everybody sees farmers' markets. You might see the honor-system stands where you basically put money in a jar and take a dozen ears of corn.

There might be ways to do monitoring of that to reduce the need for people to be there all the time. And then you have companies like one here in Pittsburgh called Harvie, spelled H-A-R-V-I-E, where they're going directly from the farmer to the customers. They work with something like 300 to 400 different farms.

So that's another link in the chain of being able to provide good food for people on a timely basis, where they're essentially paying the farmer directly. There might be a middleman in there doing some of the distribution, but, and I hesitate to use this term since it's used so often, it's the democratization of farming: how to get it so that the farmer can benefit more directly, without a lot of processing and other things going on between them and the consumer.

It's really opened my eyes to see all of this. It's amazing.

John Koetsier: It is amazing. And like you, I also hope that these innovations will come down to the smaller farms because we’ve seen some interesting, amazing, incredible things in small farms right now. Even as small as one acre on a city lot or something like that.

Where, with intensive farming techniques, there's really a lot that can be done. I want to thank you for taking this time. I do appreciate it.

Kevin Dowling: You’re welcome. Thank you, John.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

World’s first nano lunar rover


In January of this year, Peregrine Mission One launched with at least 22 payloads. One was intended to be the first American-made rover to land on the moon since the Apollo days: 1972. It happened to be the world's first nano lunar rover.

It was called Iris, and it was also the first lunar rover constructed with carbon fiber. It was designed and built by students at Carnegie Mellon University.

Today, we’re going to chat with them …

(consider subscribing on YouTube?)

Despite a mission failure due to the lander experiencing a propellant leak and missing its lunar target, the Iris team achieved significant milestones. They successfully demonstrated that student-made rovers could survive space conditions, including the Van Allen Belt’s radiation, and maintain communication and functions in space.

This project, despite its setbacks, marks a significant achievement in democratizing space exploration and contributes to the broader vision of establishing moon bases and Mars bases as stepping stones for further space exploration.

Subscribe to the audio podcast

 

Transcript: the world’s first nano lunar rover

Carmyn Talento: I think it’s just gonna be just incredible. I see moon bases in the near future. I see Mars bases in the near future, and I like to think of those as like your gas stations.

In some cases they might be, if we are able to develop that technology, if we're able to use the resources on these other planetary bodies. And once you get to that little step away from earth, whether that's the moon, whether that's Mars, you start opening up more accessible areas of our solar system and of space …

John Koetsier: What can you learn from building the first nano lunar rover? Hello and welcome to TechFirst. My name is John Koetsier.

In January of this year, Peregrine Mission One launched with something like 22 different payloads. One of those was intended to be the first American-made rover to land on the moon since the Apollo days: 1972.

It was called Iris. This was the first lunar rover constructed with carbon fiber. It was designed and built by students at Carnegie Mellon University, and today we’re gonna chat with them.

Welcome, Harsh, Kevin, and Carmyn. Super pumped to have you guys. Maybe let's start here, and I don't even know who to talk to about this, but maybe Kevin, we'll give it to you first.

What was Iris intended to do?

Kevin Fang: Our goal was to launch what would be the smallest and lightest lunar rover to ever go to space, which is something that at the time we did succeed in doing. And our goal was to first and foremost show that students are capable of achieving such a task, and do so at what is a relatively low budget compared with typical space missions.

And our goal is to pave the way for future space programs at Carnegie Mellon University, as well as other universities and organizations, and just take these little steps towards democratizing space for all.

John Koetsier: Cool. Carmyn, this is not easy stuff. I know because my son is an engineer and he participated in some of these types of challenges, including going to Shanghai for some robotics challenges when he was in university.

And it's easy to make something that sucks. It's hard to make something that is good, especially for a harsh environment like the moon.

Carmyn Talento: For sure. Yeah. And that is definitely a challenge that we had to face. We did a lot of different testing on our rover to make sure that it could deal with the difficult nature of space. There are a few tests that have to happen; all rovers, and everything that goes to space, have to pass these tests.

That could be like a vibration test, 'cause when the powerful engines on a rocket go off, you have to be able to survive that. And that's true of anything that has to go to space. And then being a lunar rover, we had to survive … we're also the first lunar rover to be dropped as our deployment from the lander, as opposed to rolled out or lowered down.

So that was some testing that we had to do, and we succeeded in doing that here on earth. We unfortunately weren't able to test our deployment mechanism in space, at least, but we were able to survive the launch. Our systems survived some of the harshest places in space, at least between here and the moon, including the Van Allen belt.

And a lot of the things that we used in our rover, a lot of our electrical components and stuff, are not technically space grade. They're not like industry standard. They're all custom and just normal stuff. And it survived and it did well.

John Koetsier: Super interesting. And maybe, Harsh, we'll bring you in here, because yours was the first lunar rover designed with carbon fiber.

That's really interesting because … I know a little bit about carbon. A little bit, right? But I did talk to Nathan Myhrvold, who's a former CTO of Microsoft, and he talked about building cameras to photograph snowflakes. And the key challenge with that is that the camera has to be very cold, because guess what?

You’ll evaporate the snowflake. You’ll melt it very quickly.

And so he ended up building much of his assembly out of carbon fiber because it doesn’t contract or expand in cold or heat.

Was that one thought with using carbon fiber, as well as the weight, for you guys?

Harshvardhan Chunawala: Our mission was to be ultra low cost, and carbon fiber is a strong material which could withstand all of the conditions that you just spoke about. That was one of our reasons to make it out of carbon fiber.

So, yeah, it also has low weight, which ultimately reduces the overall cost of the launch. That was one of our aims as well, to be an ultra-low-cost rover, so that decision supported our overall goal.

John Koetsier: Cool. What did you guys do? What were your roles? Kevin, maybe we’ll start with you and we’ll just go around in the same order.

Kevin Fang: Yeah, sounds great. I had a variety of roles on the project. When I first came in, I was helping out with mission operations. At the time we were building out our mission control center, which is actually located in our Gates School of Computer Science.

And so we were remodeling that and really adding in the screens, the computers, and figuring out how we'd integrate that with the mission simulations that we'd been running, in order to make sure that for the actual mission we'd have all the procedures and designs in place to run a very smooth mission, akin to what you would see in the movies for, like, NASA's control center and the sort.

And after that I helped out with the media team as well, representation, and when I was there I helped with designing and maintaining our Shopify store. And later on, during the mission itself, I was in Florida with the mission team as we went through what would be a very long series of events, and we'll definitely get into that.

John Koetsier: It’s amazing how many jobs there are. There’s a lot of different pieces.

Carmyn, what did you do?

Carmyn Talento: Yeah, in a similar vein, I wore multiple hats in this organization as well, where I, similar to Kevin, worked as a mission operator, joining in a big wave of recruitment for the project once we were getting close to a potential launch date and a mission date after that.

So I joined as an operator and, again, developing kind of what operations would look like on the surface of the moon, because we have this incredible work of engineering, but we need a team to run it. So joining, working that out, coming up with some constraints that we might put into place, some pipelines of communication within the team, what kind of roles do we wanna have, et cetera.

And then on top of that, I worked my way up to representation team lead. So working on public relations things … information … being the face of the project to the general public, with Kevin and me working together: social media, interviews, talking to reporters after our mission happened.

John Koetsier: Cool. For the first part of what you were saying, the picture that was coming to my mind … one of my favorite movies is The Martian. And it's that team, how they had to pull that old team out: how do we talk to this thing?

How do we communicate, what process, what commands do we send? All that stuff. So, very cool. I just noticed you and Kevin are wearing a particular kind of jacket with some patches and stuff like that. It looks pretty cool. It looks pretty official. I don't know if you're in a motorcycle gang, or maybe it's the mission jacket.

Is that correct, Carmyn?

Carmyn Talento: Yes, it is correct. All of our mission operators who were there for all of our training and the mission were able to get an official operator jacket here.

John Koetsier: Sweet, sweet. Harsh, what was your role? What did you do?

Harshvardhan Chunawala: Sure. So I was an early contributor, along with Kevin and Carmyn, for the Carnegie Mellon mission control. And then I joined with the team for our launch and was one of the mission operators. I was also the practicum leader for space mission engineering.

John Koetsier: Let’s go back in time and let’s assume that it’s gonna work. And it’s successful, and it lands, and it’s on the moon … and it’s traveling around: your rover’s actually functioning. You’re an operator, you’re a controller, you’re telling it where to go. You’re checking out craters and boulders and all that stuff.

What were you hoping to learn? What were you hoping to accomplish?

Carmyn Talento: Yeah, great question. So some of the scientific goals of Iris as a rover were to just test the terramechanics of how tiny nano rovers work on the moon, because one of the next big steps in space travel is: how can we use the moon?

How can we use that to our advantage, where NASA's gonna be sending more astronauts there? There's talk of setting up bases there. We need small little devices there that can move around, maybe future transportation of small goods, et cetera. So this is like the first big step in that: testing how small-scale we can get and still be effective.

So Iris was only about 20 centimeters in length: a very small, shoebox-sized little rover. And the wheels are made of carbon fiber as well, for that flexibility aspect. So it was really testing how these, what we call nano rovers, work with the moon.

Yeah, that was the biggest thing, really: testing that, capturing images on top of that, and just kinda learning our environment. What's the threshold of a rover this size? 'Cause that's a lot of what we found in our research as well: some of the sizes of obstacles that we were considering … like, nobody had ever had to think about those before, because the next smallest lunar rovers were like SUV-sized.

So for our little thing, that would be a huge obstacle. A smaller rock that a car-sized rover would have to deal with is nothing, where for our rover, it would be a pretty big deal.

So, yeah, just testing how all that works, capturing some images on our way, and ultimately also proving the feasibility of having students be able to do this. And as Kevin was saying earlier, open this up to more than just government and longstanding professionals.

John Koetsier: Super interesting. And what's really cool to me is, like, before we landed on the moon, there was a lot of talk about what we were gonna find in terms of how we were going to get around. Some people thought, hey, there's a huge layer of dust there, perhaps, especially in the mare areas, right?

The lunar seas, quote unquote, and you could just sink, right? And others were, no, you won't. But with such a tiny machine, you wonder.

Did you ever consider something like grasshopper mode? Like, extend the wheels really quickly and flick yourself over a big obstacle?

You can have a lot of fun with different options, right? Especially when you’re building small, it’s not necessarily insanely expensive to try stuff.

Carmyn Talento: I know earlier in CMU robotics there are some rovers that have like a spider effect, where they have legs that will pick up and move. But we really wanted to just test out a more standard method of transportation, just wheels. It's keep it simple, stupid … hey, they've been around a long time.

John Koetsier: They have a long track record. They’re pretty effective.

So Kevin, let's bring you back in here. Carmyn talked about what you hoped to accomplish. Obviously the mission failed, not through any fault of your own, but the craft that you were a passenger on had a fault and did not ultimately make it to the moon.

They had to crash-land it in an ocean somewhere. But your mission was not entirely a failure. It's not like you got no results. What results did you actually accomplish, even though you didn't land on the moon?

Kevin Fang: The amount of data that we collected scientifically was very significant, probably more than most people would expect given what ended up happening to the rover itself.

I'd say the largest technological achievement I think we accomplished was, firstly, just the fact that the primary systems during launch and transit survived extreme temperatures and high radiation, specifically through the Van Allen belt, and we verified that all the systems were operational. And for a nano rover of this size, that is quite an accomplishment.

And these were done in space. During transit to the moon, we were actually able to connect and have two-way communication between the lander and rover, where we ran test commands and telemetry and transmitted a large amount of data, including downlinking a large file that included the names of everyone who was involved in Iris over the years.

So that was one of the largest achievements we managed to have. I myself did manage to send some messages of my own to the rover in space and receive them back, which was very cool.

John Koetsier: So that's interesting, actually. More than that, it's crazy, because often we think, okay, you're sending up a craft, it's got an autonomous mode, a semi-autonomous mode, and then there's a command mode where you tell it to do stuff.

And often when you send a craft, then you hear: okay, NASA's lander or craft landed on Mars, now they're sending commands, now it's responding, like it unfolds its antenna or its dish or something like that, and starts sending. But you were able to do that while your craft was packed away.

It was in the box, it was inside the spacecraft, and you were still able to communicate with it, and it was able to communicate back.

Kevin Fang: That's correct. Our rover, fortunately … we didn't intend for it to be a satellite, so it doesn't have, for example, significant solar panels that need to be unfolded in order to receive power.

Actually, the modules inside of the lander which required power were connected with power during the flight, in order to make sure that the telemetry could be verified and that the batteries would remain charged up until the moment of landing. And so we were fortunate in that case to actually have access to the rover and be able to run these commands during flight, once we learned that we maybe didn't need the battery power for the mission itself on the moon.

Cool. Cool.

Carmyn Talento: And actually, something unique about our rover and the way we were connected to our lander: we actually were not inside of it. We were attached to the outer part of the lander.

John Koetsier: So you were a hitchhiker!

Carmyn Talento: Yeah, exactly. And I think that adds another layer of just excellence to what we achieved: that we, or our rover, more like, was exposed to the pure vacuum of space during transit, and our systems survived all of that. And we were still able to send these massive files that Kevin had just mentioned. And so we weren't tucked away.

We were actually on the outside, which is why we would've had the drop deployment if we had made it to the moon and stuff. But yeah, very cool.

Harshvardhan Chunawala: The photos that Astrobotic, the lander company, which is also a CMU spin-out … they tweeted them, and we could see our rover's wheel in space along with stars in the background.

John Koetsier: How close did the craft get to the moon?

Did it circle the moon once or was it still orbiting the earth?

Harshvardhan Chunawala: So we reached the lunar distance, but because of the propellant leak from the lander, we missed the moon. So where we were supposed to reach the moon and enter orbit, we could not do that, but we certainly did reach the lunar distance. And at that point in time, we saw the dust that was collected. And since we missed the moon, we had to come back to earth. And while it was coming back, the photos were transmitted.

John Koetsier: So you were the Apollo 13 of lunar landers essentially? Return to sender.

Harshvardhan Chunawala: And it was also America's first commercial lunar payload services mission. And I think America's last lunar mission before that was Apollo 17.

John Koetsier: Yes, exactly. Kevin, talk about Astrobotic, which is the company that was running the Peregrine missions.

They are trying again. They realize that they failed. They're redeveloping, learning from their mistakes. They're trying again, I think in November of 2024. Are you guys on it? Do you have another prototype? Do you have a working copy of what you built? Are you just gonna, like, bolt this one on too?

Kevin Fang: As of currently, I don't believe we have plans to have a copy of Iris sent out in the future. Harsh, Carmyn, you can correct me if I'm wrong about this, but I don't believe we have a ticket on their 2024 mission currently, but I can definitely see that we would love to ride with them in the future.

And definitely in terms of future payloads, we actually do have several different lunar missions at different stages, from proposal to design to actually finished rovers that are just waiting for a ride to the moon.

John Koetsier: Okay. Okay. So there's no refund on that lunar ticket, huh? And there's no, like, okay, yeah, we'll replace it with their second one. That's a little unfortunate. Too bad.

Kevin Fang: There are no insurance companies that I would say would be willing to take on a risk this high.

So unfortunately, we don’t have a policy claim on it.

John Koetsier: Okay. Okay. But Carmyn, do you guys still have … you don't just make one, right? Did you make two of your rovers? I'm sure you had so many prototypes. Is there one that's almost exactly identical to what actually got sent up?

Do you have another one hanging around somewhere in your back pocket?

Carmyn Talento: So we don't have a space-grade one lying around. We have what we call our earth model, which is an earth replica meant for earth conditions, such as earth gravity, where the moon's gravity is one sixth of earth's. This one is meant to withstand normal gravity, what we would call normal gravity.

For example, one of the differences you would see is that the wheels are thicker on our earth model, whereas the wheels on our flight rover were quite literally paper thin, to the point where we wouldn't leave it sitting in earth gravity for too long for fear of the wheels collapsing. But it would work just fine up on the moon.

So we don't have anything that's identical, mostly for cost purposes. As well, students still have to go to classes and stuff, whereas an industry professional would have a full-time job dedicated to these kinds of projects. So, yeah, there's a few reasons for not having a carbon copy of our rover.

John Koetsier: I get it. It's understandable. It's interesting, though, 'cause you mentioned, like, the wheels on the earth version are tougher and stronger. You guys have seen some of the Mars rovers that NASA sent out there; you've seen their wheels. Obviously they put a 30-day intended lifespan on them.

Right. I'm sure they underestimate so they can beat it. Then some of them are going for a year and a half and their wheels are beat up, I'm talking holes in them all over the place, and you almost wonder what it would be like to have something up there that would be working and running for a long time.

Anyways, let's get on to the future. What are you guys doing now? Are you guys still associated with the project? What impact has it had on your careers, your school, and what you're doing? Maybe, Carmyn, let's start with you.

Carmyn Talento: Yeah, sure thing. Me personally, I am actually continuing on with one of the next rovers that Kevin had mentioned.

While we don't have an Iris 2.0, we do have some more rovers in development, including MoonRanger, which is a rover also developed entirely at CMU that is planned to look for lunar ice near the south pole of the moon. So that's a rover that's getting to the end of its development and hoping to hitch a ride in the near future, to get a real lunar mission on that one as well.

So currently I'm working on the mechanical team for one of our test rovers designed for earth conditions here. So that's what I'm currently working on.

John Koetsier: Awesome. Anybody listening who might be working on Artemis or something like that? We're looking to hitch a ride here.

Prior experience as a hitchhiker, very low mass. Not a problem. Very cool. Awesome stuff. Kevin, what are you working on?

Kevin Fang: Yeah. Well, as Carmyn mentioned, before I joined Iris I was also on the MoonRanger team for a little bit. And I can also agree on that point that we definitely are looking for a ride in the near future.

So if anyone just happens to have a rocket lying around that they haven't been using in a while, we'd love to pick one up. But, jokes aside, I really have found Iris to be the defining event of my time at university. Being able to work on a project of this scope and of this importance at such a relatively young point in my career, I feel, has really changed my mind on what kind of effect I can have in industry as well.

Although I don't believe I'm going directly into the space industry upon graduation, just knowing that I have something like this already on my resume, already on my profile, allows me to confidently say that anything on earth I could probably do pretty well, if we're already doing things that are going to the moon, right?

So how hard could it be?

John Koetsier: I’m signing you up for deep sea exploration development and you have to personally test it. It’s the only thing that’s tougher than space. I’m not sure you’ll like it.

Harsh, what are you working on right now?

Harshvardhan Chunawala: Right. So I transitioned from a student to an alumnus. So now, for the involvement I had with Iris and the next mission, I'm working with the practicum leaders and the director of the INI to open up opportunities for our future students.

John Koetsier: Cool. What's really cool about what you guys have told me is that you did something innovative. You created something, and you also used off-the-shelf components in a lot of cases. And what's interesting about that is, the surface of the moon: what percentage have we touched? It's gotta be a millionth of a percent or something like that, right?

If we can drop a bunch of these in a lot of different places, especially if we're looking for water … we need some combustibles. We need some oxygen. We need some hydrogen. We wanna fuel future rockets. We want to provide breathable air for Lunarians or whatever we're gonna call them, right?

People are gonna live on the surface. If we're just looking one place at a time, that'd take forever. If you can have these cheap things that can last four weeks, two weeks, whatever it might be, and you can drop a thousand of them … you increase your chances. So that's really cool. Carmyn, maybe let's end with you a little bit.

We're really in a golden age of space exploration. It's incredible, right? SpaceX is obviously leading that, but there are many other companies involved. We see the constellations of satellites that are just mindboggling. Ten years ago, even five years ago, if you'd said there are thousands of privately owned satellites in space …

People would laugh at you. They'd think you're insane. If you'd said, hey, there's a launch module that is bigger than anything that's ever flown before, it's designed to go to Mars, and they wanna build, like, a thousand of 'em and actually get immigrants to Mars and stuff like that, people would laugh at you as well.

There's a lot that is going to be possible, perhaps in the next three, five, ten years, something like that. How do you see the future of aerospace? You've worked on it a lot, you're still working on it. How do you see the future of development, and the future of humanity in space?

Carmyn Talento: Yeah, you're absolutely right. And I think, as you mentioned, with all of these projects ramping up, more and more people are gonna start to get involved. You're already seeing it. We're proof of concept of that, and I think it's just gonna be incredible. I see moon bases in the near future. I see Mars bases in the near future, and I like to think of those as, like, your gas stations. In some cases they might be, if we are able to develop that technology, if we're able to use the resources on these other planetary bodies. And once you get to that little step away from earth, whether that's the moon, whether that's Mars, you start opening up more accessible areas of our solar system and of space.

Whatever contribution we all can make to that, whether it's our first attempt at a nano rover, being able to be one of the earliest concepts of, as you say, a little device to maybe transport some goods or search for materials, whatever it is in that larger journey through space …

We're happy to be it, and we're gonna keep working towards it.

John Koetsier: And who’s signing up for a ticket to Mars?

Carmyn Talento: Maybe!


Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Billions of robots in 10 years


Billions of robots within a decade? A similar growth curve to smartphones?

We currently have about 30 million robots on the planet, not counting Roombas and similar small bots. RobotLab CEO Elad Inbar says that will hit BILLIONS with a B within 10 years.

(consider subscribing on YouTube?)

We discuss the exponential increase in commercial robots globally and predict billions of robots integrating into daily activities, from service industries to personal assistance, over the next decade. We chat about the evolution of robotics from novelty items to essential aspects of business operations, highlighting the role of robots in automating mundane tasks and their future potential in enhancing customer service and living standards.

Inbar also emphasizes the importance of service infrastructure to support the widespread adoption of robotics technology, drawing parallels with past technological advancements like mobile phones and cars. And we dive into specific applications of robots in restaurants, cleaning services, and healthcare, particularly for dementia patients, and the franchise model RobotLab is adopting to expand its reach and capacity to deliver robotics solutions.

Billions of robots: zoom to the section you’re most interested in

Zoom to the topics you’re most interested in …

  • 00:00 The Dawn of the Robot Decade: Envisioning a Future with Billions of Robots
  • 01:02 The Big Picture: Robots Transforming Business and Society
  • 07:10 The Current State of Robotics: From Hospitality to Manufacturing
  • 09:50 The Future of Work: Robots Filling the Gaps in the Workforce
  • 12:40 Enhancing Customer Service: How Robots are Changing the Game
  • 13:31 The Restaurant Revolution: Robots Taking Over Service Roles
  • 16:35 Exploring the Role of Robots in Restaurants
  • 16:47 Adapting Robots to Different Restaurant Environments
  • 18:18 Growth Areas Beyond Restaurants: Cleaning and Retail
  • 22:47 The Future of Customer-Facing Robots
  • 24:00 Robots in Assisted Living: A Compassionate Solution
  • 27:09 Unlocking the Potential of Robotics in Business

Subscribe to the audio podcast

 

And … a complete transcript of my chat with RobotLab CEO Elad Inbar


Elad Inbar: We are going to see billions, with a B, of robots out there.

So think about it this way: there are 8 billion people on this planet, right? And, you know, 30 million robots, as big as it is, it's not even half a percent of half a percent of the population. So we are, you know, in this exponential growth right now.

John Koetsier: Are robots soon to be part of our everyday life in hotels, restaurants, airports, everywhere? Hello and welcome to TechFirst. My name is John Koetsier. The first time I saw a robot in a hotel was in Japan. It was more than a decade ago; it might've been 15 years ago. It was mostly a toy, but it was cool.

The question is, is that changing, and where are robots close to actually delivering business value? Today we have Elad Inbar. He's the CEO of RobotLab. They've deployed tens of thousands of robots in businesses over the last 15 years. Welcome, Elad. How are you?

Elad Inbar: Yeah. Thank you, John. Thanks for having us. 

John Koetsier: Hey, super pumped to have this conversation.

Let’s start with a super big picture. We’re gonna get into the details, where you deliver robots, what they do, how much they cost, payback periods, all that stuff. But let’s start with the really, really big picture. It’s really early days in terms of robotics. There’s cool stuff, there’s amazing stuff. What kind of future do you envision when you look forward, maybe it’s 10 years, maybe it’s 20, I’m not sure, but what kind of future do you envision robots and humans working together?

Elad Inbar: Yeah, that’s a great question. And I always love to start from the big picture, because, especially in technology, it’s very hard to comprehend the pace of change. We all remember the first mobile phone, the Motorola with the shoulder strap, right?

I don’t know if we all remember that. Okay, okay, but we know of it. Anyway, it was launched roughly in the late eighties, and a decade later, by the late nineties, everyone already had a mobile phone. The same happened with the smartphone, right, from the time Android and the iPhone were launched.

You know, until everyone has a smartphone, it’s roughly a decade. Same with internet penetration, the same with laptops, and so on and so on. And I believe today we are roughly in the second or third year of the robot decade. So although we kind of don’t really see that everywhere,

I can tell you, based on the international robotics association, there are around 30 million commercial robots out there. I’m not talking about the Roombas and the residential robots. I’m talking about commercial cleaning, delivery, factory robots, those kinds of things. And 30 million robots

is a pretty large number by itself. But if you consider, again, the decade that is upon us, that we are going from basically zero products out there to almost a hundred percent penetration, okay, we are going to see billions, with a B, of robots out there.

So think about it this way: there are 8 billion people on this planet, right? And 30 million robots, as big as it is, it’s not even half a percent of half a percent of the population. So we are in this exponential growth right now, and that’s actually something that we see in our numbers.

That’s why we decided to grow in a certain way, using a franchise model; we can talk about that later. Because we get demand from so many different places that need to have robots today, right now, and we just can’t be everywhere all the time. So when we look at the horizon, what’s coming?

I can just refer to one person: Elon Musk. A couple of weeks ago there was an earnings call with the Tesla shareholders, and he said that they’re going to deploy 1 billion, with a B, 1 billion of their Optimus robot, the humanoid robot, by the end of 2030.

So this is something that just one company with one single robot is going to do: 1 billion products. Okay?

John Koetsier: Well, I’ll believe that when I see it. Elon Musk also told me that I could buy a Tesla and it would be making money self-driving as a taxi by about 2015 or something like that. So I’ll believe that when I see it.

But, and I agree, I agree, I have talked to investors behind Figure AI, which is a humanoid robot company that will compete with Optimus, and I’ve talked to the executives at several of these companies, Figure as well. And many of them are saying, hey, you know what, we expect the numbers of humanoid robots to be roughly equivalent to or surpass the human population at some point.

Exactly. So that point in general is not tremendously controversial, in the robotics community at least.

Elad Inbar: Exactly. And it’s important, I think, for people who are not from the industry to understand what’s coming. Because again, we can’t even comprehend that. Within a decade, and let’s say Elon Musk is wrong by five years, okay, we’re talking about, again, it’s not 50 years from now, a hundred years from now, it’s in our lifetime. We are going to have, as you said, more robots than people, right? And how our lives are going to change as a result of that, how the service industry, the entertainment industry, manufacturing and so on are going to change.

And this is something that, again, we have to take a step back and look at: this overarching trend that is happening in the industry.

John Koetsier: It raises so many questions. And people, let’s say normal people, people who aren’t in technology, right, are already dealing with so much future shock, right?

They’re already dealing with so much change. And change of this scale, change that impacts potentially their wages, how they make their livelihood, all that sort of thing, is going to be a tsunami. An absolute tsunami. Okay. We’re gonna get into where you’re delivering them now, what verticals, what you see growing, all that stuff.

But maybe let’s continue at the high level, ’cause it’s been interesting. I wanna talk about, as we enter that reality of robots entering the workforce, maybe it’s hospitality, maybe it’s delivery, and maybe eventually they’re humanoid and they’re actually doing some interesting things in a factory or in a warehouse or something like that,

how do you see robots and humans working together? I mean, there are lots of options, right? One human, one robot, take the job. One human, one robot, help with the job. Or a robot makes something, does something, so that you could do the job with significantly greater quality. Right? And there are many more scenarios as well.

What, how do you see it?

Elad Inbar: So let’s start with where we are today. Today, we at RobotLab, and other companies, manufacturers, and everyone, we are helping mainly business owners to automate tasks that people don’t want to do anymore. That’s where we are today, right?

I mean, people don’t want to clean floors, don’t want to vacuum corridors for eight hours a day. People don’t want to run dirty dishes back to the dishwashing station. These are tasks that still need to be done. We are hearing even from school administrators that the janitors are retiring.

And there is no new generation that wants to come and clean the school facilities every day, but they still need to clean the floors every single day. So this is where we are today. And my observation is that it’s been happening for many years, even before Covid, but Covid kind of accelerated it.

If you think about it, Covid happened almost four years ago, right? I mean, I believe in a week or two from now it is the four-year mark since we were all sent home to flatten the curve, right? It was mid-March, roughly. And what happened is, if you think about it, the people that were

at these entry-level jobs four years ago moved on, right? They are four or five years into their careers. They want higher-paying jobs. They have more responsibility. They want to build a family and start buying their home and so on. They don’t want to do the entry-level jobs anymore.

And the new generation that’s supposed to step into these entry-level jobs? They were in eighth grade when Covid hit. They never experienced childhood the way that we experienced it, right? They never worked at a Burger King flipping burgers or doing all these summer jobs.

Everything they know is around being online, being at home. So DoorDash and Uber Eats and trading crypto on Robinhood. This was their entire growing up, right, as high schoolers. So they finish high school,

and it’s, I’m ready to go into the workforce, okay, I’m going to my first job. And the business owner is just like, oh, that’s your first job? Awesome. $12 an hour, take this broomstick and start working over there. And they’re like, no, I’m not gonna do that. Right? I never did that.

Why would I need to do that? So this is the gap that happened. Those people moved on; these people are not willing to step in. Okay? So robots today need to help business owners operate where people don’t want to do these jobs anymore. And that’s where we are today. We have delivery robots that are helping in restaurants, in hotels, room service robots and so on.

We have cleaning robots that are designed to clean large spaces. Think about a ballroom in a hotel, all the corridors across multiple floors, warehouses and car dealerships and supermarkets and so on. And we have customer service robots, because that’s also one of the largest challenges right now: people don’t want to talk to people.

They don’t want to do customer service, because everyone has attitude, from both sides, customers and the representatives, right? And robots can actually offer a way of providing customer service, providing information, in a consistent way. So these are the robots that we see today.

We also have cooking robots and those kinds of things, because the same problem happened in the back of the house as well. Line cooks are in great shortage right now.

John Koetsier: I mean, I had a conversation, it’s gotta be half a year ago now. Actually, it was December 8th, 2020, I just looked it up on my own website here, with Miso Robotics’ founder, Buck Jordan. They make Flippy, the burger-flipping robot.

Yep. And he was saying that he was talking to restaurant owners and they had nobody who would come in and flip the burgers, flip the fries, all that stuff. White Castle, a big chain in the US for burgers and stuff, did a big pilot project around this. I didn’t hear how it went or whether they kept it or anything, but they had similar issues bringing in low-cost labor to make fast food.

Elad Inbar: Yes, exactly. So this is where robots are today, right? We have a need, businesses still need to operate, and people are not willing to do that. So robots step in to help the few that do show up to work. I mean, think about yourself.

Yeah. How many times have you sat in a restaurant, okay, and you asked for the check, right? You raise your hand like, hey, can I get the check? And the server is just like, just a second, I’m just running over there, I’ll be back in a second. And it’s been 5, 7, 10 minutes, and she keeps running back and forth, and she didn’t forget you, but she’s very, very busy.

And why is that? Because the few that do show up to work are overworked. They cannot provide the level of service that we as customers expect from them. And for every minute that passes while we wait for the check, their tip amount is shrinking, right? Because we give tips for the level of service, right?

And if we don’t get service, it gets smaller and smaller. And what we are seeing in every restaurant where we deployed a service robot, a delivery robot, is that the servers can stay in the dining room, and these robots save them all the running back and forth to the kitchen. So with the servers staying in the dining room, their tips actually increase.

Because if I run out of water, they refill it immediately. If I drop my fork, they bring me a new one immediately. If I need the check, they’ll be there in two minutes, right? And this kind of service is what we are looking for. We live in the first world, right? We have enough food at home.

We go to the restaurant for the service, right? Not for a real need for food. So if we get the service, we will recommend the place, we’ll come back. So it’s a positive cycle that just contributes to the business.

John Koetsier: So that’s a good segue, then, to talk about where you’re delivering robots the most right now.

What are the biggest sectors? And then we’ll talk about the growth rates as well.

Elad Inbar: So restaurants, by far, right now. All over the place. It doesn’t matter if it’s a full-service restaurant or quick-serve or whatnot. Every restaurant owner is struggling with the same thing:

finding people, and making people show up. Because even if you have people, they just don’t show up.

John Koetsier: Right. So the robot that you’re selling into a restaurant is what you mentioned, it’s delivery, right? It’s bringing stuff out. What does that look like? What’s the form factor? I’m guessing it’s on wheels. I’m guessing the top is a tray.

I’m guessing it has some kind of screen and maybe some kind of talking capability.

Elad Inbar: So RobotLab is a unique company in that we don’t manufacture the robots. We used to manufacture robots in the past, but we stopped doing that. Today we are partnering with the largest manufacturers from all around the world.

So, for example, we are the exclusive partner for LG robots across North and Latin America. So we bring their robots to the market, but we are partners with other manufacturers as well. Typically these robots will have shelves, a couple of wheels, and sensors so that they can navigate around obstacles, around people, and those kinds of things.

But not every robot is the same. There are different use cases which require different types of robots. I’ll give you a couple of examples. Let’s say you are a small bistro kind of restaurant, right? Everything is packed, people are sitting close to each other.

You’ll need a smaller robot with a smaller footprint that can navigate between all those chairs and so on. Compare that to, let’s say, an IHOP type of restaurant, where you have large parties, typically parents and grandparents and kids and everything, and large plates.

So a small robot that can navigate between chairs is not gonna fit enough plates to bring to the table in an IHOP-type restaurant. So not every robot can do the same thing. Another example: if you are a Vietnamese pho restaurant, you serve giant bowls of boiling liquid, right?

You need superb suspension. One of the manufacturers we work with actually has a drive pattern where the robot has great suspension as it drives, but also, as it slows down, it leans backward, making sure that liquid won’t spill out of the bowls, because they are focused on this type of food.

So not every robot is the best fit for every restaurant. And that’s what we do. Our team actually partners with the customers, and we ask them a lot of questions about their environment, their food, their kitchen, their floor, the width between the tables, and so on.

And based on all of that, we recommend the right product that will be successful in their environment. So these are, generally speaking, delivery robots.
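The matching Elad describes, surveying a restaurant’s layout and menu and then recommending a robot, can be sketched as a simple rule-based function. Everything here (the attribute names, thresholds, and robot labels) is invented for illustration; RobotLab’s actual survey questions and product catalog aren’t spelled out in this conversation.

```python
def recommend_robot(aisle_width_cm: int, avg_party_size: int, serves_soup: bool) -> str:
    """Toy sketch: map a restaurant's environment to a robot type.

    Rules mirror the examples in the interview: soup-heavy menus need
    suspension, tight bistros need a small footprint, large parties
    need tray capacity. Thresholds are illustrative assumptions.
    """
    if serves_soup:
        return "high-suspension delivery robot"  # e.g. pho: keep hot liquid from spilling
    if aisle_width_cm < 80:
        return "compact delivery robot"          # tight bistro seating
    if avg_party_size >= 6:
        return "large-tray delivery robot"       # IHOP-style large parties, big plates
    return "standard delivery robot"
```

A real recommendation would weigh many more attributes (floor surface, kitchen pass height, party mix), but the shape is the same: collect facts about the environment, then map them to a product profile.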

John Koetsier: How does that work at the table? Does the robot come up and say, hey, here’s your food, and you have to grab it off the robot?

Does a server come by and put it on your plate? How does that work?

Elad Inbar: That’s a great question, because again, not every restaurant is the same, right? If I go to a Chick-fil-A, for example, it’s a fast restaurant like McDonald’s, for those who don’t know. It’s okay if the robot comes next to me and I take the tray and put it on the table.

Actually, I don’t expect any more service than that, and it’s really cool that I was served by a robot, right? But if I go to a high-end steakhouse or a fine-dining restaurant, I don’t want to take the plate by myself. Because again, we are paying for the service.

I want the server to present the food to me and say, hey, Mr. John, here’s your steak, medium rare, the way that you asked for it. Would you like some extra salt and pepper? Right? So in that case, the robot needs to be in a holding position, okay, just like where they put the foldable tables with the giant trays that they bring.

The robot is in a holding position, the server takes the food from there and presents it to the guests. Because in that type of environment, it’s not acceptable for the guests to take the food. Yeah. So we are working with the restaurant owners. We have different standard operating procedures for different types of restaurants, just to match the level of service that they want to provide to their guests with the right technology. But either way,

okay, the servers stay with the guests, and that’s the purpose. Because the servers need to serve the guests, right? Not run back and forth. That can be automated.

John Koetsier: That makes a ton of sense. So restaurants is a big growth area. What other verticals are significant growth areas right now?

Elad Inbar: The other vertical where we see a lot of demand is cleaning.

The entire cleaning industry, it can be hotels, again, restaurants, assisted living facilities, is in very, very high demand right now. They all suffer from labor shortages, and they can’t clean enough of their facility because they don’t have the people, they don’t have the manpower to do that.

So, for example, some of our customers are assisted living facilities with 200 rooms, and they have one cleaning lady. One. Wow. How is that possible? It’s hard. It’s hard on her, because she can’t do everything. She can’t touch the entire facility every day, right?

And it affects their cleanliness score and all of that. So when we introduce robots... by the way, cleaning robots today are mainly for public spaces. We don’t yet have robots that can clean rooms or bathrooms and those kinds of things. But even helping with public spaces, all the corridors, the ballrooms, the dining area, reception area, and all of that,

we’re talking about hours per day. We’re talking about tens and hundreds of thousands of square feet at the end of the day. So this is a great help for them.

John Koetsier: I assume retail is another interesting area as well. I think it was Walmart that recently did a pilot project with a stock-sensing robot.

Yeah. Going down the aisles: what’s low, what needs to be replaced, all that sort of thing.

Elad Inbar: Yeah. So there were a few attempts in the past few years by different companies to do stock-keeping robots. As far as I’m aware, most of them are still in the pilot phase or were discontinued.

The reason is that, the way it works, the robot basically has a long post with cameras and just takes pictures of all the items on the shelves. And in many cases, the associates at the store, let’s say there is only one box of cereal left, right,

the associates bring that to the front of the shelf. Right. And it looks full, no problem. Exactly. So the robot can’t see behind that. That’s a problem that has still not been solved. So most of these are still in an early phase. I’m not familiar with any successful deployment of this type of robot.

But when it comes to cleaning, Walmart is an example, or any supermarket, right? I mean, cleaning their floors, either on a consistent basis, where it just goes up and down to clean, or milk was spilled in aisle four, right? So someone needs to get there.

Send the robot, let the robot clean that, instead of taking a cashier or someone else who is doing something more important. So this kind of thing, automating, like...

John Koetsier: Actually helping a customer. Exactly. Like actually helping a customer. I mean, trying to get help in one of these stores is really, really challenging.

Yeah. Most of these jobs that you’re talking about are jobs that don’t require a significant amount of engagement with humans. They require safety protocols; they need to get around humans, they need to be careful where humans are, but they don’t involve a lot of interaction with humans.

Where do you see that coming in? Is it coming in already? And if so, where? And do you see technologies like LLMs, GPT-4, that sort of thing, being important in that?

Elad Inbar: Yeah. So, the types of robots that we’ve discussed so far, they have sensors and everything. They’re safe around humans.

They have laser sensors and ultrasonic sensors and cameras and so on. So every time they detect motion or something, the first rule is stop. Let the humans pass, and then you can keep going. If the person stays there, or the obstacle stays there, then replan a path around it and try to find another way.

So this is what these robots are doing today. And again, as you said, they’re not talking to people, they’re not engaging, they’re not entertaining. Yeah, some of the delivery robots have a birthday song, so if they deliver the cake they’ll sing happy birthday.

But that’s not an interactive way to talk to customers and all of that. We have another category of robots that are designed to be customer-facing, designed to answer questions, to talk about products, services, about the location. I’m sure you’re familiar with Pepper, the humanoid robot from SoftBank Robotics.

So this robot is designed to be customer-facing, to answer questions. It can actually speak and understand 26 different languages. So think about it as a concierge at the hotel, okay, answering all these repetitive questions: when does the airport shuttle leave, what are the hours of the gym,

can you recommend restaurants around here, can you recommend attractions for the kids, right? So all these repetitive questions can be answered in a coherent way, consistently. It doesn’t matter if it’s 6:00 AM or close to midnight. You don’t have

either missing employees who don’t show up in the morning, or employees who talk back and bring attitude, or, I mean, you’re laughing, but that’s reality, people who just sit on social media on their phone in the back office for two hours and don’t even occupy their station.

These are real stories that I’m hearing from business owners. We even have these robots in assisted living facilities, where they’re helping especially people with dementia, which is a big issue right now, because assisted living facilities don’t have enough therapists to work with the residents with dementia.

Yeah. And these people wake up in the morning and they don’t know where they are, why they’re there, why they’re not home. Where is their spouse? Where are the kids? Why am I being held here? And they’re not stupid, they just forgot, right? They just don’t remember what’s going on. So we need to go to them and say, hey, Mrs.

Jones, you are here because you have a memory issue. We are here to help you. Your spouse will come later this afternoon, your kids will be here, and so on. Just try to ease them back into daily life. We don’t have enough therapists to do that. And robots are great assistants, because this type of robot, Pepper, for example, can come over and show, on the tablet it has on its chest, pictures of their wedding day.

We work with the families, we work with the facilities: a picture of their first child that was born, and so on. And that basically lowers the level of stress. Because think about yourself: you wake up in the morning, I’m being held here, the door is locked, why am I here? I want to go home.

I want to go to my family. You will not hold me here. And because we cannot talk them through it, they get frustrated, they get upset, and sometimes violent. So they lock them in the room, and we just escalate things. This is not the way to treat the elderly. And robots are not judgmental.

No. The robot will do that 50 times a day, right, without any problem. It’s not like, hey, I told you five minutes ago, can’t you remember? It’s not going to have this attitude or issue, because it’s a robot. And that’s where customer-facing robots can help different types of people, right,

from hotels all the way to assisted living facilities.

John Koetsier: I saw a lot of that kind of robot at CES in January of this year, actually. And at the time I was super skeptical. I was like, okay, you’re gonna put this... like, one that I saw sort of sits on a desk or a table or a counter or something like that. It has a screen, not really a face, but sort of glowy eyes and sort of a mouth smile.

And it’s like, come on. But I can totally see it. My mother has been diagnosed with dementia, and I can totally see it as something that would be incredibly helpful. And people like this come to need 24/7 care, ’cause they get up in the middle of the night, and they go somewhere and they don’t know where they are, and they get lost and they get upset and they get scared.

And we simply don’t have that. Even if you’re paying $10,000 a month for a super amazing care facility, there just aren’t enough people to be with them all the time. And you know what? As their kids, we’re working, we have jobs, we have our own kids; we can’t be there 24/7 either. So I think there is a huge opportunity there.

Yeah. Okay. We’ve talked about a lot of the stuff that I wanted to get into. I wanted to maybe ask one question before we start to come to a close. Where do you see the biggest opportunity over the next few years? We’ve talked about a couple of different verticals. You talked about

how it follows the path of the smartphone, the path of other technology that becomes ubiquitous in basically 10, 15 years or something like that. Where do you think are the biggest areas of opportunity?

Elad Inbar: So the biggest opportunity is actually on the service side of that: how do we enable business owners, okay,

and build their trust in this type of technology? That’s the biggest opportunity, and this is something that we are working really, really hard to solve. Our goal, my goal for the team, is basically to have RobotLab offices in a hundred metro areas by the end of next year.

Okay. Because when we think about it, let’s take the car industry as an example. Everyone knows Henry Ford’s biggest invention is the production line, right? The ability to produce the Model T at low cost and high volume and all of that. But everyone forgets another invention from Henry Ford that was probably more important, which is the car dealership model. Because let’s say you are in Los Angeles or San Diego, okay?

This guy Henry Ford is building cars, cheap cars, okay, in Michigan, right? I’m not going to buy it from Michigan, because if it breaks in Los Angeles, what am I going to do? Am I going to tow it with a horse all the way to Michigan to fix it? Right? I need someone in my backyard, down the street from me, who can take care of me if something goes wrong.

Right? And this was, in my mind, the pivot point that basically unlocked the potential. Because people want this technology, they want the benefit of the technology, but they are afraid of putting their money into this type of thing if they won’t get service. So this is something that we are really, really focused on, because I can guarantee our product portfolio in five years will look totally different than it does today, right?

With the pace of innovation, the pace of change, everything will change. Okay? Look at our phones, our laptops. I mean, we keep upgrading them every other year, right? So these kinds of things will change, and we’ll have new robots, and Optimus eventually, in 15 years, not 10, will be there.

Right. We will get these types of technologies that will enable things. But again, no one will buy Optimus, okay, in Reno, Nevada, well, there is a Tesla in Reno, Nevada, or in Tampa, Florida, right? No one will buy that if there is no service available down the street from them. So that’s the biggest opportunity:

how do we unlock that model where everyone feels comfortable enough to step into this new adventure, right, of putting a robot in my business and relying on a robot for my business? And to do that, I have to have someone to take care of me. So that’s where, in my mind at least, the biggest opportunities are.

John Koetsier: That’s, uh, super interesting. I was gonna bring up a counterfactual. I was gonna say, Hey, Tesla doesn’t have dealerships, but you know, essentially they had to build dealerships because they needed to solve the service issue and so they don’t have enough. I know, ’cause I drive a Tesla. Yes, it’s sometimes harder to get to them.

Uh, but uh, yeah, they had to essentially build that entirely themselves. And the cost of that, of course, is significant. Now you talked, you mentioned earlier, you’re doing a franchise model. That’s a new one. Most people, they think about franchises, they think McDonald’s, they think Wendy’s, they think something like that.

Right. You’re doing a franchise model that should help you grow quickly, I assume.

Elad Inbar: Yeah, exactly. So that’s, you know, when we look at our company, we grew year over year. We’ve been doing it for, uh, you know, 16 years. Year after year, we grew. Um, but we got to the point, you know, especially when you look at exponential growth, right?

That the next step is so much, you know, bigger than where we are today, and we can’t do it with our resources. We can’t open, you know, 50 service centers around the country tomorrow. Uh, and we looked at different, you know, growth strategies, and time and time again the franchise model came up as the right model, because we are talking about business owners, okay,

that are passionate about, you know, the market, about the industry, about servicing, uh, you know, customers. And, you know, they’ll do everything in their power to be successful, because again, it’s their business. We train them on everything that we know. We have all the training materials.

We have, you know, hundreds and hundreds of hours of, um, online learning on every product that we provide. We have, uh, you know, on-site training. We have all the SOPs, the standard operating procedures, and everything. But we’re just missing the people on the ground. And, you know, marry these two together,

okay, and this is basically an unstoppable, uh, machine. And I think this is what will unlock the biggest potential here, because again, business owners are waiting to have someone down the street from them.

John Koetsier: Super interesting. I wanna thank you for taking this time and, uh, for having the interesting conversation.

Elad Inbar: Yeah. Thank you for having me.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Here’s an all-wheel drive e-bike … with ChatGPT

Do you need ChatGPT integrated into your new bike? How about an all-wheel drive bike? (OK: a 2-wheel drive … but yeah, that’s all-wheel drive on a bike!)

In this episode of TechFirst, host John Koetsier chats with the CEO of Urtopia about their new AI-integrated ‘smart bike with a mind.’

(consider subscribing on YouTube)

The e-bike market is predicted to grow to about $26 billion by 2028, but Dr. Owen Zhang explains how Urtopia is taking a different approach by developing most parts in-house to create a fully integrated, software-enabled product. He says their AI features, like the ChatGPT integration, make e-bikes safer and more personalized, and can also provide assistance, including directions, making the ride more enjoyable.

We also chat about the world’s first e-bike that has drive motors on both wheels, providing more power and better traction. Want to skip to a section? Go for it:

  • 00:00 Introduction and Welcome
  • 01:06 Exploring the Fusion GT Bike
  • 01:47 The Design and Development Process
  • 03:53 The Power of Dual Motor and Dual Battery System
  • 06:51 The Future of Bikes: ChatGPT Integration?
  • 07:12 The Role of AI in Urtopia’s Bikes
  • 07:38 The Vision of Urtopia: A Bicycle with a Mind
  • 16:48 The Future of Smart Devices and E-bikes
  • 25:30 Conclusion: The Bike as a Wearable Device

Audio podcast: a bike with ChatGPT

Perhaps you prefer the audio podcast? Be my guest …

Get it on your favorite podcasting app:

 

Intro and overview for this episode: the bike with a ‘mind’

Here’s an AI-generated blog post about this episode. Appropriate for a show on ChatGPT, no? Note: the AI engine that my podcasting software uses is very exuberant and kind of “extra” … so everything is a little over the top 🙂.

In the ever-evolving world of technology, innovation knows no bounds. The latest groundbreaking development that has caught our attention is the Urtopia smart bike. This incredible creation is set to revolutionize the e-bike industry with its unique features and state-of-the-art technology.

In this blog post, we will delve into the fascinating details of the Urtopia smart bike, including its dual motor and dual battery system, as well as its integration with ChatGPT.

Let’s explore the future of cycling and how the Urtopia bike is pushing boundaries.

The Urtopia smart bike was unveiled by the visionary CEO, Dr. Owen Zhang. With a background in mechanical engineering and a passion for innovation, Dr. Zhang set out to create a bike that would not only redefine the e-bike market but also bring a fresh perspective to the concept of “smart” devices. The fusion of cutting-edge technology with sleek design resulted in the birth of the Urtopia smart bike.

One of the standout features of the Urtopia smart bike is its dual motor and dual battery system. Designed to provide an unparalleled riding experience, this all-wheel-drive bike ensures maximum traction and control, even on challenging terrains. Dr. Zhang’s vision for a bike that could tackle steep slopes and rough trails became a reality with the Fusion GT bike.

To create a product that would truly stand out in the e-bike market, Dr. Zhang collaborated with renowned designer Hartmut Esslinger, who previously worked with Apple’s Steve Jobs. Their partnership resulted in a unique design concept named Snow White, which marries aesthetics with functionality. Urtopia’s Fusion GT bike embodies the perfect blend of powerful performance and stunning design.

The integration of ChatGPT into the Urtopia smart bike takes the concept of smart devices to new heights. Riders can now enjoy a conversational experience with their bike, receiving real-time feedback, navigation assistance, and much more. The ChatGPT API enables seamless integration with popular apps like Strava and Apple Health, enhancing the overall riding experience.

Dr. Zhang envisions a future where bikes become an integral part of our daily lives, seamlessly connected with other smart devices. With ongoing developments in AI technology, Urtopia plans to introduce UrtopiaGPT, their proprietary AI model based on GPT-5. This advancement will further enhance the capabilities of the ChatGPT system, allowing riders to interact with their bikes effortlessly.

Urtopia’s smart bike represents a paradigm shift in the e-bike industry. Driven by the vision of creating a truly intelligent bike, Urtopia has combined cutting-edge technology, sleek design, and seamless integration to offer an unparalleled riding experience. With the dual motor and dual battery system, as well as the integration of ChatGPT, the Urtopia smart bike is paving the way for the future of cycling.

Stay tuned for more exciting developments from Urtopia as they continue to push the boundaries of what’s possible in the world of e-bikes.


App store for your brain: reading brain waves to fix sleep, pain, learning

App Store for the brain

Can you deliver medical treatment by changing brainwaves instead of injecting drugs?

Elon Musk has recently implanted his first Neuralink into a human patient. But can we get neurotech medical treatment without drilling holes in our skulls?

Maybe …

According to Elemind, a startup with roots in MIT, we can. They say they can read your brainwaves, manipulate them, and fix issues like sleep disorders, tremors, and pain, as well as speed up learning.

Watch here:

Subscribe to my YouTube channel here

Today we’re chatting with Meredith Perry, the CEO and a former NASA astrobiology researcher, plus Dr. David Wang, co-founder and CTO, who has a PhD in AI from MIT. This technology could potentially treat medical conditions ranging from sleep disorders and tremors to learning difficulties. We also discuss the future of medtech, envisioning an ‘app store for the brain’ where individualized treatments can be downloaded like apps, focusing on promoting the most optimized state of health for any given individual through real-time detection and diagnosis.

Check out the story on Forbes …

Get the audio podcast: neurotech startup building an app store for your brain

 

AI summary and transcript

Summary:

This podcast is a conversation between John Koetsier, the host of TechFirst, and Meredith Perry and David Wang, the CEO and CTO of a neurotech health company called Elemind. They discuss the company’s wearable device that uses neurostimulation to read and stimulate the brain in real time. They talk about the potential of using this device for medical treatment without the need for drugs and its effectiveness in improving sleep, reducing pain, and enhancing learning. They also discuss the future plans of the company, including the development of an app store for the brain and individualized treatments based on AI and machine learning. Overall, the script highlights the innovative technology and potential benefits of this neurotech device.

Transcript:

Meredith Perry: We use a wearable neurotech device to read the brain in real time and intercept it in real time with something called neurostimulation. And so that’s using sound or light or vibration or electricity to stimulate the brain. When we do that, we can actually guide the brain precisely, and that leads to a behavior change.

So like a drug, but much smarter and without the side effects.

John Koetsier: Hello and welcome to TechFirst. My name, of course, is John Koetsier. Elon Musk has implanted his first Neuralink into a human patient. But can we get neurotech medical treatment without drilling holes in our skulls?

Maybe. According to Elemind, a startup with roots in MIT, we can. And they say they can read your brainwaves, manipulate them, and fix issues like sleep disorders, tremors, and pain, as well as speed up learning. Today we’re chatting with Meredith Perry, the CEO and a former NASA astrobiology researcher, plus Dr. David Wang, co-founder and CTO, who has a PhD in AI from MIT. Welcome, both of you.

Meredith Perry: So nice to be here. Thank you so much for having us.

David Wang: Thanks for having me.

John Koetsier: Super pumped to have you. Meredith, are you sure you didn’t bring something back from outer space?

Meredith Perry: I’ve never been to outer space. but I’d love to bring something back for you if I ever get the privilege to go.

John Koetsier: Give us the big picture. What does Elemind do?

Meredith Perry: So Elemind is a neurotech health company, and we’ve developed a wearable device that’s going to allow you to optimize and improve your health without the use of drugs. So let me give you some context. As we know, healthcare today is reactive, it’s blunt, it’s largely dependent on pharmaceuticals.

And while pharmaceuticals can often be effective, we also know that they can sometimes be addictive and they can have negative side effects. So, uh, my co-founders and I have spent the last four and a half years developing a way to achieve the same effects as drugs, without chemicals and without side effects.

And this approach we call electric medicine. With electric medicine, we use a wearable neurotech device to read the brain in real time and intercept it in real time with something called neurostimulation. And so that’s using sound or light or vibration or electricity to stimulate the brain. And when we do that, we can actually guide the brain precisely, and that leads to a behavior change.

So like a drug, but much smarter and without the side effects. And there are a lot of other cool things that we can do once you can insert AI and machine learning and use sensors to be able to see whether or not something like this is working in the body.

John Koetsier: So when I’m listening to what you’re saying, the picture that comes into my mind is something like some deep sea helmet that I wear on my head. It’s got sensors, it senses my brainwaves, and then it’s got all kinds of equipment on there to influence them.

What is it actually?

Meredith Perry: It is a very simple form factor. When you think of an EEG device, what comes to mind might be some sort of helmet with tons of wires. We have made a very sleek form factor, something that looks somewhat like a headband, uh, that has a suite of sensors reading what’s going on inside your brain and inside your body.

Um, and we use all of that information to let us know when to stimulate and how to precisely guide the brain to achieve the desired outcome.

John Koetsier: Is that something like a Muse headband, maybe?

Meredith Perry: Today we’re not going to be talking specifically about our other form factor, but it is a headband, wearable, and, um, it will be comfortable and flexible and low profile and easy to move around or sleep with.

John Koetsier: Interesting hint: you can sleep with it. That seems to indicate it is soft, or at least somewhat soft. Okay, I won’t dig too much farther there. I appreciate that you’ll show it to me when you’re ready. That’s great.

David Wang: How it works has been the focus of our work for the past five years or so.

At the core of Elemind’s technology is the ability to predict when a biological oscillation is going to reach its peak or reach its trough, or any phase in between. And with this ability to predict, we then have the ability to intercede. And Meredith just listed all the different stimulation strategies we use, but fundamentally, you can go back to maybe freshman physics.

We are doing constructive and destructive interference on your brainwaves. What makes our technology interesting, though, is that the method in which we do stimulation doesn’t need to be the same electrochemical language that your brain speaks. Fortunately, your brain is wired to sensors all around your body, so if we stimulate those sensors through light, sound, or touch, we can drive your brain to impact and neuromodulate other brainwaves.
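David Wang’s “freshman physics” analogy can be sketched numerically. This is an illustrative toy, not Elemind’s actual method: a stimulus locked in phase with an oscillation adds constructively and amplifies it, while a stimulus shifted half a cycle adds destructively and suppresses it. The sample rate and frequencies are assumed values for illustration.

```python
import numpy as np

# Toy model of constructive vs destructive interference on a "brainwave"
# (illustrative only; Elemind's real stimulation pipeline is not public).

fs = 250                                  # assumed sample rate, Hz
t = np.arange(0, 2, 1 / fs)               # 2 seconds of signal
f = 10                                    # a 10 Hz alpha-band oscillation

brainwave = np.sin(2 * np.pi * f * t)
in_phase = 0.5 * np.sin(2 * np.pi * f * t)            # phase-locked stimulus
anti_phase = 0.5 * np.sin(2 * np.pi * f * t + np.pi)  # half-cycle offset

amplified = brainwave + in_phase          # constructive interference
suppressed = brainwave + anti_phase       # destructive interference

# Peak-to-peak amplitude relative to the unstimulated wave
print(round(np.ptp(amplified) / np.ptp(brainwave), 2))   # 1.5
print(round(np.ptp(suppressed) / np.ptp(brainwave), 2))  # 0.5
```

The same half-strength stimulus either grows the oscillation by 50% or shrinks it by 50%, depending only on its timing, which is why predicting the phase of the wave matters so much.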

John Koetsier: First of all, you’re a funny guy. But secondly, it’s really interesting that to impact the brain, you don’t need to make electrical changes upon the brain per se. You simply need to present stimuli to the brain via our normal, natural senses, and the brain then somehow reacts in how it interprets them, and that makes a change. The desired change, precisely.

Meredith Perry: So John, you can think about it kind of like noise cancellation for the brain.

So, you know, to give you more context, the brain is an electrochemical organ, and we can measure brainwave activity on the outside of the brain using something called an EEG. So David talked about biological oscillations. A brainwave is a biological oscillation, and different brain states are characterized by different frequencies of brainwaves. And what we’ve learned is that by stimulating at certain times relative to the brainwaves, we can speed up certain frequencies. We can slow them down. We can amplify or suppress them. This is what neuromodulation is, and we’ve found that by changing the brainwaves themselves, we can actually change the state that someone’s in.
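Stimulating “at certain times relative to the brainwaves” implies tracking the wave’s instantaneous phase in real time. A textbook way to do that is the analytic signal (a Hilbert transform); the sketch below builds it with an FFT using only NumPy. This is a generic signal-processing illustration under assumed parameters, not a claim about Elemind’s proprietary algorithm.

```python
import numpy as np

# Track the instantaneous phase of a synthetic "alpha" brainwave so a
# stimulus could, in principle, be timed to land on a peak or trough.
# Illustrative sketch only; not Elemind's actual method.

fs = 250                              # assumed EEG sample rate, Hz
t = np.arange(0, 4, 1 / fs)           # 4 seconds of signal
alpha = np.sin(2 * np.pi * 10 * t)    # clean 10 Hz oscillation

def analytic_signal(x):
    """Analytic signal via FFT (the standard Hilbert-transform recipe)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0                 # double the positive frequencies
    h[n // 2] = 1.0                   # Nyquist bin (n is even here)
    return np.fft.ifft(spectrum * h)

phase = np.angle(analytic_signal(alpha))               # instantaneous phase
inst_freq = np.diff(np.unwrap(phase)) * fs / (2 * np.pi)
print(round(float(np.median(inst_freq)), 1))           # recovers 10.0 Hz

# With this convention a sine peaks where its phase is 0, so samples with
# phase near 0 are where a peak-locked stimulus would fire.
peak_samples = np.where(np.abs(phase) < 0.1)[0]
```

Knowing the phase at every sample is what turns “stimulate the brain” into “stimulate at exactly the right moment,” whether the goal is to amplify a rhythm or suppress it.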

John Koetsier: Again, fascinating. Really, really cool. You’ve really got me wondering about the form factor here.

I know you’re not talking about that. I’m wondering if it’s like a Muse, or maybe the Crown from Neurosity. We’ll see when that comes out. Uh, talk a little bit about how effective it is. You’ve done, I believe, three studies on it, three published studies. You’ve probably done a lot more work on it.

How effective is this? If my pain is a 10, can you reduce it to a three? If I’m having trouble learning, how much can you speed that up? What kind of effectiveness data do you have?

David Wang: Yeah, that’s a great question. We’ve done studies in a variety of different areas, which is really cool about our technology.

It is quite broadly applicable. Within the area of sleep, we’ve studied over a hundred subjects. We’ve recorded over two and a half years’ worth of sleeping data, and we’ve shown, in the particular case of sleep, that we can help people fall asleep significantly faster: for about 73% of our subjects, about 30% faster, which is huge.

We’ve also been working with all sorts of fantastic academic collaborators. From the University of Washington, we have really amazing work that shows that we can increase someone’s tolerance to pain, in this case a temperature threshold, quite significantly, if we stimulate at the correct time. Work from Leuven and McGill University has shown that we can improve learning and response times by stimulating at the correct time. We have a fantastic paper in Nature from about two years ago that shows a different form of stimulation,

in this case electrodes placed on top of the scalp, that gives us the ability to reduce tremors as well, for people with physiological shaking, by about 50%, with less than a minute of stimulation.

John Koetsier: That’s huge, because for people who have that, it can be completely debilitating; they’re not able to do anything.

The pain thing is really interesting to me personally. I’ve been told by three doctors that I have a high pain tolerance, and I don’t know, I just have this switch I can flip in my brain. I still feel the pain, but I can stop caring about it. I don’t know if your device works that way or how it works.

I mean, in terms of sleep, you talked about speeding up by 30% the time that it took for people to get to sleep. Typically, if there’s something like tremors or other things, and it may be different for different conditions, what’s the treatment time period like?

Meredith Perry: With sleep, we see people fall asleep up to 76% faster; the average for that group is 30%. It really can make an enormous difference. The timing of the effect depends on the application. With tremor, you can actually see a video of somebody using Elemind to suppress their tremor, and you can see the impact instantaneously.

After 30 seconds of stimulation, we see the effect actually grows. But it’s instantaneous. With sleep, you’re not falling asleep instantly; we are accelerating the time that it takes for you to fall asleep. Um, and so, you know, if you normally take 30 minutes to fall asleep, we might help you fall asleep in 15 minutes.

With the pain, in that example, the study that we did, we were amplifying the delta waves that are associated with deep sleep, and we could amplify the sedative effect of an anesthetic. And so we saw that instantaneously too. We were able to effectively put higher temperature against someone’s skin,

and when we combined Elemind with the anesthesia, it protected people from that pain, which indicates that we would be able to actually give people less drug if they were undergoing anesthesia.

John Koetsier: Love it, love it, love it. The scientist is telling me the 30% average; the CEO, who’s already planning the brochure, says up to 70%. Makes total sense. And of course …

Meredith Perry: We can’t leave money on the table, John. The facts are facts.

John Koetsier: Absolutely. Now, is this gonna be a medical device? Is this gonna be something that you have to be prescribed, and maybe used in a medical scenario under supervision, or will it be off the shelf? Buy it at a drug store, use it at home?

Meredith Perry: The first product we go to market with is a consumer wellness product. It’s not a medical device, not FDA approved, and it’s focused on a wellness application. Moving forward, we will have form factors and models that will be medical devices and will treat medical issues.

John Koetsier: Now, you’ve been in stealth since 2019. That’s a decent length of time. It’s a hardware startup, so that’s not easy and takes time. And it’s also cutting edge; it’s new technology that you’re inventing. And by the way, there have been immense advances in AI in that whole period of time.

And you use AI to understand the brain and then implement your treatments. That’s interesting. It’s also been a crazy period to be in stealth: you had COVID and lockdown and shutdown and then the return and all that stuff. What are the next few steps? What do the next couple of years look like for you?

 

Meredith Perry: So, in a number of months we’re going to be announcing our first consumer product. It’s going to focus on one application that we’ve conducted clinical trials on and that we’ve been successful with. A large focus of the company is going to move in that one direction. But after that, our grand vision for this company is to ultimately be an app store for the brain, where you can effectively download a treatment, or download a brain state, like you download an app from an app store.

Our technology has capabilities that go far beyond just treatment. So when you’re wearing something for a significant period of time that has biosensors on it, we can detect or diagnose in real time, with the assistance of AI and machine learning,

whether you have an issue. You know, perhaps you’ve had a stroke, or perhaps you’re having a seizure, or we see that you are anxious or potentially have, um, you know, signs of mild cognitive impairment. That’s something that we’re going to be able to tell people. And with the use of AI, we’ve also developed a tool that’s going to allow us to learn over time what stimulation protocols will optimize the person’s state the fastest.

And so the vision here is to be able to develop individualized treatments for different people for their different conditions, to allow them to be the most optimized versions of themselves at any given time, for whatever those disorders are.

John Koetsier: The mind kind of, uh, explodes here, because of course the whole quantified self comes into play a little bit.

When we saw fitness trackers, the Fitbit was one of the first big ones out there. I’ve interviewed somebody who’s building the Fitbit for your blood, where you can test like a drop of blood. And it’s not, it was not the one that was a scam; it was more recent than that. And you can see, you know, just what’s happening, what changes, how your diet is impacting everything. You almost think of this as like a Fitbit for the brain, or for the mind.

And you almost wonder, will the wearable become something that has fashion aspects, so that you could wear it full-time if you wanted to? Perhaps if you’re a high-risk individual, you’ve had strokes in the past or something like that, and you want instant awareness of very early signs of one so you can do something about it.

Uh, wow. Maybe you can build it into something like a headband or glasses or something. Very interesting.

Meredith Perry: Absolutely. And John, I think a key distinction from some of these wearables that you mentioned is that almost all of the wearables that exist today just read. They just track. They tell you things that are happening, but they don’t change anything.

So you can think about us not as just a tracker: we read and, kind of, write. So when we see that there is a problem, we can also fix that problem, or at least try to, as opposed to telling you, hey, you should go for a run or you should do this. We try to meet people where they are and be as simple as a pill.

And a pill doesn’t judge you. A pill doesn’t tell you anything. It just does the thing passively for you, to get you into the state that you wanna be in.

John Koetsier: I’m returning the Apple Watch. It can’t do the workout for me. How lame is that? Right? Wow. So lazy. Interesting stuff. Very, very cool. I look forward to seeing the device, what it looks like.

I almost wonder if you’d partner with other people. There are so many cool startups in hardware and brain interfaces. I mentioned Muse; uh, Neurosity is another one. But this app store for the brain concept, wow. And then ideally with some kind of private application of AI to understand my brain. And I’m sure there are many similarities to everybody’s,

and then there are some similarities among different types of people whose brains work similarly. But being able to handle that, manage that, and treat that at some level is very cool.

David, I don’t think you’ve got the off switch yet, do you? I mean, sometimes you just wanna turn the switch off: wake me up in 10 hours and I’ll deal with life at that point.

David Wang: I like to think of the technologies we create as guiding or directing the brain. It gives an individual more control over themselves than they would have otherwise.

Sometimes we think, it’d be great if I just had the willpower to work a little harder, do something more. What if there was a device that gives you that willpower? That’s what we’re trying to enable.

John Koetsier: There’s so much potential for a device that not only reads but also writes. We give drugs to people who have anxiety,

right, and it calms ’em down, and it has side effects, and sometimes their emotions just flatline. I know somebody very, very well who has had some schizophrenic episodes, and so he has to have treatment for that in the form of pharmaceuticals. And that has almost destroyed his personality and his ability to be a functioning spouse and father.

And obviously this is super futuristic, but if you can come up with something that does that non-invasively and very intelligently, selectively doing only the things that are required for the condition, and not just totally obliterating somebody’s personality or willpower or motivation,

um, that’s huge. And I’m not even getting into addiction issues, where people are addicted to certain feelings or drugs or different things. Being able to address those, the value is incalculable.

Meredith Perry: We agree, and we certainly hope so. The way that chemicals work means that they have to go through your entire body to be able to hit their target, which is why there are off-target effects.

And with Elemind, we’re interfacing with the brain and the nervous system directly; we don’t have to go through the bloodstream. So we don’t have any of those off-target effects. And that makes improving your health something easy, low effort, and without compromise.

With a drug, we are deciding: okay, I’m gonna take allergy medication, but I know that I’m gonna get groggy. You shouldn’t have to make that trade-off.

David Wang: If I could add: we’re all different. When you take a drug, it is like a one-pill-fits-all type of solution, but that’s not actually how humans work.

We are all unique, and so there’s an aspect of tailoring and customizing the neuromodulation, the brain modifications we’re trying to do in order to help someone in some way, that really tailors the effect and minimizes the side effects.

John Koetsier: Thank you both for your time.

Meredith Perry: Thank you so much for having us.

David Wang: Thanks, John.


Hacking reality: Apple Vision Pro and security

VR 2024 inflection point

Can someone hack your reality if you’re wearing an Apple Vision Pro?

Apple Vision Pro is launching and the reviews are amazing, even from VR unbelievers. But we’ve barely begun conversations around its privacy and security implications. Think about it: it has literally dozens of sensors, cameras, mics. It maps your home, bedroom, kitchen, living room, and its video is so good, it looks like the real world.

This is not something you want bad guys controlling.

In this episode of TechFirst, I discuss Apple Vision Pro privacy and security concerns with Synopsys principal security consultant Jamie Boote.

Subscribe to my YouTube channel here

Listen: hacking reality with Apple Vision Pro

Subscribe on your platform of choice:

 

Hacking Apple Vision Pro

(AI-generated overview)

The launch of the Apple Vision Pro has taken the digital world by storm. With its advanced technology and impressive features, it promises to revolutionize the way we experience virtual reality (VR) and augmented reality (AR). However, amidst the excitement, it is crucial to consider the privacy and security implications that come with such a powerful device.

The Apple Vision Pro offers a whole new level of immersion with its array of sensors, cameras, and microphones. It can map your surroundings, making the virtual world seem like a part of your reality. But what happens if someone manages to hack into this technology?

“If somebody could hack that, what could they inject into that experience? The implications range from inducing fear or influencing brand preferences to potentially mapping your entire living space,” says Jamie Boote, principal security consultant at Synopsys.

Every new device brings with it the potential for vulnerabilities. Jamie Boote sheds light on the challenges of securing a device like the Apple Vision Pro: “Anytime software can do something new, there’s a chance it can do something wrong. Adding complex sensors, interfaces, and computing power can create unforeseen security risks.”

The big question: can we trust Apple more than other tech giants when it comes to privacy and security? Of course, Apple is primarily in the business of selling hardware and building trusted computing platforms. While no company is immune to vulnerabilities, Apple’s focus on data protection does set it apart from data-harvesting companies.

Considering the history of software vulnerabilities, it is important to acknowledge that new technology will have its share of security issues. Old vulnerabilities can resurface in new ways. The paradigm shift in hardware and software calls for continuous vigilance and proactive security measures. As virtual reality devices like the Apple Vision Pro become more prevalent, the value and innovation in products increasingly reside in software and AI.

However, this also expands the attack vectors for potential hackers. The hardware and software’s continuous growth makes it an attractive target for attackers. The more we add layers of intelligence and data to what we see, the more avenues there are for potential security breaches.

The Apple Vision Pro is an impressive device with remarkable capabilities. While it offers an immersive and unparalleled experience, it is essential to be aware of the potential privacy and security risks that come with it. As with any advanced technology, ongoing efforts in securing these devices will be vital. With Apple’s track record and focus on data protection, the Apple Vision Pro is a step towards a more secure future in the realm of virtual reality and augmented reality.

“The more things that people want to do with software, there will be more ways to abuse that,” says Boote.

By addressing the privacy and security implications, we can ensure that the benefits of the Apple Vision Pro are enjoyed without compromising user safety. As the world continues to embrace VR and AR technology, it becomes crucial for industry leaders to prioritize robust security measures to safeguard user experiences.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Smart buildings 2024: an essential part of a smart grid

smart buildings 2024

How smart will our buildings become in 2024?

It’s early January and I’m in Vegas at CES where everything is smart, everything is AI, everything is cutting edge … apparently. But how smart will our buildings and homes get in 2024? Where are we headed, and where are the gaps?

Our guest for this episode: Dan Hollenkamp, CEO of Toggled, which makes sensors, software, and appliances for smart buildings.

Subscribe to my YouTube channel here

Subscribe to the audio podcast

Find your favorite platform:

 

Smart buildings 2024: AI summary of podcast

Here’s an AI summary of the podcast:

  • How smart will our buildings become in 2024?
  • The goal is to have a building that responds to our needs
  • Confusion between smart devices and remote controllable devices
  • Buildings that are assisting us and not taking away from our tasks
  • Data sharing and decision-making based on data
  • Flexibility in working times and locations
  • Buildings becoming active participants on the grid
  • Using AI to understand building usage and optimize environments

Summary blog: smart buildings in 2024

Note: this is AI-generated

The podcast episode titled ‘Toggled’ explores the intriguing world of smart buildings and the significant role of AI in shaping their future.

The episode kicked off with a question: how smart will our buildings become in 2024? Predicting the progress of technology is no small feat, but the guest provided fascinating insights based on current technological advancements.

The dream of creating smart buildings is more than a distant possibility; it’s an intention, a goal, and a process. The ultimate aim is to establish buildings that respond to our needs without adding any unnecessary complexity. This means building structures that are intuitive, adaptable, and capable of providing the necessary support for a plethora of activities.

As the discussion carried on, a crucial differentiation was made between smart devices and remote-controllable devices. This segment emphasized the importance of going beyond developing technology that enables control to developing technology that predicts and anticipates our needs. This would pave the way for genuinely enhancing living and working spaces without introducing added work or confusion.

The benefits of smart buildings extend not only to structural design but also the function and feel of the space. The episode painted an image of the future where buildings are equipped to support and even enhance our daily routines. With smart buildings, you won’t have to fuss around adjusting the ambiance whenever you walk into a room; instead, the room would already be tailored to your preferences, ensuring a comfortable and productive environment.

The conversation in the episode highlighted the undeniable importance of data. In an era where data is king, building systems need to not just collect but also share and analyze this data to make informed choices. By leveraging the power of data, intelligent and efficient building management becomes a reality.

A significant chunk of the discussion revolved around the changing trend of workspaces due to remote working. The shift towards remote work has ushered in an era of flexibility in working times and locations, which in turn, affects our buildings’ energy usage patterns. Smart buildings must transform and adapt to these changing patterns to ensure an optimum level of comfort and productivity.

As the future unfolds, buildings are expected to evolve from mere energy consumers to active participants on the grid. With the increasing shift towards renewable energy and microgeneration, buildings will have a crucial role in balancing energy demand and supply, actively managing energy consumption, and aiding the integration of various energy sources.

Last but certainly not least, the podcast highlighted the immense potential of AI in smart buildings. Leveraging AI, buildings will be able to analyze user behaviors, adapt to personal preferences, and make automated adjustments to the environment to create optimal conditions. In essence, AI will help buildings become smarter, more adaptable, and energy-efficient.

To sum up, the podcast offered a captivating glimpse of the future of smart buildings and AI’s transformative potential. Through the integration of cutting-edge technology, AI, and data, our everyday living and working spaces have the potential to become intelligent, adaptive, and energy-efficient.


VR in 2024: inflection point with Apple Vision Pro?

VR 2024 inflection point

Where will VR go in 2024? VR is in an interesting space right now: doldrums in the consumer market, but lots of cool new tech in the Meta Quest Pro and Quest 3, and some App Store download data indicating the Quest 3 is doing unexpectedly well.

And of course we have the Apple Vision Pro coming out SOON … which I hope to buy in Q1. But it will likely be low-volume, as will other very high-end headsets intended for professional and business use, like the Varjo XR-4 (which I recently demoed in Las Vegas at CES).

So what can we expect in 2024 for VR? In this TechFirst, we chat with Rolf Illenberger, the founder and managing director of VRdirect.

Subscribe to my YouTube channel here

Subscribe to the audio podcast

 

AI summary of this episode on VR in 2024

John Koetsier and Rolf Illenberger discuss:

 

  • the future of VR in 2024, focusing on its potential in the enterprise space
  • the current state of VR technology and its use cases in areas such as training, safety, internal communications, and virtual tours
  • challenges and opportunities for Apple’s upcoming Vision Pro headset
  • the importance of creating immersive experiences and intuitive user interfaces
  • Rolf’s prediction that 2024 will be a pivotal year for VR with increased adoption and integration into business operations

VR in 2024: a GPT-4 blog post based on this episode

As we enter 2024, the world of virtual reality (VR) is at an interesting crossroad. In this episode, we dive into the current state of VR and discuss the future and potential inflection point for this technology in 2024.

The Consumer Landscape
In the consumer space, VR has seen ups and downs. However, there are some exciting developments on the horizon. The Meta Quest Pro and the unexpected strength of Quest 3 App Store downloads indicate that VR is still capturing the interest of consumers. Additionally, the much-anticipated Apple Vision Pro headset, set to release in Q1, offers the potential to redefine the VR landscape with its innovative features.

The Enterprise Adoption
While the consumer market might still need a few more iterations to gain widespread success, the enterprise adoption of VR is on the rise. VR is proving to be a valuable tool for various industries, particularly in training and skill advancement. Surgeon training, for example, is being revolutionized by immersive VR experiences, allowing doctors to practice procedures multiple times before performing them in real-life situations. Other applications include safety training, internal communications, virtual showrooms, and simulations of real-life processes.

The Impact of Apple Vision Pro
One of the most highly anticipated VR devices is the Apple Vision Pro. While not initially targeted at the mass market, the Apple Vision Pro aims to make a significant impact on both the enterprise and consumer fronts. With intuitive user interfaces, facial recognition, and the ability to integrate with existing Apple content, the device offers a unique VR experience. However, its success may heavily depend on the development of a robust content ecosystem.

Challenges and Considerations
The successful adoption of VR in enterprises relies on overcoming several challenges. User experience is key, and companies need to carefully introduce VR to their workforce to avoid overwhelming initial experiences. Additionally, the ever-evolving hardware landscape poses a challenge for IT departments, as they navigate the numerous options available and ensure compatibility and security.

The Future of VR
Looking ahead, VR holds tremendous potential for immersive entertainment and unique storytelling experiences. The combination of AI and VR will play a crucial role in the development of captivating content and enhancing user interactions. As the technology continues to evolve, we can expect VR to become an integral part of many industries, creating new opportunities and transforming how we perceive and engage with the digital world.

Conclusion
As we move into 2024, VR stands at a crucial inflection point. While the consumer market continues to evolve, the enterprise adoption of VR is already making significant strides. The upcoming release of devices like the Meta Quest Pro and Apple Vision Pro will further shape the VR landscape. As organizations recognize the value of VR in enhancing training, communication, and productivity, we can expect VR to become an essential component of their digital strategies. With the right approach, VR has the potential to revolutionize industries and provide immersive experiences that were once only imagined in science fiction.


Reinventing speakers: replacing 100 year old tech with MEMS chips

reinventing speakers with MEMS chips

Can a new innovation upend a $50 billion industry that uses 100-year-old tech … and that is probably in your ears multiple times a day?

Pretty much all of us use earbuds, often wireless ones, and they all rely on old tech … literally hundred-year-old tech from the 1920s with roots in the 1800s. There’s a new option based on silicon: a microchip means of creating sound. And it could be better while also being cheaper. In this episode of TechFirst, we chat with Mike Housholder, a VP at xMEMS Labs.

Here’s our chat:

(Subscribe to my YouTube channel here)

 

Subscribe to the TechFirst podcast

 

MEMS and speakers: replacing 100-year-old tech

(Note: this is an AI-generated summary of this podcast. As such, it will take guests’ statements as undisputed fact. It should not be taken as having been written by me, John Koetsier.)

The world of audio has come a long way since the invention of the coil and magnet speaker in the 1800s. This century-old technology has served as the foundation for sound reproduction in nearly all our devices, from headphones to speakers. However, it’s time for a change. Enter xMEMS, a company at the forefront of innovation that is set to disrupt the $50 billion speaker industry with their revolutionary solid-state semiconductor alternative. In this blog post, we’ll explore the technology behind xMEMS, its benefits, its current market presence, and the company’s ambitious plans for the future.

The Problem with Legacy Speakers
For decades, we’ve relied on coil and magnet speakers for our audio needs. While they have undoubtedly improved over time, they still suffer from limitations. These mechanical devices are prone to wear and tear, lack uniformity and consistency across speakers, and can be easily damaged by external factors such as water or drops. Moreover, they struggle to provide the level of audio detail, separation, and precision that consumers desire in today’s fast-paced digital world.

Introducing xMEMS
xMEMS is here to change the game. They have developed a solid-state semiconductor alternative to traditional coil and magnet speakers. By leveraging the advantages of semiconductor technology, xMEMS can produce high-quality sound with precision and reliability. Unlike their mechanical counterparts, xMEMS speakers are more robust, resistant to damage, and have a longer lifespan. They also eliminate the need for magnets, reducing weight and electromagnetic interference in wireless devices.

The Benefits for Manufacturers and Consumers
The shift to xMEMS speakers offers several advantages for both manufacturers and consumers. From a manufacturing standpoint, solid-state components are easier to test, scale, and integrate into products. They offer greater uniformity and consistency in loudness and phase alignment, ensuring a well-balanced audio experience. Additionally, these speakers are more reliable, surviving drops, water, and dust better than their mechanical counterparts.

For consumers, xMEMS speakers deliver superior audio quality. With a faster mechanical response, they can reproduce complex and dynamic sounds with exceptional detail and separation. Imagine hearing every instrument in a song or distinguishing between background and foreground vocals with utmost clarity. Furthermore, xMEMS speakers boast a flat phase response, which means that audio is faithfully reproduced without the typical phase disturbances found in conventional speakers. This results in a more accurate sound representation, akin to that experienced at live music performances.
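To make the idea of a phase disturbance a bit more concrete, here is a small illustrative sketch (not from the episode, and the specific frequencies are just examples): flipping the phase of one tone inside the 500 Hz to 2 kHz band leaves each tone’s loudness unchanged, but noticeably alters the combined waveform that actually reaches your ear. This is the kind of alteration a flat-phase speaker avoids.

```python
import math

FS = 8000   # sample rate in Hz (illustrative)
N = FS      # one second of samples

def tone(freq_hz, phase=0.0):
    """A one-second sine tone at the given frequency and starting phase."""
    return [math.sin(2 * math.pi * freq_hz * n / FS + phase) for n in range(N)]

low = tone(220)    # a tone below the 500 Hz - 2 kHz band
mid = tone(1000)   # a tone inside the band where conventional drivers shift phase

# A faithful mix vs. the same mix with a 180-degree phase flip on the mid tone
faithful = [a + b for a, b in zip(low, mid)]
shifted = [a + b for a, b in zip(low, tone(1000, math.pi))]

def energy(signal):
    """Total signal energy, proportional to perceived loudness per tone."""
    return sum(s * s for s in signal)

# The phase flip leaves total energy essentially unchanged ...
energy_gap = abs(energy(faithful) - energy(shifted))

# ... but the summed waveform differs by up to twice the flipped tone's amplitude
peak_diff = max(abs(a - b) for a, b in zip(faithful, shifted))
```

Run as-is, `energy_gap` is effectively zero while `peak_diff` is about 2.0: a simple loudness or frequency-response measurement would not distinguish the two signals, yet the waveforms differ, which is why a phase shift can matter even in a speaker that “measures flat.”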

Current Market Presence
xMEMS has already made waves in the audio industry, with their speakers featured in premium-grade in-ear monitors and hearing aids. However, their most significant market breakthrough is on the horizon. In November, the company will launch its first true wireless stereo (TWS) earbuds with xMEMS speakers in collaboration with Creative Technology. This consumer-friendly product will offer the benefits of xMEMS technology at an affordable price point, making it accessible to a wider audience.

The Path to Market Dominance
xMEMS has ambitious plans to dominate the speaker market. While their initial focus has been on personal audio devices, such as earbuds and headphones, their ultimate goal is to reinvent loudspeakers in every form. From smartphones and smart speakers to cars and home theater systems, xMEMS aims to revolutionize sound reproduction across the board. While the physics challenges are significant, the company is already working on pioneering a transduction mechanism called ultrasonic amplitude modulation. By moving outside the conventional auditory frequency spectrum, xMEMS hopes to bring full bandwidth audio to even the largest speaker systems.

Patents and Future Competition
xMEMS understands the importance of protecting their technology and has secured over 110 patents to safeguard their innovations. While they expect competition to arise in this growing market, xMEMS’ strong intellectual property portfolio provides them with the necessary freedom of operation. Although they may not be the sole source for this technology, xMEMS is currently leading the market and anticipates significant growth as they expand into various segments.

Conclusion
With their solid-state semiconductor alternative, xMEMS is shaking up the speaker industry. By delivering high-quality sound with precision, reliability, and innovation, xMEMS speakers offer significant advantages over conventional coil and magnet speakers. The company’s current market presence, including collaborations with renowned brands, sets the stage for a full-scale disruption in the industry. As they continue to push boundaries with their ultrasonic modulation demodulation scheme, xMEMS is well on its way to revolutionizing sound reproduction in everyday devices. Get ready to experience audio like never before!

Full transcript: reinventing the $50 billion speaker market

Note: this is an AI-generated transcript.

John Koetsier: Can a new innovation upend a $50 billion industry that uses hundred-year-old tech, still uses hundred-year-old tech, and is probably in your ears multiple times a day? Hello and welcome to TechFirst. My name is John Koetsier. Pretty much all of us use earbuds, maybe wireless, maybe wired. They all rely on pretty old tech. It’s hundred-year-old tech. Now there’s a new option that’s based on, of course, silicon. It’s a microchip. It’s a means of creating sound that is very different. It could be better while also being cheaper. Here to chat, and give a big product announcement at some point, is Mike Housholder, a VP at xMEMS. Welcome, Mike.

Mike Housholder: Thanks for having me, John.

John Koetsier: Hey, super pumped to have you. Let’s start at the beginning. One of the things that I saw in the pitch when you said, hey, let’s chat about this, was that headphones rely on hundred-year-old tech. What is this hundred-year-old tech?

Mike Housholder: Yeah, it’s the speaker that we’ve all been using pretty much our entire lives. The coil and magnet speaker has been our only means of experiencing sound our entire lives. And it was invented way back in the 1800s, perfected in the 1920s, and just slowly improved ever since then. But it’s a mechanical structure. It’s got, first of all, a coil and magnet for actuation. You drive a current through that coil, it moves the magnet, which then pushes, through various layers of suspension, a paper or plastic diaphragm that moves air and generates sound. Fundamentally unchanged for a hundred years.

John Koetsier: Wow.

Mike Housholder: So what we’re replacing it with is a solid state semiconductor alternative. So now, instead of a complex mechanical system, we can produce very sophisticated, higher quality sound with a chip. So this is a speaker on a chip that can produce very sophisticated audio. So you’re getting really all of the benefits of semiconductor technology paired with sound generation.

John Koetsier: So let’s unpack that a little bit and talk about what those benefits are. And it’s really interesting because, totally different industry, but about a year ago I interviewed somebody making a solid state, silicon-based fuse box for your home. Nobody thought it was possible. Nobody thought it could be done. And they did it. And you’re talking about something in a totally different space, but somewhat related. It’s solid state, using silicon chips to recreate this. Why is that a good idea? Why is that important to do?

Mike Housholder: If you look at the consumer electronics industry, there’s just a natural gravitational pull to solid state components. They’re scalable, more reliable, generally faster, better performing. And really, if we look at an average consumer electronics product today, be it a phone, a TV, or whatnot, there are very few non-solid-state components remaining in those devices. The one remaining one, for the most part, is the speaker.

John Koetsier: Coming after the last survivor.

Mike Housholder: Exactly. There are so many examples in the industry of a traditional mechanical device being replaced by a solid state semiconductor variant. I can walk through a couple of examples. If we go to the opposite end of the audio spectrum, the microphone: most people don’t know that the microphone is mostly semiconductor MEMS technology today. The mechanical microphones still exist, but the majority of the unit volumes are MEMS semiconductors. We all have phones and PCs today, and the spinning mechanical hard drive for the most part has been replaced by a solid state drive. Why? It was more reliable. It’s faster. You get your data faster than with a traditional spinning mechanical variant. You brought up the fuse box. You look in automotive now, everyone’s pushing towards solid state batteries. Once a solid state variant exists, it will, over time, take the majority of unit volumes for that function.

John Koetsier: Okay, so that’s a little bit about how it’s constructed, how it’s built, and this overarching flow of the world of technology towards solid state. Why is this a good thing for speakers? Why is this a good thing for headphones? Here are some headphones that you sent me that I’ve been testing and trying out. Why is it a good thing to apply here?

Mike Housholder: Yeah, good question. So I’ll answer that question from two angles: one is benefits to the manufacturer of the product, and then benefits to the consumer. So to the manufacturer: they want a solid state component because, again, of all the quality and reliability advantages versus the mechanical variant. Easier to test, easier to scale. They’re more uniform than a traditional coil-based speaker. And what I mean by uniformity is that each speaker’s loudness level is equivalent, and their phase alignment is more consistent. That’s what you want out of audio. When you’re trying to match a left and a right, you don’t want one louder or quieter than the other side. You want them perfectly phase-aligned, so you don’t muddy any of the audio response. So semiconductor is just going to be more uniform and consistent. It’s going to be more reliable. It’s going to survive a lot longer. It’s going to survive drops and water and moisture and dust better than a mechanical variant.

John Koetsier: It’s like one of my original AirPods. I dropped it, and the audio was never the same after that.

Mike Housholder: Yep. So, more robust to drops. One of the things we get asked by a lot of our customers is, could it survive a washer-dryer cycle? Hey, who hasn’t left their AirPods in their jeans and run them through a washing cycle? So the answer is yes, you can run our speakers through a washing machine and dryer and the speaker will work. We can’t guarantee the rest of the electronics is going to work, but we can say confidently the speaker is going to survive. And we also remove the magnets. So we’re taking out weight. We’re taking out a source of electromagnetic interference with the wireless antennas in a wireless earbud. So those are really all the benefits to a manufacturer. They like solid state. But the consumer may care a little bit less about whether it’s easier to test or this or that. So really, what the consumer cares about is audio quality.

Are they going to get better music quality by putting a solid state speaker in their ears versus a conventional speaker? There are three unique aspects to our sound signature that a conventional coil-based speaker doesn’t do and can’t do. So the first characteristic is really in the mechanical response of the speaker. The speaker actually moves up and down, and its mechanical response is about 150 times faster than a legacy coil variant. So what does that mean to the consumer? A fast actuation means that you’re going to pump air, then recover and be ready for that next audio stimulus a lot sooner. So as the music gets really dynamic, really complex, lots of instrumentals, multiple voices coming in, you really care about detail and separation. You want to hear one instrument clearly delineated from the next instrument. You want to hear that background vocal as well as the front vocal, and you want to believe that they are all separate and unique. With slower legacy speakers, some of that detail starts to muddy together and you lose that precision, that sense of separation. A faster speaker will present that detail in all its glory. So the speed gives you that detail and separation. That’s one unique aspect.

John Koetsier: It’s almost like having a higher resolution display in other words. 

Mike Housholder: Exactly. So it’s the equivalent of HD video. There is HD audio, high-res audio, out there. Can the speaker truly resolve that content? Are you taking HD video and running it over a CRT monitor, or are you pushing that HD video over a high-pixel-density HD screen? Same analogy on the audio side: the content may be HD, but can the speaker truly resolve that content and present all of that additional detail? That’s what you can do with this speaker.

That’s what you can do with this speaker. So the speed is number one for the detail and separation. The second aspect of the sound signature is really flat phase response. Most it’s really not known and really not well known, but conventional speakers have a phase disturbance, a phase shift in a 500 to 2 kilohertz region, which is a really sensitive region of the human ear.

But I think the human brain has adapted to it because it’s the only way we’ve ever experienced sound. That we, we just don’t hear that phase disturbance. 

John Koetsier: What exactly is a phase disturbance in sound? 

Mike Housholder: The shift in phase will basically alter the original recording of the music. But again, I think the human brain has been tuned to just ignore it. As you saw from that Brian Lucy video, he’s a mixing and mastering artist in the music industry, and his argument is that these conventional speakers have a 180-degree phase shift at the resonant frequency, typically in the 500 hertz to 2 kilohertz range. Our phase is flat out to 10k. There is no phase disturbance, no shift in phase. And for a professional ear like Brian’s, that was immediately apparent. What he is mixing and mastering is how he wants the consumer to hear the audio, and he typically doesn’t find a consumer audio product that renders it as cleanly as he wanted you to hear it. With the pure phase response of our speaker, he finally heard a speaker that presented the audio in the way he wanted it presented to the consumer.

John Koetsier: Interesting. So that would be more aligned with how you would hear live music, for instance. 

Mike Housholder: Correct. And while we’re on the phase discussion, this gets back to the uniformity and consistency of the semiconductor process. Chip to chip, our phase consistency is there. This really steps into spatial audio: as you’re moving audio from left to right, up and down, you want perfectly phase-matched speakers to render that spatial content accurately and not muddy the spatial response. So with perfectly matched, perfectly uniform left and right speakers, you’re getting more crisp and clear spatial audio. So the phase comes in two aspects: the phase shift and the phase consistency. The third characteristic of our sound signature is really in the materials. Most speakers today use a diaphragm material that’s paper or plastic; there are certainly more exotic variants as you get into really expensive speakers.

But the majority of reasonably priced consumer speakers use a paper or plastic diaphragm. What you want out of a speaker diaphragm is something that’s stiff and rigid but lightweight, because you want the material of the diaphragm to be stiff enough that when you drive it really hard, it doesn’t go nonlinear. You want that whole diaphragm to stay consistent and not muddy the audio. But what you’ll see with paper or plastic, because they’re very pliant materials, is that when you drive it really hard, part of the diaphragm goes up and part of the diaphragm goes down. This is a concept called speaker breakup, and it will muddy the audio. It’s most present in the mids and the highs, not so much in the lows. We, being a monolithic semiconductor speaker, have a speaker diaphragm that is silicon. And silicon is 95 times stiffer than paper or plastic. So you have a stiff and rigid material pushing up and down, generating sound. It does not go nonlinear, it does not muddy the sound. So you’re getting more pristine mids and highs than you would get from a conventional speaker.

John Koetsier: Interesting. Amazing. Very cool. The audio engineer in the video that you shared talked about pistonic pressure, which he said you typically get with many different headphone or earbud solutions. What is that? Why don’t you have it?

Mike Housholder: Sure.

In a conventional speaker, that paper or plastic diaphragm is typically isolated, or sealed, front to back. The front of the speaker and the rear of the speaker are not exposed to each other. So in the front chamber of that speaker, the part that’s connected to your ear, you have the pistonic motion of that speaker diaphragm, that coil and magnet pushing up and down. It’s pushing air to generate sound, but it can also create some pressure buildup in the ear. That’s why some earbuds have a vent in the front chamber to vent out some of that pistonic pressure. But even with that vent, it can still lead to fatigue over time. If you’re going to watch a movie on a plane for two hours, or be on your earbuds for multiple hours, it will lead to fatigue. I think everyone’s felt it. What’s unique about our speakers is that, again, we’re dealing with the micron-level precision of a semiconductor process. We can do what, for conventional speakers, is a no-go: we can actually integrate micron-level slits in our speaker diaphragm. So as that pistonic pressure builds, there are little vents opening up, and any pressure that’s building up leaks out the back. What we’ve observed in our own testing, and our testers are working with our speakers every day, with them in our ears, is that we can keep these earbuds in our ears longer without those senses of fatigue.

John Koetsier: So, sounds super cool. What’s the path to market dominance, and is it related to the announcement that you’re going to make?

Mike Housholder: We’re at the leading edge of that right now. Yeah, we’ve been in the market with product since 2020, and we completed our production qualification with our fab and our manufacturing partners in 2021 and 2022. So those production speaker chips are now in the process of being integrated into consumer products, and those consumer products are now reaching production. There are a few products out in the market today using our speakers, more on the niche market side. We’ve got some hi-fi audio earbuds, probably thousand-dollar products, whose makers believed enough in the sound quality of our speaker to say that this is different and it’s worthy of a $1,000 price point. Those are in the market now, so you can buy in-ear monitors for hi-fi audio with our speakers in them. There are some hearing assistance products on the market, hearing aids with our technology. But in November, our first TWS customer, true wireless stereo earbuds, will reach the market with our speakers. So we’re at that cusp of getting into high-volume, mainstream consumer earbuds. That is happening in very short order.

John Koetsier: So if somebody wants to check it out, wants to try it, maybe they want to buy a thousand-dollar pair of headphones, or maybe they just want something that’s a hundred bucks, 50 bucks, whatever.

Are there any brands that have this in the market right now that they can check out?

Mike Housholder: Yeah, absolutely. So on the hi-fi audio side there are two premier-grade in-ear monitors: one from a US company called Singularity Audio, a very high-end in-ear monitor at about a $1,500 price point. And then there’s another Asian in-ear monitor company called Ceramic that also has an in-ear monitor with our MEMS speaker in it. So those are premium products, high price point, really for the audiophile who invests in audio equipment. But our interest as a semiconductor company is high-volume business and reaching that mainstream consumer.

There’s a forthcoming product announcement for the middle of November from Creative Technology, a well-known brand in both consumer audio and PC and gaming audio. They are releasing a true wireless stereo earbud with our MEMS speakers. They will be the first MEMS-speaker TWS brand on the market, and that is coming to market at a very consumer-friendly price point.

John Koetsier: Very cool. Interesting. And have you patented this technology? Is it possible for others to do the same? Or if this takes off and everybody starts demanding this, and Apple wants it in their AirPods and everything else, do they have to get it from you?

Mike Housholder: Yeah. So we’ve been very aggressive in patenting all of our innovations; we have well over 110 patents granted.

We’re a five-year-old company and we already have 110 patents granted covering all aspects of our technology, our process, our manufacturing methods. So we’ve gotten good freedom of operation for ourselves and for our customers. With that said, there are different ways to design the product, and this is a large enough market that we would expect to have competition. So I wouldn’t say that we’re going to be the only source in the world for this stuff; that’s not going to be true. But we’re certainly ahead of the market, and we’ve got sufficient protection to have freedom of operation.

John Koetsier: , you’re obviously looking at earbuds and headphones, , first, but there are speakers all over the world. There are speakers all over the place. And as we, , add intelligence to just about every product we have, , sometimes the ability to speak to it is handy. And certainly with Alexa devices and Hey Siri at risk of getting Siri upside here, or other things like that.

, there, there are speakers all over the world. , do you have grand visions of being. In everything. 

Mike Housholder: Yes. So the North Star of the company was not just to reinvent personal audio speakers. The North Star of the company is to reinvent loudspeakers. So it’s the speaker in your phone, the speaker in your watch, the speaker in your TV, your smart speaker, your car, your home entertainment system. We want to touch every corner, nook, and cranny of the speaker market. The easier lift for us, the fastest path to market, is getting close to the ear. So starting in personal audio was that easier lift: get to market, get the revenue base going,

and continue to fund R&D for the bigger lift, which is to produce full-bandwidth audio in free air and free space from a little thin semiconductor chip. So you can imagine the physics challenges that have to be overcome to achieve that. So we’re taking things in a logical, staged approach to open up different corners of the market as the technology matures.

John Koetsier: As you do that, this looks interesting, and I know that’s not your initial focus, but if you look at a Sonos or some of the other big stereo brands, I’m assuming that you can scale up the physical size of your chip (not the chip per se, but the bit of silicon that’s doing the resonating) and make it work in a larger environment.

Mike Housholder: Yeah. So this is where the fundamentals of semiconductors, instead of being a benefit as we’ve talked about, actually become more of an inhibitor. The logical approach would be: if I’ve got a six-inch midrange or a six-inch woofer, okay, I’ll replace that with a six-inch semiconductor speaker.

And pretty soon you’ve consumed an entire semiconductor wafer, and financially and fiscally, that just won’t make any sense. Semiconductors are good when they’re small. So that presents a physics problem, which is: how do you produce sophisticated sound in free air in really tiny packages?

And again, if you look at the side profile of a conventional, say, tower speaker in a home entertainment system, that speaker has some depth to it. Why does it have depth? It needs displacement to push air. Okay, so fundamentally, a semiconductor will always be at a disadvantage from a physics perspective.

You don’t make semiconductors thicker; they’re always going to be really thin. So you will never have the displacement of a big free-air speaker. This gets to a forthcoming product announcement that we’re going to be making in November around our Cypress technology, in which we are reinventing the transduction mechanism.

John Koetsier: I would like to know what the transduction mechanism is. 

Mike Housholder: Conventional speakers, coil-based speakers, even the speakers from xMEMS that exist today, work fundamentally on a push-air transduction mechanism. You have an actuator that pushes a diaphragm, moves air, and generates sound. But that displacement is fundamental to your ability to generate sound at distance: more distance, more displacement. We’re never going to have that displacement advantage. We need to find a different way to generate sound. And in a hundred years, there hasn’t been a product that generated sound in a different way that could produce equal or better sound. So we are moving to a methodology, or a principle, of ultrasonic amplitude modulation.

So we are using all the advantages of MEMS semiconductors, which are speed, uniformity, and consistency, to basically build an ultrasonic modulation-demodulation scheme: move outside of the audio frequency spectrum, operate in ultrasonic regions to modulate and demodulate, and then extract. You basically modulate a series of air pulses to the original audio signal, then you demodulate the ultrasound and extract the audio.
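
A rough software sketch may help clarify the amplitude-modulation idea Mike describes. This is not xMEMS’s actual signal chain (their demodulation happens acoustically, through MEMS valves, not in code), and the carrier frequency, sample rate, and envelope filter below are illustrative assumptions:

```python
import math

SAMPLE_RATE = 192_000   # Hz, high enough to represent an ultrasonic carrier
CARRIER_HZ = 40_000     # illustrative ultrasonic carrier, above human hearing

def am_modulate(audio):
    # Ride the audio (assumed in [-1, 1)) on an ultrasonic carrier:
    # the carrier's amplitude envelope tracks (1 + audio sample).
    return [(1.0 + a) * math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
            for n, a in enumerate(audio)]

def am_demodulate(signal, half_window=12):
    # Envelope detection: rectify, then smooth with a centered moving
    # average spanning several carrier cycles but much less than one
    # audio cycle.
    rectified = [abs(s) for s in signal]
    out = []
    for n in range(len(rectified)):
        lo, hi = max(0, n - half_window), min(len(rectified), n + half_window + 1)
        out.append(sum(rectified[lo:hi]) / (hi - lo))
    return out

# A 1 kHz test tone: modulate up to ultrasound, then recover the envelope.
tone = [0.5 * math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(960)]
recovered = am_demodulate(am_modulate(tone))
```

The recovered envelope tracks the 1 kHz tone (scaled by roughly 2/π, the mean of a rectified sine) even though everything “transmitted” sits at 40 kHz, which is the essence of the sound-from-ultrasound idea.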

John Koetsier: So it sounds very sci-fi.

Mike Housholder: It sounds sci-fi, and actually sound from ultrasound has been in research mode since the 1960s, and no one has been able to achieve performance significant enough to commercialize it in a broad way, until now. What this ultrasonic modulation-demodulation scheme gives us that our current-generation speakers don’t is additional output. You’ve already listened to our first-generation speakers; that’s full-bandwidth audio. You heard the lowest frequencies to the highest frequencies. But if you want to move into freer-air applications or leaky applications, you need to displace more energy in the low frequencies.

So by moving into ultrasonic modulation-demodulation, we are now putting ultrasonic air pulses into the audio envelope. And low-frequency energy has a wider wavelength. So we can put more air pulses into a wider-wavelength, low-frequency cycle, and more air pulses generate more air pressure, which generates deeper bass.

John Koetsier: It sounds amazing. But I’m totally missing how the ultrasound, which I cannot hear because it’s above the frequency my ears can hear (especially my ears; I’m not 22 anymore), gets translated in my open-air environment, maybe my home theater, whatever, into something that I can hear.

Mike Housholder: Sure. So we’re getting an input audio signal from the sound source, whether it’s your phone, your laptop, or a receiver at home. We get the input audio signal, and our controller and driver will then basically implement an ultrasonic air-pulse scheme to map each ultrasonic air pulse to the frequency of the audio.

So we’re going to use that ultrasonic modulator to generate air pulses, but then we’re going to demodulate that ultrasound to extract that audio signal. So it’s just another way to generate air pressure to create sound, but outside of the audible spectrum.

John Koetsier: Are you almost virtualizing the speaker?

Is that a way you could describe this? Is the actual sound production almost happening outside of the speaker assembly, the new speaker?

Mike Housholder: No, no. There’s definitely control and amplification in a separate controller chip, but there’s still our MEMS semiconductor: all the MEMS structures are generating the ultrasonic pulses through MEMS, demodulating through valves, and letting the audio flow through. So there is still, fundamentally, a speaker that has moving parts in it, but it’s now a semiconductor and not a mechanical device.

John Koetsier: Fascinating. I look forward to seeing it. Thank you so much for this time, Mike.

Mike Housholder: Thank you, John.

 

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Can immersive storytelling via VR change history? Maybe 1 mind at a time …

immersive VR

In this episode of TechFirst, I chat with Emmy award-winning XR director Michaela Ternasky-Holland about whether immersive storytelling via virtual reality can change the course of history.

Documentaries already have.

The Day After aired in November 1983 and is credited with influencing U.S. president Ronald Reagan’s pursuit and signing of a nuclear arms treaty with Soviet leader Mikhail Gorbachev.

Can it happen again?

Using her VR documentary project, On the Morning You Wake, as a case study, Michaela explains how the deeply immersive nature of VR can change the audience’s perception of a global threat – nuclear weapons. She compares the engagement and impact of VR experiences to traditional 2D experiences, highlighting how the narrative and the audience’s sense of agency play key roles in creating quality engagement. The discussion further explores the future of immersive storytelling, addressing its potential and challenges in the technology field.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast

 

Episode synopsis: the power of immersive storytelling

(Note: this is AI-generated)

Immersive storytelling is revolutionizing the way we consume narratives, blurring the lines between fiction and reality. In this blog post, we explore the fascinating world of immersive storytelling through an interview with Michaela Ternasky-Holland, an Emmy award-winning XR director who specializes in creating experiences in VR. We dive deep into her acclaimed project, “On the Morning You Wake to the End of the World,” a three-part VR documentary about the threat of nuclear weapons. Join us as we uncover the power of immersive storytelling and its potential impact on audiences.

The Project and its Inspiration
The interview begins with Michaela providing insights into the genesis of her project, explaining how it originated from Princeton University’s Science and Global Security program. The goal was to create a world-changing documentary that could shed light on the effects of nuclear weapons. Michaela reveals how immersive technology, especially virtual reality, was essential in making the audience feel a sense of intimacy with the subject matter. The project aimed to activate the audience and take them on an emotional journey rather than simply providing facts and information.

Overcoming Accessibility Challenges
Accessibility is a crucial factor in the success of any VR project. Michaela discusses the challenges of making VR experiences comfortable and approachable for users. She mentions the need for carefully managing the logistics of VR spaces, ensuring the audience’s comfort, and minimizing waiting times. Training docents or volunteers to properly communicate with users and create a welcoming atmosphere was also essential. Michaela emphasizes the importance of ensuring that users feel safe and comfortable not only with the technology but also with discussing intense topics like nuclear weapons.

The Impact of Immersive Storytelling
One of the core topics of discussion is the impact of immersive storytelling. Michaela shares fascinating insights into her research, revealing that users who experienced “On the Morning You Wake” through VR were more likely to engage with the topic, explore further information, and feel empowered to take action. Comparing the effectiveness of the VR experience with a 2D film version, Michaela explains how VR evoked more positive emotions, instilled hope, and made the audience feel like they could make a difference. The immersive nature of VR created a stronger emotional connection and increased engagement.

The Future of Immersive Storytelling
As the interview progresses, Michaela and John Koetsier, the interviewer, speculate on the future of immersive storytelling. They ponder the challenges of mass distribution and accessibility, acknowledging that VR technology is still evolving and perfecting its form. Michaela highlights the importance of integrating VR into people’s lives in a productive way, similar to how smartphones became integral to our daily routines. They discuss upcoming technologies such as smart glasses and immersive projections, which may shape the future of storytelling.

The Versatility of Immersive Storytelling
In the final section, Michaela and John explore the diverse possibilities of immersive storytelling. They agree that the ultimate expression of storytelling depends on the purpose of the project and the intended audience. Michaela draws parallels between interactive games and linear experiences, highlighting how each medium caters to different emotions and objectives. She emphasizes that there is no single perfect apex for storytelling, but rather a wide range of possibilities that can be tailored to create specific impacts.

Conclusion
Immersive storytelling is an ever-evolving field that holds immense potential for engaging audiences on a deeper level. Through our interview with Michaela Ternasky-Holland, we gained valuable insights into the power of immersive storytelling and its ability to evoke emotions, drive engagement, and effect change. As technology continues to advance and accessibility improves, we can look forward to a future where immersive storytelling becomes a mainstream medium, enriching our lives with new perspectives and unforgettable experiences.


Meet the man who made the first cell phone call ever in the wild

The first cell phone call all started with a stolen car.

In 1983 Chicago resident David Meilahn’s car was stolen. He bought a new one, a Mercedes Benz 280SL 2-seater. But then he needed to replace his old radio-phone … and the sales rep told him there was something new: a cellular phone. He was one of the first few to be selected, then won a race to place the very first cell phone call by a customer, which ended up being from Soldier Field in Chicago, IL, to Alexander Graham Bell’s granddaughter in Germany.

This is his story, along with the story of Stuart Tartarone, the AT&T engineer who helped build that system and still works for the company to this day.

(Subscribe to my YouTube channel here)

AI summary: the first cell phone call

The script titled “First cell call” is a conversation featuring the first commercial cell phone user, David Meilahn, and an engineer who helped develop the technology, Stuart Tartarone. The conversation is hosted by John Koetsier, and captures the historical journey of cellular technology from its birth in 1983 to modern times.

They discuss early mobile technology like the radio telephone, the development of cellular technology, and David’s experience as the first person to ever make a commercial cellular phone call.

AI transcript: chatting with the man who made the first cell phone call as a customer, and the engineer who made it happen

The year was 1983. President Ronald Reagan proposed the Star Wars initiative. Mario Bros. was just released in arcades. Rent was $330 a month. Ford Mustangs cost $6,500. A gallon of gas was 96 cents. And you could buy a brand new Timex Sinclair color computer for just $179.99. It was fall in Chicago, October 13, and the setting was Soldier Field, the stadium the NFL’s Chicago Bears had started playing in back in 1971. Fourteen cars were lined up for a very unusual race: the race to make the first commercial cell phone call in the history of the world.

One man won that race and he placed a call from Soldier Field in Chicago to the granddaughter of Alexander Graham Bell in Germany. 

This is his story. Along with the story of the engineer who helped make it all happen.

John Koetsier: What would it be like to be the very first person in the entire world to use a new technology that would end up utterly revolutionizing everything? Hello and welcome to TechFirst. My name is John Koetsier.

Today is a super special day for TechFirst.  We’re literally going to speak to the first person who ever made a cellular phone call.

The very first cell customer in the world. It was a car phone, of course, and he still has that phone, by the way. We’re also going to chat with an engineer who helped build and commercialize that very first cell phone service. Joining us are David Meilahn, the very first cellular customer, and Stuart Tartarone, who grew up taking phones apart and eventually built and launched AT&T’s world-first cell phone service.

Welcome David. Welcome Stuart. Thank you. Thank you. Good to be here. Good to be here as well. Thank you guys so much.

I got to say, the average age of TechFirst guests just rose a little bit.

Stuart Tartarone: So I’m glad. We don’t want to talk about that, but I guess it must have.

John Koetsier: My friend, you were building cellular networks in the 1970s, so you’re not 25 anymore,

Stuart Tartarone: but I guess not.

Maybe not.

John Koetsier: David, I want to start with you. How did this come about? What happened? Did you see an ad in a paper? Did somebody talk to you? How did you learn about this opportunity?

David Meilahn: It all started with the theft of a car. I had my car stolen. And for business, I had a radio telephone, a good old-fashioned radio telephone, which was very expensive to buy and very expensive to pay for minutes, and not the easiest thing in the world to use, but it was extremely efficient, all things considered.

So my car got stolen in 1983, and I bought a new car. I immediately wanted to get a phone because I really missed it. So I went in to purchase one and they said, we can do one of two things. You can do a radio phone again, or you can get what’s called a cellular phone, which, actually, I had never heard the word before.

And that’s a brand new system that’s going to be coming up, and they’re hoping to put it on line in the next three months. So this was the middle of the summer of ’83. So I made the decision that I’d rather be on the cutting edge than on the back end of an old system. So I said, I will do that.

They said, we’ll install the equipment, it’ll sit in your car for three months, and then we’ll turn the system on. And I said, we’ll see if it happens in three months, knowing what normally happens. Unbelievably, within about a month or two, they called and said, how would you like to participate in the kickoff of the cellular system?

We happen to be the first place in the nation where this is going to be kicked off. And I jumped at it because it sounded like a lot of fun. They said, we’re going to have it at Soldier Field, which was perfect, because I lived on a boat in Burnham Harbor in Chicago, which shares the same parking lot with Soldier Field.

Nice. I could literally walk to the event, so to speak. So anyway, on the day of the event, it just so happened it was my birthday, October 13th, so that made it a doubly blessed day. The result was that they ended up having a race, with, I believe, 14 cars, to kick off the first official cell phone call.

The race had the 14 cars lined up side by side. And they also had the technicians: each technician that actually installed the equipment in each person’s car was lined up to run a 50-yard dash. When they ran the 50-yard dash, they had to get the keys from the owner of the car, unlock the trunk,

and put in the final chip, I’m calling it a chip, I’m probably mislabeling it, that activated the system. And it was a cell phone. It wasn’t like a cell phone of today. It was a big box, just like a radio telephone, in the trunk of your car that powered, I kept calling it a princess phone, but a little phone, in the car.

So my technician lines up and he says, Dave, I’ve got some bad news and some good news. What’s the bad news? The bad news was, I’m going to be the last guy to the car. He was in his mid-30s, so he was an old man for technicians; all the rest were young 20-year-olds. And he said, but I’m going to have the chip in first; I’ll be the first one to install it. And then he held up the chip, and I believe Stu’s going to correct me, probably, but I believe it had about 20 prongs on it, and they were about three-quarters of an inch each. He said, they’re going to bend them, and they’re going to make it impossible to get it in efficiently.

John Koetsier: Was this a SIM card? The very first SIM card?

Stuart Tartarone: It was called the number assignment module, but as David said,

John Koetsier: a lot bigger. How big was this? Oh, that’s so big. Not huge, but not

Stuart Tartarone: like a SIM card today.

John Koetsier: Now we have micro SIMs and we have eSIMs, which … Back to you, David. Did he win the race or did you win the race?

David Meilahn: So anyway, as he said, he was the last guy to the car, and he was the first guy to get this plugged in. And they had to give the keys to the owner of the car. The owner had to unlock their car door, get in, and start their car. He gave me a great piece of advice.

Jeff told me, when you get in the car, just sit there and look at the phone, because it’s going to light up like a Christmas tree. So once all the lights stop flashing, make your call, and don’t do it before, or you’ll trip the system up and it’ll have to reset itself. So I listened to him and did exactly what he said.

And our call made it to what was a head car that was bridged across the other 14 cars. That’s where the first phone call went, and then from there it was forwarded to Alexander Graham Bell’s, I believe it was his granddaughter, in Germany. Wow. So that was, technically, the official first call on a commercial cell system.

John Koetsier: So David, I have to ask a question. You talked about having a radio phone. I have no idea what that means. I understand the concept: it was perhaps a phone system that went over radio waves, perhaps to some central switching station that then interfaced with the landline system. But what is a radio phone?

David Meilahn: I think Stu’s going to know more than me, but it was literally radio waves going to, I think, a central station. There were operators involved in it, and it’s so long ago that it’s hard to remember exactly, but they converted it basically to a land system.

John Koetsier: Wow. Stuart, what is a radio phone?

Stuart Tartarone: Basically, long before cellular, probably dating back to the 1940s, there was mobile phone service.

And there was a transmitter in people’s cars, in the trunk, with a big handset and device in the passenger compartment. And as David said, it operated almost like broadcast TV or radio. There was one big antenna in metropolitan areas that broadcast over the entire area. But the big deal about it

is that there were only 10 or 12 channels. So think about metropolitan areas like Chicago: after 10 or 12 calls were made, the system exhausted. Wow.
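
Stuart’s point about exhaustion can be made concrete with the Erlang B formula, the standard teletraffic model for call blocking on a fixed pool of channels. This is an illustration added here, not something from the interview, and the traffic numbers are assumptions:

```python
def erlang_b(channels, offered_load):
    """Blocking probability for `channels` trunked lines carrying
    `offered_load` erlangs of traffic (Erlang B), computed with the
    standard iterative recurrence to avoid huge factorials."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Pre-cellular mobile service: roughly a dozen channels for a whole metro
# area. If subscribers offer 12 erlangs of traffic (say, 144 users each on
# the phone 5 minutes per hour), about one in five call attempts is blocked.
print(erlang_b(12, 12.0))  # ≈ 0.20
```

Cellular’s answer was frequency reuse: instead of one big antenna, many small cells each reuse the same channels, multiplying capacity across the metro area.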

John Koetsier: Wow. Okay, so David, if you tried to make a call on your radio telephone, before you had the cellular phone, would it sometimes just fail because the channel was occupied, or would you talk over somebody?

David Meilahn: It probably had some of the characteristics of a party line, from the standpoint that it had limited use, but it wasn’t so limited that it was a frustration. You just lived with the way the system worked; you understood it, and it really worked fine.

John Koetsier: I want to stick with you, David.

We’re going to go to Stuart in a moment and talk about the technology, the process, building the project, coming up with it, all that stuff. David, did you have a sense at this point that you were doing something that was world-changing? That was revolutionary? That would literally culminate in what we have these days:

these tiny little devices in our hands that don’t have to be in the trunk of a car. Did you have that sense?

David Meilahn: Not at all. It’s just amazing what has happened, because my sense back then was, here’s the newfangled phone system, and technology moves on with the speed of light, so we’ll see how long cell phones last. And one of my thoughts was, my gosh, they’re going to physically dot the United States with towers, and that’s how we’re going to talk to each other. So I thought it was a little unusual. Compared to satellite technology, and I’m no expert, it seemed like you should use satellites, but I understood there’s a whole other layer of difficulty there.

So I just thought it was going to be another method of telephones, and in 10 years we’d be using something different.

John Koetsier: Really interesting. You took the first step on the moon and it was just the next day.

It’s interesting right now, actually: there are a lot of people who are trying to do cell phone service, quote unquote, via satellite. So maybe that is the next step, but it’s 50 years later. Stuart, let’s turn to you. You grew up taking phones apart. You’re the typical tinkerer.

How does this work? Take it apart. Your parents must have loved you especially for that, but you became an engineer and you started work on this project. What did you think about it when you first heard about it?

Stuart Tartarone: Going back to what you said, yeah, I did take phones apart, except we weren’t supposed to do that,

because it was back in the old Bell System, where everything was controlled by the Bell System and by the local telephone companies. And I fully expected… I went to an engineering school in Brooklyn, close to where I grew up in Queens. It’s now part of NYU; it was called the Polytechnic Institute of Brooklyn.

And in those days, recruiters came to campus. And the Bell System would always show up with a recruiter from the local telephone company, New York Telephone; from Western Electric, which was our manufacturing wing; and from Bell Laboratories, which was later to become AT&T Bell Laboratories, which was our technology, our R&D organization.

And I was fully expecting to talk to someone from New York Telephone because, as a New Yorker, I didn’t expect I was going to move out to New Jersey. But lo and behold, I was only given the opportunity to speak to someone from Bell Labs. Afterwards, I was unhappy about that and spoke to my advisor, who said words to me that many people of my generation heard, and those words were: if you are given the privilege of working at Bell Labs, you have no choice but to accept.

So I said, whoa, okay, I can listen to that. I went off in those days to what was called a plant interview and drove down from New York City to Holmdel, New Jersey, not too far, got off the Garden State Parkway near Middletown, and I felt like I was in farmland; this was the sticks to me in many ways.

It’s not too different today, if you do have occasion to come here. I made a turn and drove down the road, and there was this tower that was coming out of nowhere. I was later to find out it was modeled after a transistor: it was the water tower that supplied water to the Bell Labs complex at Holmdel.

And that complex was just this beautiful building, which was designed by Saarinen, who also did major architectural works like the TWA terminal and the Gateway Arch in St. Louis. He designed this building, and you walked into it and you looked around; it’s just amazing. And the way the interviews were set up at Bell Labs at the time, we got to talk to four different organizations.

And the first organization I got to talk to was an organization called Mobile Systems Engineering. And the interviews in those days were a lot different than today. You weren’t put through tests; you weren’t made to design something on the spot. It was a conversation. To me, it was probably a lot more refreshing than what we do today.

And I spoke to all the people there, and when I got done, the last person I spoke to was a gentleman by the name of Joel Engel, who said to me, now you’re going to talk to three other organizations, and left this subliminal message in my head: nowhere else will you ever get the opportunity to work on something brand new, something that doesn’t exist today.

And he held up a book, which, I can’t find my copy of it, was the technical report that AT&T presented to the FCC on what they would do if they were given the opportunity to create a new cellular communication system. They said, this doesn’t exist, and if you join us, you’ll have the opportunity to work on this.

Wow.

John Koetsier: So you joined, you had that privilege, you took that opportunity, you came on board. Did you start working on this project immediately out of college, or did it take a couple of years until you got embedded in it?

Stuart Tartarone: No. That’s how it worked then, and how it works today. You walk in the door, as I did at the end of July 1972, and you’re right into doing something.

And the opportunity I was given out of school was to work with AT&T Marketing on a market survey of the opportunity for cellular communications, so I could apply some of my statistical background to looking at data and working with this company. And they did a very professional survey; this was 1973 by the time we got out there. Survey questions went out, there were focus groups in major markets, and I got to sit on the other side of the glass and listen to customers talk about what they might do with it.

And the conclusion from that survey was that there was really no market for such a service. This was 1973. No market for such a service.

John Koetsier: Amazing. And not necessarily shocking or surprising because when you come out with something entirely new, entirely different, you have to invent the market, right?

You have to show what is possible, and you have to get people to say, oh, interesting, I didn’t know I wanted that; in fact, I didn’t want that until I understood what it was. So you’re in a big organization, a massive organization, basically preeminent in its day. They have this new idea, this new technology, but the market surveys aren’t promising.

They aren’t saying, wow, this is a multi billion dollar opportunity. Jump on it right now. How did it actually start? Did somebody take a big risk?

Stuart Tartarone: The simple answer is yes. But at the time we were this very large company called the Bell System, a million employees strong, and there were pockets of revenue to invest.

That was the great thing, if you look back over the history of Bell Labs. If that hadn’t existed, a lot of the technologies of the digital and wireless age wouldn’t have been invented. AT&T Bell Laboratories led to the transistor, to digitization, information theory, solar panels, charge-coupled devices. And cellular technology. All of those were invented by Bell Labs, and all of those were invented in New Jersey. And there was this thought about what would need to be done, and investments were put in place. Think of the transistor.

The transistor, which is the basis of everything: millions of transistors in this device today. But the technology that was created by AT&T was given to the larger industry to build on and use. Think about the first big thing transistors were used for: transistor radios, which came out in the 1950s.

John Koetsier: Okay, so you’re working there, you’re bringing out this technology, you’re about to launch it. David is unaware at this point, but you’re starting to work with the network and the installers and everything like that. Talk about the technology. I came to using a mobile phone when we had 3G.

And then LTE was a big deal, right? Which is essentially 4G, I believe. And now 5G is the thing. So give us a sense of where the technology fits on that scale.

Stuart Tartarone: Yeah, so I go back to what David and I talked about: the concept of one big transistor. The big underlying concept of cellular was to be able to use low-power transmitters and take the frequency spectrum, which is a scarce commodity, scarce back in those days and scarce today, and reuse it many times. If you’re broadcasting at low power, you can reuse that spectrum.

That’s the whole basis of cellular technology. And this concept was brought forth by two people, Doug Ring and W. Ray Young, back in the forties. They actually wrote a paper about it. And Ray Young became my first department head at Bell Labs when I started, but the idea went back to the forties.
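The frequency-reuse idea Tartarone describes can be made concrete with a small illustrative calculation. The hexagonal-cell reuse distance formula D = R * sqrt(3N) and the numbers below are textbook idealizations, not the actual AMPS parameters:

```python
import math

def reuse_distance(cell_radius_km: float, cluster_size: int) -> float:
    """Co-channel reuse distance for idealized hexagonal cells: D = R * sqrt(3N)."""
    return cell_radius_km * math.sqrt(3 * cluster_size)

def system_capacity(total_channels: int, cluster_size: int, num_cells: int) -> int:
    """Simultaneous calls across the service area.

    Each cluster of N cells splits the spectrum, but every cluster reuses it,
    so capacity scales with the number of cells, not just the spectrum.
    """
    channels_per_cell = total_channels // cluster_size
    return channels_per_cell * num_cells

# Toy numbers: 70 channels of spectrum, 7-cell clusters, a 49-cell city.
# Without reuse you serve 70 calls at once; with reuse, 10 per cell * 49 cells.
print(system_capacity(70, 7, 49))        # 490
print(round(reuse_distance(2.0, 7), 2))  # 9.17 km between co-channel cells
```

This is the low-power insight in miniature: shrink the cells and the same 70 channels serve many times more callers.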

So you have this concept; how do you implement it? Back in those days, you had cell sites (we call them base stations now; originally cell sites) to provide the signal. You needed a smart central controller, and Bell Labs had invented electronic switching, which came out in the sixties: stored-program-controlled switching machines.

And the other element of it was a device in people’s cars. But one of the big things that happened just as I joined, to talk about enabling technologies, was that the microprocessor was born at Intel. Without that, none of this would have been possible; it was the enabling technology, the game changer.

That made it possible for us to develop and deploy the system.

John Koetsier: Talk in a little more detail about what the innovation was in cellular networks. We heard David talk about the radio telephone he had, and you said, hey, you could get like 10 or 12 conversations going at the same time, and then you were out of spectrum.

You’re out of bandwidth. What was the key innovation in cellular technology? You mentioned the low power. So it’s low power, it’s local, you’re talking to a local cell tower, and somebody else could be on the same frequency five or ten miles down the road, talking to a different tower. Was that the only innovation, or were there other innovations that allowed thousands, millions to use phones?

Stuart Tartarone: Related to that, think about it: it was a vehicular service at the beginning. And as cars drove around the city, you had to track where those cars were and recognize when they were driving out of the area they were in and needed to be served by another cell site.

This is the concept called handoff, handing off from one cell site to another. And the ability to do that, to track that, to receive the signal, to work out the algorithms by which you’re going to tell this device sitting in someone’s vehicle to switch from one channel to another channel, that was a huge innovation. And part of it has to do with the distributed nature of the system.

One of the things I got to work on very early, as all these things were coming out, was the distribution among different elements of the system, from the switch to the cell site controller to the mobile controller, and how to optimize that in the best way so it would support this growth.
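The handoff decision Tartarone sketches can be illustrated with a toy signal-strength comparison. The hysteresis margin and dBm values here are invented for illustration; real handoff algorithms, then and now, are far more involved:

```python
def should_hand_off(serving_dbm: float, neighbor_dbm: float,
                    hysteresis_db: float = 3.0) -> bool:
    """Hand off only when a neighbor cell beats the serving cell by a margin.

    The hysteresis margin avoids 'ping-ponging' between two towers when a
    vehicle drives along a cell boundary and the two signals are similar.
    """
    return neighbor_dbm > serving_dbm + hysteresis_db

# A car driving away from its serving tower toward a neighboring one:
readings = [(-70, -85), (-78, -80), (-82, -78)]  # (serving, neighbor) in dBm
decisions = [should_hand_off(s, n) for s, n in readings]
print(decisions)  # [False, False, True]
```

The third reading triggers the handoff: the neighbor is now more than 3 dB stronger, so the network directs the mobile to switch channels.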

John Koetsier: So landline phones were analog: the signal was transmitted as analog and recreated as sound in somebody else’s ear.

Were the first cell phones digital? Did you send the voice digitally?

Stuart Tartarone: No, because again, let’s roll back to the seventies. As you said, it was analog voice. And behind that was a whole digital logical process, with commands coming from the switch and from the cell site to direct the mobile what to do.

So you had the analog voice, and you had the control structure, which was all digital.

John Koetsier: Is that control structure what opened the door for SMS, for texting?

Stuart Tartarone: We’re talking many years later, so I’m not sure I would say it was that basis.

By the time texting came around, I think we were into 3G, and there were lots of changes from 1G to 2G to 3G. That was all brand new.

John Koetsier: Would you have characterized the transmission speeds or the transmission technology in the first days that David had his cellular car phone as 1G?

Stuart Tartarone: Yes, it was 1G. But the quality of it, the voice quality, because we’re talking about someone’s vehicle with a high-power transmitter in the trunk, was exceptional. David can provide the feedback on this: it was as good as a landline.

David Meilahn: It was crystal clear. It was excellent.

John Koetsier: Wow. So it was better in some ways than what we have now. Certainly in the days of 3G, voice quality wasn’t amazing. Maybe 4G as well, huh?

Stuart Tartarone: And we knew it at the time, because think about it: today’s handsets are low power, and you just could not get the same quality that you could with a high-power transmitter in someone’s car and a similar receiver.

John Koetsier: So David, you were in at the very beginning of a revolution. Did you keep upgrading? Did you stay on the cutting edge? And you still have that phone today, right? You still own that first phone?

David Meilahn: Yeah, actually there was an event, I think about 10 years after 1983… after that went off, my apologies.

John Koetsier: It’s all good. Everybody,

David Meilahn: what can you do? It’s a new phone system too, I’ve got to figure out how to silence it.

John Koetsier: I think if you hit it with a hammer, then it will stop.

Stuart Tartarone: That’s a necessary device.

David Meilahn: Those darn landlines. Anyway, about 10 years after the cell phone started being used commercially in 1983, they went to digital.

And they had an event to mark going digital, and they dragged me out for it. I actually consigned my phone as a donation to the Museum of Science and Industry. So I still have the car the first phone call was made in, but the equipment is now at the Museum of Science and Industry in Chicago.

John Koetsier: Amazing. Amazing. What was the car by the way?

David Meilahn: It was a 1983 Mercedes-Benz 380 SL. A nice, fun little car to run around in now. Not as easy to get into as when I made the first call.

John Koetsier: Excellent. David, as you look back… you talked about how at the time you didn’t have the sense that this was revolutionary, that you were the first person to make a cellular phone call, that this would take over the planet, literally. But as you look back, and as you use your mobile phone today, what does it mean to you?

David Meilahn: I think it’s amazing how it started, what the average person thought about it, and the different milestones, I’ll call them, for the user. Whereas it was an instrument basically only for business or the wealthy, it has over the years progressed to bag phones, the brick phone, then handheld cell phones, flip phones, and then all of a sudden smartphones.

And the actual phone call part of a telephone is not necessarily the most important piece anymore. It’s that everybody’s glued to their smartphone. And it can be bought by everybody in the world; it does not have to be only for business or the people who can afford it. Everybody can afford a cell phone.

And they use them like crazy,

Stuart Tartarone: right?

John Koetsier: Amazing. Amazing. Stuart, maybe some closing remarks from you, because you’re still working at AT&T. I’m not going to ask your age, but you’re no spring chicken. This has been the work of your life in a lot of senses, and I’m sure you’ve done a million other things as well, but is this the biggest thing you’ve done in your career, launching this and being part of it?

Stuart Tartarone: Most definitely. So, talking about first cell phones, this was one of the very first cellular phones that existed. This was the control unit that went into those vehicles. I’ve had this with me all these years. A lot of people don’t get this opportunity: to come out of school and, back to what I said earlier, work on something brand new that didn’t exist, that people questioned the market for.

And here we are today with the proliferation that’s occurred, going from this to this. What a huge transition. And yes, I’ve gotten to work on lots of exciting things in my career, from there to personal computers to LANs, and today, as we virtualize our network, on tools to improve how we develop software, platform engineering. I even got to work on one of the first internet banking applications. But there would be nothing like what I got to work on in my first 10 years with the company.

John Koetsier: Amazing. And what a privilege, as you were told by your advisor way back in the early 1970s.

And I have to echo that today. What a privilege to chat with you, David and what a privilege to chat with you, Stuart. I thank you for your time and thank you for sharing your story. It’s fascinating. It’s part of history and I really appreciate it.

Stuart Tartarone: Thank you so much. Good to see you.

David Meilahn: Thank you. Good to see you.

 

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Can generative AI make rockets launch faster?

generative AI in the enterprise

Generative AI won’t be building Falcon 9s or new space shuttles just yet (wait a few years!). But can it help with all the work that goes into running an organization that builds the future?

According to Kendall Clark, CEO of Stardog, yes.

Generative AI that democratizes access to data, insight, and knowledge speeds up organizations, and that can help with launching spaceships, or anything else. For NASA, a generative AI solution is apparently helping the team do in days what used to take weeks.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast

 

AI summary: using generative AI to speed up the enterprise

The script is a conversation between John Koetsier and Kendall Clark, CEO of Stardog, during a technology podcast. The discussion revolves around the role of generative AI in speeding up complex processes within large organizations. Kendall Clark discusses how Stardog leverages generative AI for data management, differentiating their approach from other companies like Salesforce by focusing on on-premise and hybrid multi-cloud environments. He also explains their strategy to prevent hallucinations or errors in AI-generated responses. The conversation concludes with the significance of creativity in AI applications.

AI transcript: generative AI and institutional knowledge systems

John Koetsier: Can generative AI make rockets launch faster? Hello and welcome to Tech First. My name is John Koetsier. I’ve done a ton of TechFirsts on generative AI. It’s getting pretty good. OpenAI just announced it can see, listen, talk back. What about the enterprise? What about companies, big organizations?

Specifically, what about NASA? A generative AI solution is apparently helping NASA take what used to take weeks down to days. Thanks, of course, to generative AI.

To dig in, we’re joined by Stardog CEO Kendall Clark. Welcome, Kendall. Hi, John. Thanks for having me. Hey, great name, Stardog. That’s an awesome name for a company. My first question is: so is NASA going to Mars next month, thanks to generative AI?

Kendall Clark: When NASA returns to the moon and goes to Mars for the first time is more a function of the U.S. Senate, to be honest, and those budgets and that sort of thing. It’s obviously an expensive endeavor.

But to answer your question, I think generative AI can help speed up a lot of complex things. In this first wave (there are going to be a bunch of waves), most of the interest among the educated public, or the people who are paying attention, and it’s not everybody, we should remind ourselves, is largely because of its impact in what we call the B2C space. Help me make an invitation for my children’s seventh birthday party, and make it have dragons and fairies and something else, and out pop these amazing photos.

We’ve all played with that, and I love it. I’m addicted to it. Frankly, I can’t make a deck now without some generative AI images, so much so that my employees tease me about it. Stardog is a great brand partially because it lends itself so well to it; I have a folder called astronaut dogs on my computer.

That’s got God knows how many variations; I’m obsessed. In the enterprise, I think the first impact from generative AI will be in what we can call question answering: the movement from query writing to question answering. I would say that something like, and I’ll just make this up, 60 percent of the value, more than half of the value, that enterprises get from information technology at all is in the area of answering questions of data.

And the dominant way that’s happened to date is: there is some data in a database somewhere (there are a lot of those, in fact), and someone who’s either very smart, or someone who’s spent a lot of money on a product like Tableau, a BI tool, manipulates an interface in some way.

And that results in a simple to very complex query, often a SQL query. That SQL query goes to a system, gets executed, an answer comes back, and value is achieved because the answer says false instead of true, or it says 17,000 instead of minus 17,000, or it says John instead of Kendall, or whatever.
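Clark’s query-writing-to-question-answering pipeline can be sketched in miniature. Everything here is made up for illustration: the table, the question, and the translator, where a hard-coded lookup stands in for the LLM that would generate SQL from a natural-language question plus schema context:

```python
import sqlite3

# A toy 'test readiness' table standing in for enterprise data (names invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE components (name TEXT, test_ready INTEGER)")
conn.executemany("INSERT INTO components VALUES (?, ?)",
                 [("heat shield", 1), ("guidance unit", 0), ("parachute", 1)])

def question_to_sql(question: str) -> str:
    """Stand-in for an LLM: map one canned question to a SQL query.

    In a real system an LLM would generate this; a lookup keeps the
    sketch self-contained and deterministic.
    """
    canned = {
        "which components are test ready?":
            "SELECT name FROM components WHERE test_ready = 1 ORDER BY name",
    }
    return canned[question.lower()]

def answer(question: str) -> list:
    sql = question_to_sql(question)               # the 'query writing', automated
    return [row[0] for row in conn.execute(sql)]  # execution and answer

print(answer("Which components are test ready?"))  # ['heat shield', 'parachute']
```

The point of the sketch is the compression Clark describes: the analyst, the BI tool, and the back-and-forth all collapse into the translation step.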

John Koetsier: Or, more likely, you discover: oh, shoot, that was actually not the right question to ask the data, I should have asked this question. And then you send that question back to the data analyst. The data analyst goes, why did this idiot ask me the wrong question in the first place, then runs that query, and that takes another cycle of time.

And then you go back and forth, and finally, five or six levels down, you dial in what you think might actually be the real answer you want.

Kendall Clark: You’re a hundred percent on it. And what that means is there is space, organizational friction. People were employed because of this friction, to make it work anyway, but there’s space between my intent of wanting to ask a question of the data, through the process that gets it translated into a query, either by a piece of software or by some people or both, typically, and then it gets executed and comes back to me, and oh, it wasn’t quite right. For the folks in your audience who are IT-minded, or history-of-IT-minded, this is a little bit like what it was like to write code in the sixties.

Or even in the seventies. You had two chances per day to ask questions, to make changes to your code base, because of the compiler and the toolchain and the nature of the languages and the slowness of the computers back then. You start your morning run; it took four hours to compile.

You went and found something else to do. You came back at midday: oh shit, I forgot a semicolon or something like that. Run it again; the day is shot. But now we use these super-fast computers, high-level languages, dynamic stuff like Python. I can ask a million questions.

So the iterative cycle has sped up tremendously for programmers. For generative AI in the enterprise, the first big impact (my prediction, or not even a prediction, what I think we’re already seeing) is the movement from query writing to question answering, which is this process we’ve been talking about. It works, but it’s slow, it’s error-prone, and it’s got a lot of extra people in it.

There are a lot of people between the data and my intent. That’s all going to get compressed. With question answering, the LLM takes the place of all that space. If you’re asking, what is the assessed test readiness of this subcomponent of the manned return-to-the-moon space capsule, and give me the full lineage and traceability of all the… that’s a complex question that’s literally rocket science, right?

And now, not just technical people but something like everyone, knowledge workers, can interact with the data, I will say, directly, from their experiential point of view, although there’s obviously a big stack of stuff in between. That’s going to compress all those cycles, much like the rise of dynamic languages did for programmer productivity.

We’re going to see that for what we used to call general office workers, now business analysts, knowledge workers, people whose job depends on interrogating the data. In normal technology, interrogating is a metaphor, a fancy word. In the LLM era, the question answering era, interrogating is not a metaphor anymore. We’ve concretized the metaphor; we’ve made it literal. You’re literally going, what about this? What about this? What about this? Firing questions at the data by typing them out. As you say, OpenAI is extending this to a verbal thing, and that’s fun, but I don’t think it’s going to really make a difference to the answers you get back.

But yeah, that’s the first thing we’re going to see, which is the original nub of your question. So in that sense, it can help everything NASA does, everything all of our customers do, everyone who’s engaging with this technology. It can help make their jobs go faster and better, because effectively, as I like to say, it democratizes access to data.

John Koetsier: It’s interesting, because when I got the pitch for you to jump on the podcast, what I immediately thought of was enterprise knowledge management. There have been huge products and projects in enterprises around knowledge management for, I want to say, decades, and I don’t even think that’s an exaggeration.

What do we know? How do we categorize what we know? How do we put it in a place where people can access it? How do we search it? How do we surface it? And as we were pre-chatting before we started recording, you were saying, hey, that’s not our space. We’re not about that sort of static data, static knowledge and documents and stuff like that.

We’re about data. Talk about the evolution of knowledge management and how you fit into or contrast with it.

Kendall Clark: Yeah, it’s a fair question. I’m cynical about knowledge management, like lots of people; that’s the average view. As we were talking before, you signaled some of that yourself.

Let’s start with the fair thing to say. It has made sense at all points since, let’s say, 1970 for big companies to make some investments in what we should really call library science, because that’s really what it is. And I don’t mean that as a joke; I played it for a joke a little bit, but I mean it seriously. Librarians serve a really useful function in our society by organizing knowledge.

And I know no one in your audience who’s under, what would you say, 40 (certainly no one under 30) will know what these next words mean. But remember, you used to be able to go to a library. It was a place, right, in the world; it’s not the mall. And it had knowledge in it.

Primarily in the form of books, but other forms as well. And there were these people there who organized that knowledge, and they sat there and waited all day for you to come in and ask a question. And they loved to help you answer that question. That was a way our society worked. It was this kind of socialist vision, frankly, strictly speaking: knowledge was a common asset that we had created as a civilization.

We should all have equal access to it. You even used to be able to call them on the telephone and say, I’m writing a story about the winter migration of carrier pigeons in Finland, I don’t know, something, whatever. And they’d go, okay, there’s a book about that, come get it. They’d be very excited.

And then you would read it and you would know stuff. The web ruined that, or destroyed it, or changed it, altered it forever. But in some ways what’s happened with the web, what Google has done, is find a way to make the machines do a lot of that work.

And so with respect to documents, websites (the web is just a collection of documents, after all), we mostly self-serve. We go to the search bar; this is what everybody knows how to do. That’s replaced calling a librarian, but we’re satisfying the same human need. So with respect to knowledge management, doing that for large bodies of information inside a big enterprise, my hat’s off.

It’s on the side of the angels, right? I’m thinking, be cynical about that. What I think we can be cynical about is there were, there was always this obvious. Obvious to me, collision course between what we typically call data management, enterprise data management, ETL databases, data warehouses, and knowledge management.

Those needed to, as I like to say, be on a collision course, smash together, mutate into some new thing. And it’s obviously been the case for the last, say, three or four years that that’s happened. LLMs in particular make it, I think, undeniable now that this question answering capability we were talking about extends to all the other traditional jobs to be done in data management: data modeling, data mapping, data quality, discovery, metadata management, inference rules.

And then the traditional realm of data science. Machine learning has eaten all those things in a way that’s now really accessible to everyone, not just to Google. And that’s going to forever change the practice of data management, just like the web forever changed the practice of library science and knowledge organization. That’s my non-cynical take on what’s happening.

John Koetsier: It’s really interesting, actually: if you could somehow study and understand what percentage of human knowledge, let’s say, resided in dead trees, and then how that moved to documents, electrons on hard drives, and how that transition is happening as a greater and greater percentage of our operational knowledge transitions to more dynamic forms of knowledge, in databases that are measuring ongoing processes, real-time in a lot of senses.

That’s an interesting transition: what percentage of the world’s knowledge is in different places. Certainly the percentage in live databases, growing, measuring live activity, that you want to query because you want to know the status of that live activity, is certainly growing, and being able to access that easily is really impressive.

Kendall Clark: Okay, so this is a super interesting question. You forgot, or didn’t mention, I should say, a third important source, which is the knowledge that only resides in and between people.

Yeah, exactly. That, for a variety of reasons, people didn’t need to write down, haven’t had time to write down, or it’s just too fluid and doesn’t fit in a database. You don’t really put stuff into a database until it has a particular kind of ossification of form, by which I mean: what we traditionally mean by a database is a relational database, a particular data model.

It’s not the most agile, flexible thing. In fact, it’s rigid. Relational databases were typically intended for basically accounting data, and accounting data, whatever else it is, is not dynamic and fluid and creative. The values may change (the status of an account changes), but the rules of the structure, the GAAP rules, are pretty fixed. It goes back to what, 16th-century Florence, double-entry accounting. A lot of that stuff is really old and well understood. Then you jump ahead to somebody like NASA, literally trying to do rocket science, getting humans back and forth across the solar system, and they’re learning new things every day.

They’re right on the cusp, the boundary, the border between knowledge and ignorance; on the other side of what we know is the black, scary nothingness of ignorance. And if you’re trying to peck away at that, push that border out a little bit every day, you may need different techniques for data management.

What I think is interesting: you add to that the fact that, while you say the big historical trend is to go from books to electronic form, all of the forecasted growth in enterprise data over the next 10 years is not in what we call structured data, databases. It’s in semi-structured and unstructured data.

So, like, this conversation 20 years ago was two guys talking. This conversation five years ago was a thing you could watch on YouTube. This conversation now, or any time in the future: I push a button at the end, you push a button, out pops a transcript. We stick that into some kind of knowledge platform, and all the entities we mentioned (I mentioned migratory patterns of birds in Finland, we mentioned libraries, you mentioned those accounting rules) pop up as nodes in a graph, the knowledge graph, with connections between them.

Then, okay, this is a conversation of a different kind, but if we’re doing this for work, this might be work product, right? So that transition about where the knowledge is, what they call in academia the sociology of knowledge, the production of knowledge, is thinking about knowledge getting produced like thinking about cars getting produced.
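A minimal sketch of the transcript-to-knowledge-graph step Clark describes. Here hand-listed entities stand in for a real NER and entity-linking pipeline, and co-occurrence in an utterance stands in for real relationship extraction; all names are illustrative:

```python
from collections import defaultdict

def build_knowledge_graph(mentions: list) -> dict:
    """Link every pair of entities mentioned in the same utterance.

    A real pipeline would run entity extraction over the transcript;
    this sketch just links hand-listed entities by co-occurrence.
    """
    graph = defaultdict(set)
    for utterance_entities in mentions:
        for a in utterance_entities:
            for b in utterance_entities:
                if a != b:
                    graph[a].add(b)  # undirected edge a <-> b
    return dict(graph)

# Entities pulled (by hand, for illustration) from two utterances above.
mentions = [
    ["carrier pigeons", "Finland", "libraries"],
    ["libraries", "double-entry accounting"],
]
graph = build_knowledge_graph(mentions)
print(sorted(graph["libraries"]))
# ['Finland', 'carrier pigeons', 'double-entry accounting']
```

The "libraries" node bridges the two utterances, which is the point: once entities become nodes, connections emerge across the whole conversation.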

It’s an industry; there are processes, inputs and outputs, and you can measure it. There’s been this whole big field, probably since the eighties, in academia studying the production of knowledge, which is what we’re talking about. And when we talk about knowledge management and data management colliding and fusing into a new thing, it’s about taking a lot of those techniques, a lot of the new algorithmic insights, and helping big companies manage the data they produce better.

I’ll stop with this; I don’t want to give you a filibuster here. It’s interesting to think about companies’ competitive landscape vis-a-vis one another. Take two big global pharmaceutical companies. Maybe the most differentiated assets they have are their data sets.

You could maybe swap all the people (the people at one pharma can do the jobs at the other, with some winners and losers at the margins), but the easiest way to destroy a company, as a thought experiment, is to take all their data and swap it with their nearest competitor’s.

So you come to work on Monday and, let’s say, all of GSK’s data belongs to Novo Nordisk and vice versa. You haven’t destroyed anything; every byte is preserved. But you just swapped, and what happens? They’re destroyed, right? So I think it’s difficult to overemphasize or exaggerate the importance of managing data and managing knowledge. And generative AI, given that it produces text, and all of this knowledge and data management ultimately more or less ends up in text.

Let’s say images are a form of text, right? Close enough. The applicability of these techniques to this area is pretty endless.

John Koetsier: Yeah, it’s an interesting space, and I just came back from Salesforce’s big Dreamforce conference in San Francisco, and they’ve added a ton of generative AI to Tableau, which you already mentioned, and to other products as well.

Their vision also is that you can query your data in natural language: anyone can do it, everyone is a data scientist, all that stuff. And that’s super powerful. Of course, if you are going to buy Salesforce, I’m pretty sure you’re in for significant charges for each user, and significant challenges there. But it is a compelling vision: all the data that company produces is at your fingertips, that you have control over, that you have access to, that you have permissions for, and you can query it, you can know what’s going on. You’re a sales rep, you’re a sales manager, you’re a product manager, you can instantly know all this stuff. How does your vision differ from that?

Kendall Clark: Well, at a high enough level, it doesn’t. That’s exactly what we want to do. But I think there are differences that matter. First off, I would say Stardog is really focused on financial services, pharma, and manufacturers of a certain size, unlike Salesforce. Now, Salesforce is an interesting example because they did not start off as an enterprise data management company. They became an enterprise data management company because strategically they decided to move in that direction. They make a lot of money and, frankly, they have excess capital they need to deploy, and they could have done many things. But moving into data management makes sense because they do control a strategically critical corporate data asset in the CRM, and that gives you some leverage. So it’s clear, with the acquisition of MuleSoft a few years ago, six years, whatever, that was a big signal: hey, we’re going to be a data management company.

But I think probably the biggest differentiation between our vision and theirs is that Salesforce is really a cloud company. They’re really best at managing and connecting data that exists in the cloud. But companies still have a lot of data in what we call on-prem, not in the cloud. And our focus has always been on that data that either hasn’t gone to the cloud or will never go to the cloud.

So Stardog is a cloud platform, but it can also operate on-prem. It’s a Kubernetes platform, which, for the technical folks in your audience, just means that Kubernetes basically replaced the Java virtual machine as the dominant enterprise delivery mechanism. Our customers operate our platform both on-prem and in any cloud environment. That just means Stardog can be adjacent to data no matter where it is, not just the part of the data that’s in the cloud. Even if, in the next 10 years, let’s say 80 percent of all corporate data resides in the cloud, 20 percent of all enterprise data is still a very large amount of data.

And it needs to be connected. What we’re really focused on is connecting data and then making it accessible, with this LLM technology we’ve been discussing, in what everyone calls the hybrid multi-cloud: that part of the data that’s on-prem, that hasn’t moved to the cloud yet.

Or again, my favorite statistic: 85 percent of all businesses, irrespective of size, have data assets in more than one cloud. Now, for most businesses, that means they have Salesforce and HubSpot, right? Which is fine. Those are different solutions, different clouds, but really, that problem is going to get solved for SMBs and small businesses by those vendors.

But it’s true of big businesses: our big banking customers have data everywhere, in every conceivable location and format.

John Koetsier: It’s not comforting that the financial industry has this data everywhere.

Kendall Clark: That doesn’t mean they’re not controlling it. But here’s what I mean by everywhere: the most globally significant banks are unlike, say, Facebook in one important regard. Facebook is like a teenager of a business, and globally significant banks are like grandparents. They’re, on average, what, 75, 100, 150 years old. So they’ve existed longer than computers have existed, which means if you take a cross-sectional slice of a big bank, you’re looking at the archaeology of the last 70 years of IT.

They started with mainframes, or whatever was even before mainframes, boxes of punch cards. And they’ve got one of everything, and there’s legacy all over the place. They have data everywhere in that sense. That may also not comfort you, which is fair. But that’s a tough problem to solve.

You’ve got a system that’s running. It works. It meets requirements. It’s just old. And you meet some IT people whose smart view is: don’t mess with that. Leave it alone. It’s running. Why mess with it? And then somebody else, equally smart, says: no, we need to modernize that. Nobody’s right or wrong there.

It’s just a hard problem. Not to come on your show and make a brief for banks, but you get my point. They’re in a tough spot when your organization’s 150 years old. They are in a

John Koetsier: tough spot. And that’s why we see the rise of neobanks. But we are straying way far afield, so we’re going to pull it back here.

So you’re building your solutions so that organizations can query their data. That’s great. NASA is using it; others are using it. How do you solve hallucinations? That’s obviously a challenge with generative AI, and that’s a problem you cannot have in your scenario. It’s a problem I’m okay with if I talk to OpenAI: does it pass the sniff test? I can double-check it. Bard, Google just added some double-checking of what it says as well. How do you solve hallucinations?

Kendall Clark: Yeah, look, there’s a cheating answer to this question, which is what we’re doing, and then there’s the hard research question of making the LLMs stop hallucinating.

I won’t address the latter; that’s a research question. It will get solved, I suspect, to some appreciable level of quality, precision, and recall. The first thing to say is that LLMs are not databases, and they should not be treated like databases. The way we solve it, which is somewhat cheating, is that we don’t use LLMs in Stardog as a source of data.

We use them as a mechanism to discern human intent, which is a fancy way of saying: what is the person talking about in their natural language? Whatever that language is. One of my co-founders, my CTO, was born in Istanbul, so he speaks Turkish and English. And I said to him at some point, how’s the LLM working in Turkish?

As a joke. I knew they worked for many natural languages, but not necessarily for all of them. And he just showed me a demo, this was this summer, and it was just straight-up Turkish, and it just worked. It was amazing. We use the LLM as a way to figure out what the person is talking about.

We translate that into a query, or a search, or a hybrid query-search, or a data modeling piece, or a data mapping piece, or a rule, or something that then gets executed by our platform. And that cuts out the chance for hallucination. What it means is that sometimes the LLM will get the human intent determination wrong.

But then that just means a wrong query. Now we’re back in the case you described earlier: someone expressed a business question, some other mechanism translated that into a query and didn’t get it right. That’s frankly no worse than the status quo. So what happens? You just redo it.
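Clark’s “LLM as intent translator” pattern can be sketched in a few lines. This is an illustrative reconstruction, not Stardog’s actual implementation: the `translate_intent` stub stands in for the real LLM call, and the names and schema are made up. The point is that answers always come from the database, so a bad translation yields a wrong query you can redo, never a hallucinated fact:

```python
import sqlite3

def translate_intent(question: str) -> str:
    """Stand-in for the LLM call: map natural language to a query.

    In the pattern Clark describes, an LLM (not this stub) produces the
    query text; the LLM is never asked for the data itself.
    """
    if "how many customers" in question.lower():
        return "SELECT COUNT(*) FROM customers"
    raise ValueError("could not discern intent")

def answer(question: str, conn: sqlite3.Connection):
    sql = translate_intent(question)  # LLM discerns intent -> query text
    # Guardrail: only read-only queries are ever executed.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("refusing non-SELECT query")
    return conn.execute(sql).fetchone()[0]  # the answer comes from the DB

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (name) VALUES (?)",
                 [("Ada",), ("Grace",), ("Alan",)])
print(answer("How many customers do we have?", conn))  # 3
```

If the translation step misreads the intent, the failure mode is a visible wrong query or an error, exactly the “just redo it” case described above.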

John Koetsier: And I’ve been in that scenario quite frequently, and usually it’s not the data analysts who got it wrong. Usually it’s me who asked the wrong question.

Kendall Clark: Not you specifically, but yes, almost always. And I tell the team this all the time: an LLM is not magic. It cannot determine intent when no crisp intent exists yet. But that’s okay.

People often find their way by asking a, frankly, not very good question, and then a slightly better question, and then we iteratively improve, and then we discover what our intent was all along. More likely we create the intent in this iterative process and then retroactively attribute it to ourselves. It’s a psychological thing we do, and that’s fine.

Oh, what I meant all along was this? Probably not. It’s what you mean now, and here’s the answer. Fine. Nobody’s throwing rocks at that; it’s just normal human stuff. So in our approach, we don’t ask the LLM any questions where its hallucinations can bother us, right?

Cool. And so in the near term, that’s the best solution. It’s use-case specific and context-relative: that’s what you want to do in a regulated industry where the questions really matter. But if I want another cool picture of an astronaut dog in Midjourney or something, I want that slightly random, quote-unquote wrong component, because that’s really the source of creativity.

John Koetsier: Absolutely. And creativity is a wonderful thing, just not always when you’re querying data from your own company.

Kendall Clark: That’s exactly right.

John Koetsier: Kendall, this was interesting. It went places I didn’t expect it would. Thank you for taking the time.

Kendall Clark: Thanks, John. I appreciate it.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

DEI in AI: Is diversity, equity, and inclusion a solved problem in AI?


Is diversity, equity, and inclusion (DEI) in AI a solved problem?

I’ve written a lot of stories lately about AI. AI is critical to our future of automation … robots … self-driving cars … drones … and … everything: smart homes, smart factories, safety & security, environmental protection and restoration. A few years ago we heard constantly how various AI models weren’t trained on diverse populations of people, and how that created inherent bias in who they recognized, who they thought should get a loan, or who might be dangerous.

In other words, the biases in the people who create tech were manifesting in our tech.

Is that solved? Is that over?

To dive in, we’re joined by an award-winning couple: Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC and featuring celebs like Neon Deion Sanders.

Enjoy our chat with full video here, and don’t forget to subscribe to my YouTube channel …

Or, listen on the podcast

DEI in AI: solved problem?

Not a subscriber yet?

Find a platform you like and let’s get connected.

Transcript: fixing biases in AI

Note: this is AI-generated and likely to contain errors.

John Koetsier: Is equity, inclusion, and diversity in AI a solved problem? And are there new challenges with generative AI? Hello and welcome to TechFirst. My name is John Koetsier. I’ve written a ton of stories lately about AI because AI is critical to the future of automation. It’s critical to the future of robots, self-driving cars, drones, and everything else.

Smart homes, smart factories. Safety, security, environmental protection, restoration. A few years ago, we were hearing a lot about how various AI models weren’t trained on diverse populations. That created inherent bias in who they recognized, who a model maybe thought should get a loan, who might be dangerous.

In other words, the biases in the people who created the tech are manifested in our tech. What a shock.

 

Is that solved? Is that over? To dive in, we’re joined by an award-winning couple, Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC, featuring celebs like Neon Deion Sanders.

 

If you’re not into football because you’re into tech: he’s a Hall of Fame football player. And they’ve won multiple awards over the past few years for their agency. They’re in a unique position to see the impact of AI on everyone, not just white-ass English-speaking tech workers. Welcome! How are you guys?

 

Hey, so pumped to have you guys, thank you for taking the time. Let me just start here. Why did you guys start digging into AI?

 

Dawn Wade: From a multifaceted perspective, I’m a researcher by nature. So when new things come out, I always look for gaps within them. And it was just a personal interest to me. But then, as you see how our country is moving, it always makes me wary when things aren’t created in, and I’m going to use the word, a wholesome way, but created by an individual. Because, having a computer engineering degree, I’ve recognized you’re only as good as the data, or you’re only as good as the input that goes into a system. So it made me question: how is this being developed, and how does it know enough about me to represent someone like me?

 

So that was the first facet as to why it was very interesting to me. But the second one is that we work in advertising. So it relies on a creative nature, and creative has to be very nuanced to people: their habits, what they like, what they don’t like, how they talk, where they’re from. And that’s very hard to capture on a day-to-day basis in marketing.

 

So we were very interested in how that works in something where we’re trying to resonate with a real person. How can a computer resonate emotionally, to convince you to buy something or to do something? So two very different ways in, but it’s continued to be interesting to us.

 

John Koetsier: What have you found when you’ve looked into generative AI and how it’s created people and imagery in your space, in advertising and marketing? Has it been compelling? Has it been interesting? Have you seen challenges with it?

Stacey Wade: I think definitely mixed challenges in it. I mean, you know, it’s similar. When I think about AI, it’s so new. I feel like we’re just barely creeping up the hill with this thing. But what stands out to me is something that was happening probably in ’21, and even when we started the agency, which was representation in the industry. What we’re noticing is that the representation in those images, the output of those images, is very similar. There’s a confluence with what we experience in agency life, which is: are we represented in a way that is authentic to who we are?

 

And is the voice and tone also authentic? And that’s the one thing where there seemed to be some juxtaposition, which was: no. And that was something that was a little bit scary for me personally as an artist. I’m not a computer engineer, I’m not Dawn. So I look at things completely differently.

 

It’s very abstract for me sometimes. So to see that shown in a way that looked very familiar to when we started the agency 21 years ago, seeing that show up in AI, was a little concerning.

John Koetsier: 21 years ago? You guys don’t look old enough to have started the agency 21 years ago. Is this generative AI in action right here?

 

You’re deepfaking yourself.

 

Let’s dig into that, Stacey. You talked about representation. You talked about authenticity. Dawn talked about authenticity. What were you seeing that was not adequate representation? What were you seeing that was not authentic?

Stacey Wade: Yeah, I just think tonality. A lot of those conversations that were happening in 2021 about AI show up today in the generative experience. You’re seeing it show up in the advertising. So for us in advertising, last year it was all about, you know, the metaverse; this year it’s all about AI. And the people that are using these tools

 

sometimes bring those biases into the tools that they’re using, and the output of that is showing up, and the output doesn’t look like me, it doesn’t have my voice. And I think that’s the part we’re trying to address as an agency. Luckily, we have left brain and right brain on this call, you know? Somebody that understands that space and has the background to put very logical, pragmatic thoughts around AI and what that looks like for us, and somebody to take a more artistic approach to understand tonality and touch and feel, making sure that from a culture standpoint we’re showing up and not being erased or dumbed down in a way where AI is basically pulling these inherent biases into the images. That, I think, is something we’re trying to aggressively have a conversation about, and aggressively be a part of that conversation. So that, the same way we came into an industry that left us out, we can start to include our thoughts and tone and authenticity as part of the output of AI.

 

John Koetsier: I think I’m wondering what that looks like.

 

Go ahead

 

Dawn Wade: The thing is, as people, we are imperfectly perfect, right? So if you’re looking at an image that’s AI-generated, all the strokes are going to be right. It’s going to be balanced. It’s going to be symmetrical, right? But a true artist is going to have his signatures within that.

 

His brush, how he strokes, is going to be slightly imperfect. And I think we as people are okay with being imperfect, but AI is looking to achieve a perfection that I think takes away from the authenticity. That’s my layman’s way of saying it. The nuances of that person or that artist are some of the things that somebody like Stacey is going to value, because he’s an artist. But in that generated image, the eyes are going to have that slight line, and that line means something on the eye. There are just certain attributes that AI is not going to see, and the beauty a real person would look for in something is going to be missing from the AI-generated content.

 

John Koetsier: So Dawn, AI’s gift to you from generative AI, from Midjourney and Stable Diffusion, is people’s hands.

 

They’re getting better, but they’re not great. But I know that’s a different thing than what you’re talking about. I’m trying to dig into this thing about authenticity, because I think many people might approach that from a diversity perspective, but also from an artistic perspective, right?

 

And I’m wondering, you know, do we have people sitting in a high-rise in New York City, or somewhere in LA, saying “give me an urban scene” to Midjourney or Stable Diffusion, and then getting something very stereotypical? Or what’s happening?

 

Dawn Wade: That’s exactly what that is. It’s what you think you saw on TV, or saw in some magazine, or saw in some way that you think is representative of this particular scene. And it’s not, until you’ve lived those experiences.

 

You can’t dictate that for the next person. And that’s where DE&I comes into the space, because there’s not a program that teaches us diversity, equity, and inclusion. These are lived experiences. I can speak as an African American woman, but Stacey’s experiences are going to be different because he’s not an African American woman.

 

He’s a man. And my Hispanic counterpart is going to have a different experience. But this isn’t just a race situation. When you look at the LGBTQ community, the disability community, those are nuances. They cannot be captured in AI appropriately. So, you know, when it started really ramping up in 2020, it was all about facial recognition.

 

But when it comes to other things, it goes deeper. You can’t do that from an AI perspective adequately, in a way that’s going to be representative of those communities, even amongst women. There’s a technical aspect of it, there’s also the community usage of what that means, and then who it’s developed for, right?

 

So if it’s developed for a consumer versus just an individual user, it’s going to have different nuances in it. So I think we can’t address AI with a broad stroke. It has to be chiseled in a way, if we want it to be sustainable and safe to use.

 

John Koetsier: What does that look like then? Because generative AI isn’t going away.

 

People are going to use it, and frankly, many of those models are open source; they’re out there. There’s a massive amount of innovation and creativity happening there, and it’s kind of a land grab. And there’s also this explosion of capability: people who could never create something like this artwork, or that scene, or these people, or whatever, are now doing it.

 

So the genie’s not going back in the bottle.

 

Dawn Wade: Absolutely not. But we have to get away from that “if you’re not first, you’re last” mentality. That’s what a lot of the software is; that’s what a lot of the platforms want: to be first to the scene. You have to get away from that mentality when it comes to AI.

 

You have to invest in the time of connecting with those that you want to target. So if it’s AI that’s targeted to a certain group or certain usage, you have to bring in people who are experienced or have those cultural nods and allow them to give you the inputs to get it right. Because the one that’s going to be long lasting and most successful is the one that’s going to get it right.

 

The one that’s first is not going to make it to the end point. So I think that’s the moment in which you need to pivot the mindset: you don’t have to be first to market, but you need to last in the market to make it successful, because you’ll be the one who focused on getting it right versus getting it first.

 

And I don’t think many are that way at this point.

John Koetsier: Stacey, talk about what that looks like. Talk about what that looks like for a brand that plans on being around for a while, plans on serving its community well, wants to connect to its community, and AI and generative AI look like a cheat code for boom, check, done, got it.

 

There we go. Talk about how you believe that the lack of authenticity in that will impact that brand over time.

Stacey Wade: I think that’s something, even when you remove AI, that brands struggle with today. So now you’re throwing in another level of complication with AI. Because, have you ever read that book, Blink, by Malcolm Gladwell?

 

Where he speaks about how you just look at something and you just know: even though it looks like me, there’s something just not quite, it’s not me. And I think that’s what, listen, just as AI is changing the landscape, let’s not get it twisted, the consumer is also changing. They’re changing really quickly, they’re becoming very smart, and they can see something that’s authentic.

 

They can sniff it out. They know. So as much as brands want to jump into AI, and we’ve seen brands jump into AI quickly, to Dawn’s point, you see brands making this quick charge in to be first. And what we’ve noticed is that they’re getting it wrong. And now you start to see them kind of take steps back and not want to be first.

 

And now they’re becoming laggards. So now they’re trying to figure out, okay, how do we actually do this in a way that’s authentic? And I think that starts with brands actually being authentic, so that they can understand the blink effect: okay, I know that this is real, I know that this is not real. I know that we need to make this as perfect as possible, but we need to also bring in those cultural nods.

 

So you need to bring in people that are able to see those nuances, able to understand tonality, able to understand that, you know, the hat is actually not a Detroit Lions hat, it’s actually Fear of God. Those are very small details that don’t show up in AI, because AI would take this image and make it into Detroit Lions, but it’s not. You understand what I’m saying?

Tigers, not Lions, I get it. But there is a nuance, a cultural aspect to the logo, that has to come through in the output. And a lot of what you’re seeing, you even mentioned it when you talked about urban communities: AI is inherently going to bake in some biases about what it views an urban community as. But my urban community is completely different than, say, your urban community.

 

So brands are going to need, as we say, “when in Rome, bring a Roman.” They’re going to need people that really understand these cultural nods and nuances, to protect them from themselves and to add a layer of authenticity that’s actually going to be beneficial, not only to them, but to the consumer they’re trying to reach.

 

John Koetsier: It’s pretty crazy challenging, isn’t it? Because our technology is a reflection of ourselves, and sometimes it amplifies bits of what we do and what we create. And if we look at our culture over the past 30, 50 years or so, and then you look at the token person of color in the TV show in the 70s or the 80s or whatever, and how that person was represented, or how that person needed to be represented, that’s the corpus of knowledge that AI is drawing on, whether it’s that or whether it’s just remnants of that.

 

Cultural detritus accumulates over decades and then manifests itself in my image of what somebody who is Black and grew up in Detroit is, versus your image. It’s such an insanely complex web of everything. How can anybody get it right?

 

Stacey Wade: You’re nailing it. But I think that’s where Dawn said something that almost clicked like a flag in my own head.

 

It’s the ones that are taking the time to not, you know, rush down the hill, but are actually taking the time to walk and understand what’s actually happening, so that they can give you the best output. Being able to curate is similar to how we curate our own agency, being able to bring different people into the agency.

 

It’s not a Black agency. It’s not a Hispanic agency. It’s not a white agency. It’s a culture agency. It’s being able to weave in these different cultures, to be able to slow it down fast enough so that you can speed up as the technology moves forward. So it’s not a matter of saying it’s going to go away.

 

We know it’s not going away, but we are saying that we want to offer up cultural nods and nuances to make it better.

 

Dawn Wade: But I think anything without checks and balances is dangerous. So when it comes to AI-generated content and things like that, it’s generating that, but what’s the check and balance?

 

When you get that output, do you then go and check to make sure that it’s going to resonate, or that it’s safe, or that it’s not offensive? Or is it assumed that because you did it using AI, it’s safe already? So where’s that safety check? Where’s that quality assurance that needs to take place? Those are things I want to hear from the developers, in terms of how that’s coming to market.

 

You need to have those safety checks, but time and budget often eliminate that, and that’s the reason for AI. So those are some of my watch-outs when it comes to that.

 

John Koetsier: Dawn, talk about how you see generative AI developing over the next couple of years. There’s a vast group of people who now have access, whether it’s open-source models or a model you have a subscription to, in Discord or on the web or in an app or something like that.

 

Millions, maybe tens of millions right now, are actively building stuff with generative AI. How do you see that developing over the next couple of years? And how do you want to impact that development, to make it something you think is a net positive globally?

 

Dawn Wade: There have to be checks and balances. Just like when you develop a new food or a new drug, there have to be checks and balances before you can, to me, infiltrate the world with it. And we don’t have that there.

 

And I think there are some government oversight groups being developed to do that. But part of that, you know, takes the fun out of it. So I think: if there were 10 million solutions, right, 90 percent of them may be great, but there may be 5 percent, or 1 percent, that could really set us back.

 

And I think that’s why we need oversight, because that 1 percent could really mess something up for us, whether it’s between our country and another country. When you can take somebody’s voice and their likeness and create a video that nobody with the human eye could know is fake, what happens as we’re debating or working with countries we don’t have the best relationship with?

 

And it looks like our president is sending a message that may not be true, or doing something. You can fake so many things, and you only have minutes to react. So that’s what I mean about checks and balances. And as a person who loves my individualism, I don’t necessarily love the thought of having oversight at that level. But knowing that it could be something very dangerous, that somebody could get hurt or killed, I think anything that crosses those boundaries, that could really hurt people, kill people, or represent something in a way it’s not intended, requires that, to my understanding.

 

John Koetsier: I think, honestly, everything negative that you just wanted to avoid there is going to happen, and is almost inevitable.

 

I think there are large language models out in the wild that somebody, somebody who’s a neo-Nazi, will train on that content. I think there are generative AI art models out in the wild that somebody will train on very racist ways of imagining how different people look.

I think you will have weaponized AI and generative AI and deepfakes globally, and I don’t know that there’s any solution. We’re going to have to invent some new technology. It’s funny: I think Elon Musk has created cultural vandalism with Twitter, and lots of other challenges, but he wanted to invent a new AI that will determine the meaning of reality.

 

Great! Whose reality? But I don’t know how we’re going to avoid this semiotic catastrophe, this dissolution of meaning, this destruction of truth, because I don’t know how we can possibly escape it. I hope somebody smarter than me has a plan.

Stacey Wade: I feel like you and Dawn could get together, and she reads the books on this. This is like her favorite subject matter.

 

So I feel like you all could talk about this all day, but I agree. That’s the part that scares me. Dawn, I kid you not, has been reading these books that speak to this, you know, grid hostile takeovers, for a long time. When we go on vacation, that’s one of the things: she always picks up a book.

 

John Koetsier: This is how she relaxes.

 

Stacey Wade: Yeah, a hundred percent. But you know, it was crazy two years ago, even, when she would throw this at me. You're kind of like, oh, it's a nice book. To see this come into reality, like some of the things that she, you know, we'll have conversations about, to see some of them actually become real.

 

It’s almost like watching the Simpsons, you know, how they actually like predict the future. It’s like, you’re seeing it happen in

 

John Koetsier: real time. Dawn, save us. You're the technologist. What are you going to do?

 

Stacey Wade: Somebody has got to do

 

Dawn Wade: it. So the thing is, if you can dream it, you can do it, and that can be scary when people don't have your best intentions at heart, you know. So I don't know that anybody has a solution, because, like you said, there are thousands and millions of solutions that are open source, for somebody to take hold of and customize. And 90 percent of the time it's going to be for good, but there are going to be some that aren't for good, you know. And I'm just not looking forward to that in any way.

 

John Koetsier: It’s a crazy, challenging world that we’re moving forward in. And I guess, um, I’m going back to Gandalf’s wisdom in Lord of the Rings. We have to live in the times that we’re in. We have to do the tasks that

 

Stacey Wade: we have. No, I’m going to have this one. I’ll do this one. So keep going.

 

John Koetsier: Do the best we can. Hopefully there will be some new technological solutions that will tag when something is artificially created. And I know that we can detect it right now, but it's going to get to a level where the human eye, as you were saying, cannot detect it. The human ear cannot detect it.

Right? We’re going to need some solutions that tag something that is real. Even as our definition of real changes technologically, uh, it’s a crazy world we’re moving into, Stacey.

Stacey Wade: It scared me a little bit. I mean, I’m excited about it, but I’m also scared the same, the same. 

Dawn Wade: Yeah. When you think about like where we were a couple of years ago when they’re like self driving cars, you know, like, and I’m like, Oh, that’s cool. You know, but now like it really can happen, but then you see, you know, one or two really bad stories and it makes you doubt the, you know, efficiency, like, well, who tested this or how did that?

So can you imagine something like this at such a humongous scale? You know, so I think that within our generation, and what's it going to be in 50 years, the landscape won't look the way that it looks now, you know, and that's exciting. But I think we have to have the foresight to get ahead of it.

And figure out how to set boundaries so that, you know, we don’t mess it up for ourselves or for our future generations. So we’re going to, somebody has to have that brain power and that oversight to do it. 

John Koetsier: Yeah, and government is painfully slow in all these things. So they’re like 10 years behind. Right.

We're going to have our generative AI "Senator, we run ads" moment. It's going to be, you know, 10 years too late, but hopefully someone will figure it out, and the companies, big tech, will also do the right thing. We'll see. Who knows? Go ahead, Stacey. We hope. We hope. We hope.

Absolutely. I gotta say, this has been fun. It's been interesting. It's been great. I usually talk to people who are inventing new technology. You're dealing with the consequences of it and using it, and talking about the impact of that is also very, very useful.

Thank you so much for taking some time out of your day. Thank you. 

Dawn Wade: Thank you so much. And I appreciate this.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Apptronik has a totally different approach to building humanoid robots

humanoid robots apptronik

Who will win the race to have the world’s first usable general purpose humanoid robot? I thought I knew all the companies making general purpose robots:

  • Tesla
  • Sanctuary AI
  • Figure AI
  • Fourier Intelligence
  • Agility Robotics
  • Boston Dynamics

I was wrong … there’s probably a bunch I don’t know. But one that popped up as interesting is Apptronik. They’re based in Austin TX, they’re partnering with NASA, and they’re building Apollo, a 5’8” 160-pound robot.

In this TechFirst podcast, we chat with CEO Jeff Cardenas. And we learn that he has a completely different approach to building a humanoid robot than probably every other robotic company out there. Keep scrolling for more, or hit the Forbes post for my take on one part of what Cardenas said regarding robotics and international competitiveness.

(Subscribe to my YouTube channel here)

TechFirst audio podcast: Apptronik’s ‘Apollo’ humanoid robot

Subscribe on your favorite podcasting platform:

 

Transcript: humanoid robots, work, and the future with Apptronik CEO Jeff Cardenas

John Koetsier: Who will win the race to have the world's first usable general purpose humanoid robot? Hello and welcome to TechFirst. My name is John Koetsier. I thought I knew all the companies that are making general purpose robots, right? There's Tesla, there's Sanctuary AI, there's Figure AI, Fourier Intelligence, Agility Robotics, Boston Dynamics.

I'm sure there's a bunch more that I don't know, but I know that I was wrong, because one that just popped up, that launched this week, is Apptronik. They're based in Austin, Texas. They're partnering with NASA. I want to hear more about that. And they're building Apollo, a five foot eight, 160-pound robot.

Here to chat is CEO, Jeff Cardenas. Welcome, Jeff. 

Jeff Cardenas: Thanks for having me here. 

John Koetsier: There’s so much competition right now. What is going on? 

Jeff Cardenas: It's an exciting moment, I think, for robotics and the world as a whole. We've finally reached an inflection point. And it's funny, because when we first got started, everyone told us not to do humanoids.

And now everyone’s getting into the race. But it’s been exciting and interesting to see it all play out. 

John Koetsier: Everyone is doing humanoids, and that’s a real challenge, right? There’s pieces that are known and capable. Shoulders, necks, maybe, right? Walking with a bipedal robot is not necessarily the easiest thing in the world, but it’s been done, and it can be done.

Really challenging parts, obviously in the brain and the hands and fingers, we’ll get into all that. Tell us about Apollo, which you’ve called the world’s most capable humanoid robot. 

Jeff Cardenas: Yeah, so Apollo is the result of many years of hard work and research and development. At Apptronik, we've built 13 different robots overall, in eight iterations on humanoids.

So we started in humanoids when we were still working in the lab at the University of Texas at Austin. Started working with NASA back then for the DARPA Robotics Challenge. Two of my co-founders, Dr. Nick Paine and Dr. Luis Sentis, were on the Valkyrie team. And basically, Apptronik was created in 2016 to commercialize the work out of NASA.

So back then, Valkyrie was… millions of dollars. It was 300 pounds, but it was one of the very first electric humanoid robots. So we felt like, hey, general purpose robots, more versatile robots, are going to be the future. Electric is going to be the way to go. And we want to build a commercial version of this.

And so we got it out into the world now, finally, seven years later, after many years of work to get here.

John Koetsier: It’s great to hear that because you launched with Apollo yesterday, right? And so the world wakes up and says, Oh. What’s this new company? What are they doing? Right? And it’s like that old phrase, like an instant success.

You just weren’t around to see all the work that went into it, right? So you built a bunch of robots already. You built a number of iterations on the humanoid robot. Talk about that journey a little bit. 

Jeff Cardenas: Yeah, for us, we always saw that this was really a technology problem more than a market problem.

I think a lot of entrepreneurs and other folks are looking to get into this space because they see the market opportunity. But for many years, the technology problems had to be solved to make it viable. So it's interesting to hear you say, walking, we can do that. That was not the case even five years ago. We had no idea how to do dynamic walking. Boston Dynamics was really 10 years ahead of academia in terms of the type of walking that they were doing. And everybody was trying to figure out how to catch up and do what they were doing for many years. And so, piece by piece, we had to solve these problems.

And the way that we viewed it was, this is a technical challenge, and we need to solve the key pieces that are needed to make this real. How do we go towards viable, commercial, general purpose robots? And we basically just broke the problem down and solved it from first principles.

So we started with electric actuation for humanoid robots; we've done over 35 iterations on electric actuators. Some of those are small, medium, and large versions of the same family of actuators, but a tremendous amount of R&D there. Elon's talked about the need for actuators for these robots. And that's been our body of work.

That was my co-founder, Dr. Nick Paine; his thesis in grad school was next-generation actuation for legged robots. And the electronics didn't exist. We needed more real-time communication, because you have a lot more sensing in these robots. And then certainly the software, which I'm sure we'll get into. But we started basically at the foundation, and we bootstrapped the company.

So I mentioned, it’s funny that everyone’s into humanoids now, because when we got started, everyone told us, do not build hardware, focus on the software, focus on the AI. And the problem was the robots didn’t exist. And we’re like, well, what are we going to put this AI on eventually as it matures and develops, we don’t have the robots yet.

So we have to build these systems. And so we bootstrapped ourselves, and what we would do is work for other companies. We've worked with several big automotive companies looking at things like humanoid robots, and we've helped them build their systems. We've built and delivered a variety of systems, including some of the robots for Sanctuary.

We partnered with Sanctuary in the early days, and they've been a great partner all along, and we built their first prototype for them. And each time we would build these robots, we would iterate, we would learn something new, all getting towards the point of building the robot we always ultimately wanted to build, which is Apollo.

And so, the thing that I love about robotics is, you got, at the end of the day, you can talk about what a robot’s going to do, but you have to show it in the real world. So our philosophy has always been show versus tell. So we didn’t have a need to really get out there and say, Hey, we’re going to do this.

Our view was like, well, let's do it, and then we'll show off what we do, and we'll let our work speak for itself. And so we really had our heads down over these years, just trying to get this stuff working, solving the technology problems, iterating pretty quickly as well. And we've had robots walking for seven years.

We've iterated; we've built full systems in three months. So it's not that we've been taking our time doing this. We've actually been cracking these problems, and we're at the point where we've met the threshold where everything is good enough, which I think of like the personal computer in 1982, right?

It’s like, it’s the beginning of, a lot of things had to build on each other and converge for this breakout moment to happen, but we’ve got that work behind us. And I think the robot we’ve put out there we’re really proud of and excited to see where it goes. 

John Koetsier: I love that approach. I really love that approach because you didn’t put a guy in a suit and walk up on a stage and say, here’s our robot.

Right? That has a couple hundred year history. I'm glad you didn't do that. Super interesting in terms of the approach. Before we get into the details of the robot, give us the high-level specs. I mentioned five foot eight, 160 pounds. I think it's a five-hour battery life. Is it a swappable battery?

Is it a hook it up and charge? How fast can it go? What can it do? 

Jeff Cardenas: Yeah, so 5 ft 8, weighs 160 lbs, it can lift 55 lbs, and it has a 4-hour battery initially, and it's swappable. So we're targeting 22 hours a day, 7 days a week, uptime. It can also be tethered as well, and opportunity-charged, like with autonomous mobile robots.

It is fully electrically actuated. As I mentioned, we've had a tremendous amount of iteration in that space over the years. So we think that we've got a really unique solution for performance and cost, right? Performance at cost. There's a trade-off where you can purely focus on performance, and to me, Atlas is an amazing machine.

The Boston Dynamics robot … it's like a Formula One car. It's really performance-optimized, but it's very difficult to mass-manufacture Atlas; it's got custom hydraulics and other things in it. And so what we've really focused on is, how do we get performance at cost? How do we find the right trade-off and ultimately build a commercial product that we can build for less than $50,000, which is our target?

And it can still have the performance that's needed to do the work that we need it to do. And then through COVID, we learned a lot about the supply chain. And so a lot of the ideas that are now in Apollo are about getting around supply chain constraints, so that we can really scale this thing up into big volumes, and we don't have any single-source vendors.

In terms of what it does: initially, we're focused on what we call gross manipulation, as compared to dexterous manipulation. And we've learned, because we've built a lot of robots over the years, that dexterous manipulation is very difficult. We have a ton of respect for folks that are going after that in this space.

But it’s a really difficult problem. And the exciting thing for us at this juncture is we don’t have to solve that problem to get these things out into the world. Turns out there’s a huge shortage in logistics. And a lot of those tasks are just moving boxes or totes from point A to point B. And that’s something that we know how to do now.

And so that's where we're going to start, but then the beauty of a general purpose robot is that it's a software update away from doing something new. And so we'll continue to get more advanced as we move into this and get them out into the world.

John Koetsier: Super interesting to talk about hands and gross manipulation versus dexterous.

One robot CEO that I chatted with before said the robot's basically a hand delivery mechanism, because the hands do all the work. Right. And he said, actually, creating a robotic hand with the capability of the human hand is beyond our capability right now. It's not just beyond any one company, it's beyond human capability right now.

There might be some different opinions on that, we'll see, but that's where, of course, maybe half of the degrees of freedom that a robot might have exist. So it's a really challenging thing. And then, of course, wear and tear. All those little motors in the hands, skin, whatever you use for skin, super, super hard.

So I see that challenge. You talked about a software update, which is amazing. That's incredible. Is there a possibility of a hardware update? So let's say three years from now you crack human hands, maybe not quite as good as this, because you want to hit your $50,000 target, but 90%, 70%, whatever, enough for many manufacturing-type jobs.

Can you, like, take a hand off and plug a new hand in?

Jeff Cardenas: Yeah. Yeah. So Apollo is modular. There’s this big debate of wheels versus legs too from the traditional folks in automation. Like what do you need legs for? We can use wheels in all these applications and what we’ve done with Apollo is just taking everything we’ve learned because we’ve been building these robots with customers over many years.

And so we're able to take all of that learning and inject it into Apollo. And some people are going to want these things on wheels. There's a huge number of advantages to legs. We think legs will win the day. But overall, there's this problem with legs, that the robots can fall over. And so in some cases you can have wheels.

The beauty of robots is you can have your cake and eat it too, in that you can build them to be modular. So Apollo is modular at the torso. So if you want to put it on wheels, you can throw it on wheels. We think that will demonstrate that legs will be the most versatile platform long term.

And the reason we know about the challenges of wheel bases is because we've designed them. We've deployed versions of humanoids on wheels, and we've learned from that. And the same thing is true with the end effectors. I'm sure that's Geordie saying that about hands, and I agree that long term the humanoid needs hands, but in the near term…

There's many applications where you don't require a full five-fingered hand, and you can do things with a one-degree-of-freedom hand. There's a whole range of things you can do, a whole range of things that robots do today with pincher grippers. And so you can expand; you don't have to solve all the problems at once.

I have a ton of respect for Geordie. I’ve worked with Geordie over many years. He’s a visionary. But we’ve just taken a different approach in terms of how we’ve thought about that. And we want to partner with folks like Sanctuary. They’ve been a big partner on it with us. They can put their hands on a robot like Apollo, and we can work together there as they start to crack the dexterous manipulation problem.

So yeah, it's modular at the chest, it's modular at the end effectors, and it's also modular at the head in terms of putting different sensor payloads on it. So we have a standard sort of camera-based vision system, but there's also debates about LIDAR. Do you need LIDAR or not?

Our vision approach doesn't need LIDAR, but in some cases, when you start to put these robots outdoors, if you want to add LIDAR, you can. This is something I think Boston Dynamics has done a really good job of with Spot: they created the ability to put different mission payloads on the back of Spot, and that's something that we learned from along the way, and part of what's designed into Apollo.

John Koetsier: Love it. And as you hinted, Geordie is Geordie Rose. He's the CEO of Sanctuary AI and the former CEO of a quantum computing company that sold a $15 million quantum computer to Google, and it's still around. There's so many places to go here. I do want to talk about the brain. That's really challenging, right?

How are you building intelligence into your robots? Is it pre-programmed maneuvers? Is it versatility with a certain level of intelligence? Talk about how you're doing that.

Jeff Cardenas: Yeah. So I think the long term goal is to start to get towards more and more intelligence overall.

But I think in terms of AI and intelligence as a whole. For humanoids, you can really break it down into two buckets. So the first bucket is physical intelligence. So that’s like coordination, hand eye coordination, the ability to balance walking as part of that’s physical intelligence. 

The other side is cognitive intelligence.

So how do you make decisions? How do you reason about the world? How do you abstract ideas, things like that. What I'd say is that we've really focused on building from the bottom up, and there's different approaches. You can go from the top down, as in, start with the intelligence and think about how to build a machine around that. We've gone from the bottom up, which is start with the actuators, the motor controllers, the electronics, really the basic building blocks, and then build up into intelligence.

My view was that you want to build the most capable platform you can possibly build, and then you can think of these intelligences as software that you put on top of the robot. There's people that disagree with that and say, well, in order to get to full intelligence, you need deeper integration. And I think we'll see, but we've really focused on this physical intelligence.

The exciting thing for me, and you had a question that maybe we'll get to, is, where are we at? I think of this as really an evolution of what's already being done out in the world, and we don't have to solve new problems to get humanoids out into initial applications to show utility. For humanoids to be fully realized, yes, you need much higher levels of intelligence than we have today, I think, but we have a lot of the building blocks already. So for example, if you think of the evolution of robotics: in 2004, collaborative robots came out. Those are human-safe robots.

By 2010, compute got good enough, batteries got good enough, we could have mobile robots, and we started to do things like SLAM and navigation. By 2016, machine learning came on the scene, and we could do intelligent grasping. So what we've done is build on all these things that we've seen work in production, really build from the bottom up and integrate those things together, and taken maybe a more conservative approach than some people are taking: basically, use what we know works, what we know we can deploy into the world today.

And then we can always add these other, more difficult R&D problems later on down the road. And so we can dive in deeper wherever you want to go.

John Koetsier: It's fascinating to see the different approaches, and that's the beauty of the sort of free-market innovation system that we operate in, in the Western world at least, where you do have those people who are coming top-down and want to build intelligence, and the intelligence will do everything.

That's a risky bet. It's an amazing bet if you make it and you win, because if you win, you've solved everything, quote unquote, right? But if you don't win, you end up with an expensive boondoggle that doesn't accomplish anything. It has to be really good, because you can't have a robot out in the wild, maybe making a sandwich or slicing something up or using a tool, that is potentially dangerous.

Your approach in software mirrors your approach in hardware, which is starting from the ground up. What can I do? What do I know I can do today? And that seems to be a very pragmatic and practical way of doing it. I think that's super interesting. I do want to go deeper into what this means and what it looks like, but maybe before we do that: you're doing this in Austin, Texas.

And you feel that is significant. Why? 

Jeff Cardenas: I think if you look, there's people out there that are saying there's going to be more humanoid robots than people one day. Like I said, it's funny to me to have that out there when everyone thought these things weren't viable even five years ago; they were novelties. And I think that's exciting. I'm not sure that they'll all be humanoids, but I think there will be a lot of humanoids. And I think it's the most versatile platform you can build. But the reason I think Texas is important is, where are we going to get all the robots we need?

So if you look at the world today, there's hundreds of humanoids, maybe, right? There's not very many of these systems. So how are we going to go from hundreds to thousands to millions to maybe billions? And I believe that Mexico is going to play a big role in that for North America. I think if you look at Mexico relative to other countries where we're doing manufacturing today…

It's got a lot of advantages. It's about a third the cost of labor, depending on how you measure it. It's geographically really close to the U.S. market, so we can get from Monterrey to Texas in 2.5 hours, and anywhere in the U.S. in 24 hours. And they have the skill sets to be able to pull this off.

And so I've always felt like the Texas-Mexico corridor was going to be one of the most important manufacturing corridors in the world over the coming decades. And this is something that I was saying coming out of grad school. We had two key ideas when we started Apptronik.

One was that robots had to become more versatile. And two was that somebody had to be building robots domestically, here, to serve the U.S. market. We didn't have any major domestic OEMs. And we got started. And so if you agree with the premise that, hey, we're going to need domestic manufacturers long term, the question is, where is that going to happen?

And typically it's been on the East Coast or the West Coast. The big hubs for robotics have been California or Boston, but I think Texas actually has a number of unique advantages over those hubs. I really believe that Texas has much better adjacencies than those other places, and so I always felt like Texas was the place to do it.

We're already producing gearboxes and a lot of the heavy machinery; we've been doing a lot of that for the energy industry. But as you look towards what's coming next, there's a lot of people in Texas that are asking: okay, as the oil and gas boom ends, which it will at some point, where do we apply all of this industrial base that we've already built?

And for a number of reasons, I think robotics actually makes a ton of sense for that. And that’s why I think Texas is going to be important. 

John Koetsier: Well, it is interesting. Cost of living is certainly less, proximity to Mexico. I was going to make a joke, what is this access to labor that you’re talking about?

Aren't the robots going to build the robots? But I'm sure there's going to be some tricky jobs for humans in all that stuff. Let's look forward a little and let's say we're in a future where we have tens of millions of these robots, and we're progressing. How does this change our world?

How’s it changing our economy? 

Jeff Cardenas: I think that it fundamentally changes the way that we live and work. And the reason I think that is because as humans, our most valuable resource is time and our time here is limited. And, you had great thinkers in the last century, John Maynard Keynes famously predicted that we would have a 15 hour work week.

And so what I think changes is that instead of doing things that we have to do, that somebody’s just got to do, we can now have machines that do that for us. And what that does is free us up to spend time on things that we really value. Why do we spend more time at work than we do with our families?

And the answer to that today is, well, someone’s got to do that. We’ve got to keep the economy running and going. We have to provide goods and services to our fellow man in order to keep all this moving. But I think what robotics has the potential to change is to change that equation. What if the cost of goods and services dramatically falls because They basically will, slope towards the cost of the raw materials as the cost of labor continues to go down.

So goods and services could become much cheaper and much more abundant than they are today. And that frees us up to spend time in the way that we want. And there's this interesting quote that I heard: what did Darwin, Galileo, Newton all have in common? They were all very wealthy, and so they had time to think and contemplate their existence and think about these higher-level ideas.

And today you have people that are stuck in a cycle of working all the time to make ends meet. And I think, an optimistic version of the future is you start to free people up. They’re able to think about things like their own health, about taking care of each other. We start to fix the health care, the education system.

And ultimately we evolve as humans. And I think that, applied in the right way, it could be a really positive thing.

John Koetsier: Super interesting. Okay. So, you’ve launched the robot, you’ve launched Apollo. What can somebody buy or rent today? 

Jeff Cardenas: So today we're working on pilots, and my whole philosophy, a core value for us at Apptronik, is show versus tell.

So what we've done is we've built a demo center here on site at Apptronik. We're mocking up the use cases that we're looking to deploy Apollo into. So for the remainder of this year, we're basically signing up pilot customers, and we have some of the marquee customers in the world that have already signed up, and we're doing these on-site proofs of concept through the rest of this year, where we're demonstrating it. What I tell the partners we work with is:

If I can't do it here at my facility, I can't do it at your facility. So make sure that you're comfortable with the performance of the robots here, and then we'll get it on site early next year. So next year we start with the initial fielded pilots, and we've fielded a lot of systems up to this point.

One of the things I tell people when they come to Apptronik, they're looking for all the robots, and we have a lot now. But for many years, every robot we ever made, we sold, because that's how we stayed alive. That was our business model. We were entirely funded on revenue for the first five years.

We only just raised money in the last couple of years. And so next year, we'll get them out into the world. We're not putting out pricing just yet, but a big part of what we've cracked is the ability to make these things affordable. And so that's going to be a big part of our value proposition as we move ahead.

John Koetsier: You appear to have been remarkably capital-efficient. I know many of the other entities, companies, departments that are building or trying to build general purpose robots. They've raised $100 million, they're part of a trillion-dollar company or a multi-billion-dollar company.

You've been scrappy, you've been bootstrapping, you recently raised what, $14, $15 million, which is interesting. But that all seems to accord with your philosophy of start small, build what we know, even to your go-to-market strategy of bringing people in, seeing what it can do. Super, super interesting.

And I really like that. Actually, $50,000, assuming you can get there, because that's your goal, it's not there yet, and you're not in mass manufacturing yet, I'm assuming it's $50,000, that's a very interesting price point, because if you look at the kinds of jobs that you're going to place these in, in logistics and stuff like that, you're spending probably a bit more than that.

If you look at entire costs for having an employee, and benefits and other things like that. And that sounds interesting. Do you know how you're going to bring them to market? Are you going to sell them outright, or are you going to have a SaaS solution? What's your thinking there?

Jeff Cardenas: Yeah. So, once again, it’s RaaS, right.

John Koetsier: Robots as a service, not software as a 

Jeff Cardenas: service, robots as a service. One of the exciting things about why this is feasible now is that we already have business models that exist for selling mobile robots. That didn’t exist before the AMR market took off, but now there are lots of companies selling mobile robots to these exact same markets in logistics.

And so customers now know how to buy them. The two ways they’re buying AMRs today are either robots as a service or CapEx, typically with a SaaS component to that. Some of the larger companies want to own their own fleets; they want volume discounts and other things, and so they’ll buy them outright. But I think largely what you’ll see in the early stages of the humanoid market is a robots-as-a-service model, where they want to try them out.

They want to see how this works. There’s people that are worried about technical obsolescence, right? Like how quickly is this going to mature and develop? Am I ready to buy a fleet yet? Or do I want to wait some time? And so for a number of reasons, I think robots as a service is going to be important for this.

And now as an industry, we know how to do that. There are third-party financing groups that have already set up around this. It’s now something that the market understands, which wasn’t the case five years ago.

John Koetsier: I keep thinking about additional questions. We’re having a great conversation.

I want to ask this one. Where do you situate the United States, and maybe North America a little more broadly, in terms of global innovation in humanoid robots? Because you’re obviously U.S.-based. There’s a bunch of others .. Figure.

We’ve talked about the one in Vancouver, right? Geordie Rose’s company. 

We’ve talked about Boston Dynamics, right? So yeah, Sanctuary is the one in Vancouver. I visited Robot Island, which was sort of the name for it, in Denmark. It’s got to be about three years now, just pre-COVID. For a couple of decades there has been a global concentration of companies in automation and robotics in Odense, Denmark.

And I haven’t seen anything come out of there. They’ve been instrumental with some of the big manufacturing companies globally with large-scale robots and automation, but I haven’t seen a humanoid robot come out of there. Why am I seeing so many in the States, and where do you situate the U.S. in terms of global innovation for humanoid general purpose robots?

Jeff Cardenas: Yeah, so this is a topic that is important to me, because I think the race is on. There’s a lot of interesting stuff that’s coming out all over the world. I think one of the reasons you see the U.S. leading right now is because the U.S. government has invested so much money in the R&D that was required to make this happen. So whether it’s Figure, Boston Dynamics, or us … many folks have teams that have their roots in something called the DARPA Robotics Challenge. For the DRC, DARPA injected tens of millions, I think it was over $100 million total, to really advance the state of the art for general purpose robots.

And that was 2013 to 2015. 

And the seeds were planted back then. We’re just now seeing the fruit being borne of that investment. But that’s really what made Boston Dynamics big: DARPA funding in the early years. Atlas came about for the DRC. So the very first Atlases were designed for the DARPA Robotics Challenge.

And that’s really what spurred a lot of the innovation. There has not been as much government funding in this space since the DARPA Robotics Challenge. And I think that if the U.S. wants to continue to lead, the government is going to need to step in in a big way and really inject more money into it.

But this is the same thing that happened with autonomous driving. There was something called the DARPA Urban Challenge, and there were a couple of DARPA challenges that really seeded the technology, moved it from the lab, and gave it the first nudge out into the commercial space. And then companies were built out of that.

And so, Jerry Pratt, who’s over at Figure … he had a big role in the DARPA Robotics Challenge and did great work there. And then Boston Dynamics certainly came out of that. In terms of who’s leading in the world and where we go from here, I think China is making a big push in humanoids in particular.

And the government there is really stepping up to make it happen. So I really want to see the U.S. government respond. This is a race that I think is important long term: how should these be deployed? And I think it’s an area that we can lead, though there are other great countries doing amazing things as well.

There’s great work happening in Korea, certainly in Japan as well. And then all across Europe, the stuff coming out of ETH Zurich and groups like ANYbotics, they’re doing really great things, though not in humanoids, more in versatile, cutting-edge, next-generation robots.

John Koetsier: It’s important.

It is really important. Hey, because if you win here, you win. We talked about the cost of information approaching zero. We’ve talked about the cost of software approaching zero because replication is essentially free, right? If you achieve a working and capable general purpose humanoid robot, the cost of labor, as we’ve already talked about, starts to approach zero as well.

And all of a sudden, that opens up a ton of capability for manufacturing cheaply. Onshore manufacturing, other things you want to do, jobs you couldn’t pay for before, maybe environmental reclamation, maybe public works projects you couldn’t afford before. All of a sudden they become affordable and desirable, everything that a society wants to do.

And if you’re concerned about the declining population in some of the older nations of the world, Europe, Japan, those sorts of places, China as well, that’s all critically important.

Jeff Cardenas: Yeah. And I have an interesting story about that, because the U.S. actually invented the very first industrial robot. It was invented in the late fifties and went into a General Motors factory in the early sixties. It was called the Unimate arm, and the company that built it, Unimation, actually ended up folding in the eighties.

I’ve heard a variety of different stories on what happened, but long story short, they didn’t keep getting funded. They didn’t keep getting backed at a critical time. General Motors, I think, was involved somehow and pulled out. So the U.S. invented the very first industrial robot and effectively lost the first wave of industrial automation.

Of the big four that were producing all the industrial arms, two were Japanese, Fanuc and Yaskawa; one was Swiss, ABB; and the other was German, KUKA. And so I think this is important for policy makers and others, because this next wave dwarfs the first wave in impact and size. And so I think it’s important to get it right.

But it’s an interesting story and something that I try to tell everyone who will listen, whenever I get a chance.

John Koetsier: Jeff, this has been a wonderful conversation. Thank you for taking the time.

Jeff Cardenas: Thank you very much.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Subscribe to my Substack