
Robots in agtech: what’s next?


What’s next for robots in agtech? Everyone who’s paying attention knows agtech is massive right now … there’s so much innovation, from laser-equipped weed-killing machines that keep chemicals off our food and out of our land, to drones, autonomous tractors, AI-powered seeding and watering plans, and much more.

What about robotics?

In this episode of TechFirst, I chat about the future of robotics in agtech with Kevin Dowling, managing director at Robotics Factory in Pittsburgh, Pennsylvania. We discuss the evolution of robotics in farming, from traditional methods to the modern use of drones, autonomous tractors, and AI-driven systems, and Kevin highlights the diversity of robotic forms in agriculture, including wheeled, legged, flying, and swimming robots.

And — of course — we chat about humanoid robots with legs. Are they useful on farms?

Or … when will they be?

Kevin predicts a shift towards smaller, more affordable robots for smaller farms and emphasizes the importance of technology in reducing environmental impacts, enhancing food production efficiency, and potentially democratizing farming.

00:00 Exploring the Future of Robotics and Agtech
00:46 The Evolution and Future of Robotics in Agriculture
03:39 The Role of Humanoid Robots in Farming
07:38 Challenges and Opportunities in Ag Tech Startups
10:05 Innovative Startups Shaping the Future of Agriculture
12:49 The Complex Environment of Farm Robotics
15:30 The Potential of Indoor and Vertical Farming
23:30 Envisioning the Future of Farming with Robotics

(Subscribe to my YouTube channel)

Subscribe to the audio podcast

Find the podcasting platform you prefer:

 

Robots in agtech: get the full transcript

John Koetsier: What is the future of robots in agtech?

Hello and welcome to TechFirst. My name is John Koetsier. Everyone who’s paying attention knows that agtech is massive right now. There’s so much innovation, from laser-equipped weed-killing machines that keep chemicals off our food and out of our land, to drones, autonomous tractors, AI-powered seeding and watering plans, all that and much more.

What about robotics? To chat, we have Kevin Dowling. He’s the managing director at Robotics Factory in Pittsburgh, Pennsylvania. He’s a former scientist at Carnegie Mellon’s Robotics Institute, a serial entrepreneur, and a former VP of Innovation for Philips. Welcome, Kevin.

Kevin Dowling: Thank you. It’s good to be here. Thank you very much, John.

John Koetsier: It’s good to have you here, and maybe let’s just dive right into it. The future of robotics in agtech: does it come on wheels or legs? Does it fly? Does it swim? All of the above?

Kevin Dowling: It’s definitely all of the above, and I think it’s ironic in that, of course, it started on legs, with people and then animals.

Then it went to wheels and tracks. So almost every form of locomotion there is, maybe not serpentine, but certainly a lot of the others. I think the future will be all of the above.

John Koetsier: Yeah, that reminds me: I think the word robot comes from the Czech and means worker or something like that, right?

So we have been our own robots, sometimes compelled in history, sometimes not; fortunately, not so much anymore today. Where do you see the most interesting innovation?

Because I’ve seen some cool stuff on wheels, and that works in the farming environment where you’ve got massive fields, hundreds, thousands, tens of thousands of acres sometimes, so there’s a role for big machines that can do big jobs. But there are probably also some roles for something that looks more like a human.

Kevin Dowling: Potentially. I think most of it, because of the size and scale of the fields, and also because the equipment often requires that you either carry a significant payload or gather one: tilling, seeding, harvesting. And the machine, like the human, would touch the field anywhere from half a dozen to 30 times a year across the growing season.

So you need machines that can handle that. Today, the more automated aspects of farming have tended to be row crops, the wheat, corn, soybeans, and so forth, where you have very large acreage, thousands of acres in a given farm field, especially in the Midwest here in the U.S.

That tends to drive the approach of very large machines that can cover 12 or more rows at a time, and that continues all the way through harvest, when you have combines that do everything at the end. But your fundamental question is: what is the configuration, the morphology, of these farming machines?

And it can be everything. Whatever’s best suited for a particular job will likely win out.

John Koetsier: Yeah, it’s also interesting because, like you say, the big machines, the massive tractors and combines, make sense given the scope and scale of North American farming. Maybe it’s a little different in Europe, and a little different in Asia and other parts of the world as well.

But certainly in the States and in Canada, there are massive farms. It’s interesting: as we see this explosion of humanoid robots, do you see them coming into farming in any way, shape, or form?

Kevin Dowling: I think the anthropomorphic nature of humanoid robots is attractive because it’s what we are, what we resemble, what we look like.

But how do you reconcile that with the job to be done? For example, I’ll push back on that a little bit to say: if we were designing a dishwashing robot today, would it be humanoid, and why? We already have this very efficient box that we put dishes into, and that’s a robot for cleaning dishes.

So why would we bother with the humanoid form if we don’t need it? Airplanes don’t necessarily mimic what birds do. Although the humanoid is trending right now in terms of robotics investments and so forth, it’s not clear to me that it will win out, just as it wasn’t with some of the walking dogs and other things you see all over YouTube. They’re compelling because they look like things we know.

You mentioned the laser weed-killing robot, which looks like something from space. Hopefully it won’t scale so big that it becomes a threat, but it’s fascinating to see. I’m not convinced that a humanoid robot will necessarily be what you need in order to do tilling and seeding and harvesting and so forth. I will say, though, that for smaller farms that cannot afford the large autonomous vehicles, the big autonomous tractors, there’s a real opportunity. The US tends toward much larger farms overall, but you mentioned other places around the world where these types of machines could be used on smaller farms, if they were affordable.

Among the big ag equipment makers, John Deere leads the pack, but you also have AGCO and CNH, and rapidly rising behind them are companies like Kubota and Mahindra. Mahindra is now the world’s largest tractor manufacturer by unit numbers, not by revenue.

So they’re shipping more units to more farmers than anyone in the world. Where does that come from? What kind of technology can they add, what simple improvements? The computers and sensors added to these machines keep dropping in cost. And I’d add one thing: you pointed out that robot derives from the Czech word for serf, or worker.

The computer is the same way. The first computers were human; they were typically women in World War II who were calculating things all the time. That gave rise to the name, which was then applied to the non-human, right? The vacuum-tube-based large computers of the past. And then it became the canonical name for calculating devices and obviously the computers we use today.

But there’s a lot of room for all kinds of machines.

John Koetsier: Absolutely. Absolutely. And that’s always the tough point, right? Like if you’re going to build a robot for washing dishes, hey, the dishwasher is pretty fricking good, right? Yeah.

On the other hand, it can’t load itself, and it can’t empty itself.

Now, you could build in things for that, and the question is: do you build 25 different types of robots that can each do one task, or do you build one that can do what we do and make it work? There’s no right or wrong answer to that. But certainly in farming, with the big machines that you need, you can totally see autonomous tractors being a thing, and they’re here already to some extent, right?

I guess one of the challenges, and you’re in the business of finding, supporting, and growing startups in agtech and related fields: that’s not cheap. Hardware startups are notoriously challenging. I’m more used to software startups personally.

But when you get into agtech and you look at the size and scale of these machines and the jobs they need to do, the amount of funding required must be significant.

Kevin Dowling: It is. I think any hardware-based company has a greater challenge than simply writing good code to solve a particular problem.

Even with a large machine, though, I’m not sure the scale of the device matters so much as the ability to do the controls and everything around them. In other words, if you’re making something new, say the Nest or an Apple Watch, those are very complex, small mechanisms; the scale of the device matters less than the complexity of the thing you’re trying to put together.

A tractor today is typically an engine; sometimes you’ll find an electric motor and a drive. Those are well known, well understood, and made in the millions across the planet. But to your point, testing requires larger areas. You need real estate to do that, and sometimes it has to be seasonal, so you can only do it at certain times of the year, which only compounds the challenges.

But there are many small companies focused on ag and doing it today, whether it’s mowing or harvesting or forestry, all the way through row crops and specialty crops. Livestock not so much, but there is increasing automation even there, beginning with milking parlors for dairy and other things people want to do.

I’ve talked to investors who want to invest in things that are heavily manual today, and the most heavily manual work still requires people to process animals into food. So if you’re looking for a protein source like that, there are many challenges around being able to manipulate and use the tools needed to turn animals into food.

John Koetsier: What are some of the more interesting startups that you’re seeing that are coming across your desk and maybe that you’re investing in?

Kevin Dowling: There are about half a dozen here in Pittsburgh right now. One is focused on vineyards: monitoring them as well as doing harvesting and so forth.

One of the most interesting things I’ve seen is this: if you think about grapes, for example, if you just let them grow they would become these scraggly sort of bushes, but they’re essentially trained to wrap around a structure and form a particular shape. They’re now doing that with apples.

They’re doing that with other crops too. You can change what’s in the field genetically, but the other way is to simply provide a structure the plants can grow in and around, making them more accessible for automation, easier to work with. It also makes things easier for humans: reaching the top of a cherry tree or an apple tree typically requires a specialized ladder or other equipment.

Or sometimes you’ll find these really interesting videos where they have shakers that come up, grab the trunk, and shake the entire tree, and that’s how they harvest. All of that, I believe, could be automated, whether you use entrainment of a plant or create plants that are genetically easier to harvest, or produce an apple, for example, that is tougher and able to be shipped and stored and then eaten eventually.

So there are companies like that. We have graduates of the robotics programs here at Carnegie Mellon, for example, who are off doing these types of things elsewhere. You mentioned earlier the idea of indoor agriculture: more than a greenhouse, a facility actually set up to be fully automated. We have a company here, Four Growers, which is harvesting tomatoes in greenhouses, and they’re doing this not only here in the US but in the Netherlands, which is a hotspot of indoor agriculture as well as automated agriculture.

And they’re harvesting tomatoes rapidly.

John Koetsier: That’s really interesting, because a lot of this tech comes out of the Netherlands. They’ve vastly expanded their agriculture on a tiny land footprint, so they’ve gotten very sophisticated at getting high yields with high automation, because their costs are high and everything like that. Obviously in flowers, but also in agriculture generally. Super interesting that a Pittsburgh company is taking its tech to the Netherlands.

Farms are a crazy challenging environment to put robotics into, right? They’re complex. They’re changing, they’re moving. It’s not laid out like a factory; it’s not relatively stable. Ground conditions change. There’s weather, there are animals, right? And every type of farm is different as well. A wheat farm is very different from a vineyard, as you mentioned, or a beef or dairy concern, that sort of thing.

That leads to significant challenges, I’m assuming.

Kevin Dowling: It does. It’s not entirely unstructured; it’s not like walking into the woods, where you have no control of anything. But even though it’s somewhat structured, it’s still challenging. There are soft materials you have to traverse. And some big concerns right now in agriculture have to do with soil compaction.

How do you prevent that from happening, time after time, as these machines roam the fields? You can make them bigger, give them softer tires, more tires, more legs perhaps, depending on the approach. And the farmer is assuming most of the risk.

In some cases, I’d argue, all of the risk, because they face those challenges. It’s outdoors, and you’re subject to all the vagaries of the outdoors in general. They get to harvest time and they produce the crop, but if they had bad weather, floods, or drought, they’re the ones assuming the risks.

So how do we help the farmer mitigate that risk? One way, as I mentioned, is moving indoors, but there can be other ways. There are crops, for example, that can be grown entirely indoors, like mushrooms. You and others might be amused by the fact that mushrooms are a crop, but they are, and they’re harvested every several weeks.

Mushrooms are actually Pennsylvania’s number one crop by dollars: six to seven hundred million dollars a year to the state. If Pennsylvania were a country, it would be number four in the world in mushroom production. So it’s very likely the mushrooms you have on your pizza or in your salad were grown right here in Pennsylvania.

But they have problems harvesting because of labor issues. So how do you solve those?

John Koetsier: Yeah. It’s interesting that we had the discussion at the top: what is a robot? What do you classify as a robot, and what’s the most efficient sort of machine, whether you call it a robot or not, to operate on a farm?

We talked about bringing farms indoors. Vertical farms are a thing, right? And there’s huge potential there for year-round production, and for production right where it’s needed: in the city, let’s say, or nearby, or in far northern climates, the Alaskas of the world, where farming is not really a thing but fresh fruit and vegetables are required. In that case the farm itself is a giant robot.

That’s an interesting world to consider too.

Kevin Dowling: Yeah, it is. We have a rather broad definition of what robotics is, and I think of it as the cycle of sensing, planning, and actuation.

So you take that loop, that triumvirate of three aspects: capturing data, planning based on the data you’ve acquired, turning it into information, and then turning that into action, potentially movement. It doesn’t have to be movement; a thermostat in your home follows that same cycle these days.

It’s not necessarily a robot, but to your point, John, about the idea of a home: the famous architect Le Corbusier said a home is a machine for living in. And I think we’re living inside of robots, because our climate is controlled: temperature, humidity, lighting, many other things about our home can all be controlled.

That’s an environment we’ve created for ourselves that is robotic in nature. Agriculture, I think, could be exactly the same type of cycle, where we’re trying to control things in order to produce food, or do some task in a factory, for example.

All of those loops come together, so I think your analogy of the farm being a robot is spot on.
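
(A quick aside to make that loop concrete: here’s a minimal sketch of the sense-plan-act cycle Kevin describes, using his thermostat example. It’s illustrative only; the names, thresholds, and the simulated sensor are all our assumptions, not any particular product’s API.)

```python
import random
import time

TARGET_C = 21.0    # desired temperature (illustrative)
DEADBAND_C = 0.5   # hysteresis so the heater doesn't chatter on and off

def sense(current_c: float, heater_on: bool) -> float:
    """Sense: capture data. Here, a simulated room that warms or cools slightly."""
    drift = 0.3 if heater_on else -0.2
    return current_c + drift + random.uniform(-0.05, 0.05)

def plan(temp_c: float, heater_on: bool) -> bool:
    """Plan: turn the reading into a decision about the actuator."""
    if temp_c < TARGET_C - DEADBAND_C:
        return True
    if temp_c > TARGET_C + DEADBAND_C:
        return False
    return heater_on  # inside the deadband, keep the current state

def act(heater_on: bool) -> None:
    """Act: in a real thermostat this would drive a relay; here we just print."""
    print("heater", "ON" if heater_on else "OFF")

temp, heater = 18.0, False
for _ in range(10):              # ten trips around the sense-plan-act loop
    temp = sense(temp, heater)   # sense
    heater = plan(temp, heater)  # plan
    act(heater)                  # act
    time.sleep(0.1)
```

Swap the simulated sensor for a soil-moisture probe and the relay for an irrigation valve, and the same loop is, in miniature, the “farm as robot” idea being described here.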

John Koetsier: Is there progress, or are there even projects, to create something like a FarmOS? Because let’s say we’re five or ten years in the future and you’re starting to see a significant surge of automation, robots, and semi-autonomous machinery on farms. There’s a command-and-control issue there, or at least a work-together issue.

Do these things know about each other? Do they cooperate, some of them harvesting while others gather and transport? Do they talk, and how does that communication work? Is there a project underway to manage that, or is it basically using some of the operating systems people are developing for robotics coordination in general?

Kevin Dowling: I think most of it is exactly your last point: using real-time operating systems, for example, or using ROS, the Robot Operating System, to control the robot itself. In terms of complex networks of robots, there’s not a whole lot of work going on in that area. I do think it will be absolutely necessary.

I believe there will be a trend away from the very, very large machines, the half-million-dollar, 12-row kinds of systems, toward larger numbers of small machines, which are more easily maintainable; you can pull one out if you need to. And having a larger number of smaller machines means they do have to communicate; they have to know where each other is.

That will require, as you pointed out, the ability to communicate amongst themselves, like a mesh network in a way, but mobile, and to coordinate and synchronize their operations. Think about a field today, even the operation of harvesting. Let’s say you’re moving through a field of corn.

The US has around 90 million acres of corn, so there’s a lot of it. As you’re harvesting and threshing, generating the grain and the cobs and so forth that eventually become silage or other things, there’s a coordination of vehicles: one driving alongside to capture the flood of granular material flowing from one vehicle into another.

So there’s already a little bit of that work, but typically ones and twos, not tens and twenties, of these machines working together. Sometimes you’ll see these wonderful, beautiful pictures, probably of Kansas or further north, with row after row of combines working together.

Those aren’t robots yet, but they essentially migrate northward with the harvest. That’s specialty machinery that allows farmers to harvest a great deal more quickly, because no one can really afford that many machines on their own. As the crews move north toward Canada, it lets farmers harvest at a rate far faster than anyone could in history before that.
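
(Another aside: the harvester-and-grain-cart coordination Kevin describes is essentially a shared-state problem. Below is a toy sketch, entirely our own illustration rather than anything shipping on a farm today: each machine broadcasts its status on a common bus, and the cart plans a rendezvous when the harvester’s hopper runs near full.)

```python
import math
from dataclasses import dataclass

@dataclass
class Status:
    """Message each machine broadcasts: identity, position, hopper fill (0..1)."""
    name: str
    x: float
    y: float
    fill: float

class Bus:
    """Toy stand-in for a mesh/radio network: everyone sees the latest broadcasts."""
    def __init__(self) -> None:
        self.latest: dict[str, Status] = {}

    def broadcast(self, status: Status) -> None:
        self.latest[status.name] = status

def step_toward(x: float, y: float, tx: float, ty: float, speed: float = 2.0):
    """Move (x, y) toward (tx, ty) at a fixed speed per tick."""
    d = math.hypot(tx - x, ty - y)
    if d <= speed:
        return tx, ty
    return x + speed * (tx - x) / d, y + speed * (ty - y) / d

bus = Bus()
harv = Status("harvester", 0.0, 0.0, 0.0)
cart = Status("grain-cart", 50.0, 10.0, 0.0)

for t in range(30):
    # The harvester works its row, filling its hopper, and broadcasts status.
    harv.x += 1.5
    harv.fill = min(1.0, harv.fill + 0.05)
    bus.broadcast(harv)

    # The cart plans from what it hears: rendezvous when the hopper is nearly full.
    seen = bus.latest["harvester"]
    if seen.fill > 0.8:
        cart.x, cart.y = step_toward(cart.x, cart.y, seen.x, seen.y)
        if math.hypot(cart.x - seen.x, cart.y - seen.y) < 1.0:
            harv.fill = 0.0  # unload on the move, as combines and carts do today
    bus.broadcast(cart)
    print(f"t={t:2d}  harvester x={harv.x:5.1f} fill={harv.fill:.2f}  cart=({cart.x:5.1f},{cart.y:5.1f})")
```

The 0.8 threshold is a design choice: dispatching the cart before the hopper is completely full buys travel time, the same slack a human cart driver builds in by watching the combine.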

John Koetsier: Interesting. Super interesting: robotic migration, here we come. Very cool. Do you do any investing in, I want to say, farming-adjacent areas? Because there’s a lot of work on the premise that we don’t want the tens of millions of cows there are in the United States and Canada, and we don’t want to kill that many for food.

So we want to grow beef, have the Impossible Burger, lab-grown meats, and other things like that. Do you do any investing in those areas?

Kevin Dowling: We haven’t. I think that’s probably a little beyond our direct knowledge and capability. I have nothing against it; I think creating protein sources like meats from chemistry is certainly a valid way to do that and reduce some of the impacts that full animals have.

But I’m not especially going after one versus the other. One way to think of this is: there’s the farm, and that’s really what we focused on in our recent agtech summit, which covered specialty crops and row crops. We did ignore, for the purposes of that particular summit, livestock: pigs, chickens, cattle, and so forth. And at the beginning of the chain there’s also seed, fertilizer, chemicals, and other inputs. There’s one company here called Rowbot, spelled R-O-W-B-O-T, focused on small robots going between rows of corn, which happen to be planted perfectly on center, 30 inches apart.

They designed a robot to do exactly that and inject nitrogen directly at the roots of the plants. That cut the use of nitrogen, which of course has other impacts on the environment: it ends up in the Mississippi River and makes its way to the Gulf of Mexico and so forth.

It cut the use of nitrogen fertilizer by half.

John Koetsier: Nice.

Kevin Dowling: That’s not only a great cost savings; it reduces the impact on the environment substantially. And they’ve been at it for quite a while, on hundreds of acres in Iowa right now.
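
(To put that halving in perspective, some back-of-the-envelope arithmetic. The application rate and price below are our own illustrative assumptions, not figures from the episode.)

```python
# Illustrative assumptions, not data from the episode:
n_rate_lb_per_acre = 180   # ballpark nitrogen application for corn
n_price_per_lb = 0.50      # assumed price per pound of nitrogen, in dollars
acres = 1_000

baseline = n_rate_lb_per_acre * n_price_per_lb * acres
savings = baseline / 2     # the ~50% reduction Kevin cites
print(f"baseline N spend: ${baseline:,.0f}/yr, saved: ${savings:,.0f}/yr")
# -> baseline N spend: $90,000/yr, saved: $45,000/yr
```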

John Koetsier: Absolutely huge. If you look at the amounts that farmers spend on fertilizer, it blows your mind.

There are huge costs here, and like you say, nobody wants all that flowing into the lakes and rivers. Okay, let’s look forward: what does farming look like in, let’s say, 10 years, as we see more purpose-built robotics and machinery, more autonomous machinery for the farm? What does farming start to look like?

Kevin Dowling: I think it’s certainly an extension, an extrapolation, of what we see today. There’ll be more machines in more fields doing these types of operations. At the beginning, of course, it will be the much larger farms that can afford this machinery, or perhaps there’ll be interesting ways to financially incentivize farmers to lease these things: capital leases, operating leases, all kinds of financial structures to further it. And then, once the technology has become solid and robust in use and gets cheaper, you’ll begin to see it spread.

You’ll start to see it on smaller farms: if you have not 20,000 acres but 2,000 acres, you can also make use of it. I don’t know how much smaller it will get. If you have 20 acres, or two acres, will it have autonomous vehicles? Yes, eventually. You asked about the next five to ten years, and I’m not sure it will happen by then, but I hope it will. Those are the field operations. Then there could be things related to fertilizing and inspection: drones and other systems to monitor crops, look for disease, and remove diseased plants from the field.

All of that could be done in an automated fashion. What’s interesting to me, too, is that a lot of what we learned was post-farm. When you go from harvest into the processing chain, there’s a lot of interesting economics and potential automation. For example, across the country, I’m sure where you are and certainly where we are in Pittsburgh, we’ve seen the rise of many small breweries.

But very often they can’t scale well. At small volumes you have the machinery and the existing industrial tools to satisfy a brewery’s output, which might be a thousand cases a year, something like that. When you start to scale beyond that, there’s a gap in the market between what a Nestlé or another very large food company can do and what a small, growing brewery or bakery can do. So there’s an interesting gap in there. We’ve been talking to contract manufacturers who work in that space about how to improve that, and there may be room for another summit related to post-farm activities and food processing.

John Koetsier: So my hope is also that, whether it’s 5, 10, or 15 years, as this technology becomes more available to farmers, we also get a better product.

We get better food. We use less fertilizer because, as you mentioned, it’s injected right where it needs to be, not just carpet-bombed over the farm. We use fewer pesticides because we have machines that remove weeds either mechanically or with lasers, stuff we’re seeing right now but that isn’t necessarily widespread.

And maybe we have less waste because we’re smarter about how we harvest. Maybe we lose less to crop failure because, as you said, there’s constant monitoring, perhaps drone-based, so we can identify, oh, there’s something attacking the corn here.

Let’s treat that right away so it won’t spread. So my hope is that farming becomes safer for humans, but also better for the environment, and also better for people who …

Kevin Dowling: Yes, I certainly have the same hopes and dreams around that. Organic farming, for example.

There may be a lot of ways machines can help grow, no pun intended, that part of the market as well, so we could have healthier food. And here’s another aspect of it: everybody sees farmers’ markets. You might see the honesty stands where you basically put money in a jar and take a dozen ears of corn.

There might be ways to monitor those to reduce the need for people to be there all the time. And then you have companies like one here in Pittsburgh called Harvie, H-A-R-V-I-E, which goes directly from the farmer to the customers. They work with something like 300 to 400 different farms.

That’s another link in the chain of providing good food for people on a timely basis, where they’re essentially paying the farmer directly. There might be a middleman in there doing some of the distribution, but, and I hesitate to use this term since it’s used so often, it’s the democratization of farming: getting to where the farmer can benefit more directly, without a lot of processing and other things going on between them and the consumer.

It’s really opened my eyes to see all of this. It’s amazing.

John Koetsier: It is amazing. And like you, I also hope these innovations come down to the smaller farms, because we’re seeing some interesting, amazing, incredible things on small farms right now, even as small as one acre on a city lot or something like that.

With intensive farming techniques, there’s really a lot that can be done. I want to thank you for taking this time. I do appreciate it.

Kevin Dowling: You’re welcome. Thank you, John.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

World’s first micro lunar rover


In January of this year, Peregrine Mission One launched with at least 22 payloads. One was intended to be the first American-made rover to land on the moon since the Apollo days: 1972. It happened to be the world’s first micro lunar rover.

It was called Iris, and it was also the first lunar rover constructed with carbon fiber. It was designed and built by students at Carnegie Mellon University.

Today, we’re going to chat with them …

(consider subscribing on YouTube?)

Despite a mission failure due to the lander experiencing a propellant leak and missing its lunar target, the Iris team achieved significant milestones. They successfully demonstrated that student-made rovers could survive space conditions, including the Van Allen Belt’s radiation, and maintain communication and functions in space.

This project, despite its setbacks, marks a significant achievement in democratizing space exploration and contributes to the broader vision of establishing moon bases and Mars bases as stepping stones for further space exploration.

Subscribe to the audio podcast

 

Transcript: the world’s first micro lunar rover


Carmyn Talento: I think it’s just gonna be just incredible. I see moon bases in the near future. I see Mars bases in the near future, and I like to think of those as like your gas stations.

In some cases they might be, if we’re able to develop that technology, if we’re able to use the resources on these other planetary bodies. And once you get that little step away from Earth, whether that’s the moon or Mars, you start opening up more accessible areas of our solar system and of space.

John Koetsier: What can you learn from building the first micro lunar rover? Hello and welcome to TechFirst. My name is John Koetsier. In January of this year, Peregrine Mission One launched with something like 22 different payloads. One of those was intended to be the first American-made rover to land on the moon since the Apollo days: 1972.

It was called Iris. It was the first lunar rover constructed with carbon fiber, designed and built by students at Carnegie Mellon University, and today we’re going to chat with them. Welcome, Harsh, Kevin, and Carmyn.

Super pumped to have you guys. Maybe let’s start here, and I don’t even know who to direct this to, but Kevin, maybe we’ll give it to you first.

What was Iris intended to do?

Kevin Fan: Our goal was to launch what would be the smallest and lightest lunar rover to ever go to space, which is something that, at the time, we did succeed in doing. And our goal was, first and foremost, to show that students are capable of achieving such a task, and of doing so at a relatively low budget compared with typical space missions.

And our goal is to pave the way for future space programs at Carnegie Mellon University, as well as other universities and organizations, and just take these little steps towards democratizing space for all.

John Koetsier: Cool. Carmyn, this is not easy stuff. I know, because my son is an engineer and he participated in some of these types of challenges, including going to Shanghai for some robotics competitions when he was in university.

It’s easy to make something that sucks. It’s hard to make something that is good, especially for a harsh environment like the moon.

Carmyn Talento: For sure, yeah. That is definitely a challenge we had to face. We did a lot of different testing on our rover to make sure it could deal with the difficult nature of space. There are a few tests that all rovers, that everything that goes to space, has to pass.

That could be a vibration test, because when the powerful engines on a rocket go off, you have to be able to survive that, and that’s true of anything that goes to space. And then, being a lunar rover, we’re also the first lunar rover designed to be dropped from the lander as our deployment, as opposed to rolled out or lowered down.

So that was some testing we had to do, and we succeeded in doing that here on Earth. We unfortunately weren’t able to test our deployment mechanism in space, but we did survive the launch. Our systems survived some of the harshest places in space, at least between here and the moon, including the Van Allen belt.

And a lot of the things we used in our rover, a lot of our electrical components and such, are not technically space-grade. They’re not industry standard; they’re custom, just normal stuff. And it survived and did well.

John Koetsier: Super interesting. Maybe, Harsh, we’ll bring you in here, because yours was the first lunar rover designed with carbon fiber.

That’s really interesting, because I know a little bit about carbon fiber. A little bit, right? I did talk to Nathan Myhrvold, a former CTO of Microsoft, and he talked about building cameras to photograph snowflakes. The key challenge with that is that the camera has to be very cold, because guess what?

You’ll evaporate the snowflake; you’ll melt it very quickly. So he ended up building much of his assembly out of carbon fiber, because it doesn’t contract or expand in cold or heat. Was that one thought with using carbon fiber, as well as the weight, for you guys?

Harshvardhan Chunawala: Our mission was to be ultra low cost, and carbon fiber is a strong material which could withstand all of the conditions you just spoke about. That was one reason for our decision to make it out of carbon fiber. It also brings low weight, which ultimately reduces the overall cost of the launch, and being an ultra-low-cost rover was one of our aims as well. So that decision supported our overall goal.
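
(A quick aside on why that material choice matters thermally: linear expansion is ΔL = αLΔT. The aluminum coefficient below is a textbook value; the carbon-fiber figure and the temperature swing are rough assumptions, since CFRP expansion varies a lot by layup.)

```python
# Linear thermal expansion: dL = alpha * L * dT
L_m = 0.20     # ~20 cm chassis, the rover's stated length
dT_K = 200     # assumed temperature swing between sun and shadow in transit

alpha = {
    "aluminum": 23e-6,      # 1/K, typical aluminum alloy
    "carbon fiber": 2e-6,   # 1/K, rough value for a quasi-isotropic laminate
}

for name, a in alpha.items():
    dL_mm = a * L_m * dT_K * 1000
    print(f"{name:>12}: {dL_mm:.2f} mm")   # aluminum ~0.92 mm vs CFRP ~0.08 mm
```

An order of magnitude less dimensional change is exactly why Myhrvold’s snowflake camera and a vacuum-exposed rover both favor carbon fiber.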

John Koetsier: Cool. What did you guys do? What were your roles? Kevin, maybe we’ll start with you and just go around in the same order.

Kevin Fan: Yeah, sounds great. I had a variety of roles on the project. When I first came in, I was helping out with mission operations. At the time we were building out our mission control center, which is actually located in our Gates School of Computer Science building.

So we were remodeling that, adding in the screens and the computers, and figuring out how we’d integrate it all with the mission simulations we’d been running, to make sure that for the actual mission we’d have all the procedures and designs in place to run a very smooth mission, akin to what you’d see in the movies for NASA’s control center and the like.

After that I helped out with the media team as well, representation, and while I was there I helped with designing and maintaining our Shopify store. And later on, during the mission itself, I was in Florida with the mission team as we went through what would be a very long series of events, and we’ll definitely get into that.

John Koetsier: It’s amazing how many jobs there are. There’s a lot of different pieces. Carmyn, what did you do?

Carmyn Talento: Yeah, in a similar vein, I wear multiple hats in this organization as well. Similar to Kevin, I worked as a mission operator, joining in a big wave of recruitment for the project once we were getting close to a potential launch date and a mission date after that.

So I joined as an operator, developing what operations would look like on the surface of the moon, because we had this incredible work of engineering but needed a team to run it: working that out, coming up with constraints we might put in place, pipelines of communication within the team, what kinds of roles we wanted to have, et cetera.

On top of that, I worked my way up to representation team lead: working on public relations and information, leading the face of the project to the general public, working together with Kevin on social media, interviews, and talking to reporters after our mission happened.

John Koetsier: Cool. For the first part of what you were saying, the picture that comes to my mind is from one of my favorite movies, The Martian, and the team they had to pull back together: how do we talk to this thing?

How do we communicate? What process, what commands do we send? All that stuff. Very cool. I just noticed you and Kevin are wearing a particular kind of jacket with some patches and stuff like that. It looks pretty cool, pretty official. I don’t know if you’re in a motorcycle gang, or maybe it’s the mission jacket.

Is that correct, Carmyn?

Carmyn Talento: Yes, it is. All of our mission operators who were there for all of our training and the mission were able to get an official operator jacket.

John Koetsier: Sweet. Harsh, what was your role? What did you do?

Harshvardhan Chunawala: Sure. I was an early contributor, along with Kevin and Carmyn, to the Carnegie Mellon mission control. Then I joined the team for our launch and was one of the mission operators. I was also the practicum leader for space mission engineering.

John Koetsier: Let’s go back in time and assume it’s going to work: it’s successful, it lands on the moon, and it’s traveling around.

Your rover is actually functioning. You’re an operator, a controller; you’re telling it where to go, checking out craters and boulders and all that stuff. What were you hoping to learn? What were you hoping to accomplish?

Carmyn Talento: Yeah, great question. Some of the scientific goals of Iris as a rover were to test the terramechanics of how tiny nano rovers work on the moon, because one of the next big steps in space travel is: how can we use the moon?

How can we use it to our advantage? NASA is going to be sending more astronauts there; there’s talk of setting up bases. We’ll need small devices that can move around, maybe for future transportation of small goods, et cetera. So the first big step is testing how small-scale we can get and still be effective.

Iris was only about 20 centimeters in length, a very small, shoebox-sized little rover, and the wheels are made of carbon fiber as well, for that flexibility aspect. So it was really testing how these, what we call nano rovers, work with the moon.

That was the biggest thing, really: testing that, capturing images on top of it, and just learning our environment. What’s the threshold of a rover this size? That’s a lot of what we found in our research as well: the sizes of obstacles we were considering, nobody had ever had to think about before, because the next smallest lunar rovers were SUV-sized. A smaller rock that a car-sized rover wouldn’t even notice would be a huge obstacle, a pretty big deal, for our rover.

So yeah: testing how all that works, capturing some images along the way, and ultimately proving the feasibility of students being able to do this, and, as Kevin was saying earlier, opening this up to more than just governments and longstanding professionals.

John Koetsier: Super interesting. What’s really cool to me is that before we landed on the moon, there was a lot of talk about what we’d find in terms of how we’d get around. Some people thought, hey, there’s perhaps a huge layer of dust there, especially in the maria, right?

The lunar “seas,” quote unquote, and you could just sink. Others said no, you won’t. But with such a tiny machine, you wonder: did you ever consider something like a grasshopper mode? Extend the wheels really quickly and flick yourself over a big obstacle?

You can have a lot of fun with different options, right? Especially when you’re building small, it’s not necessarily insanely expensive to try stuff.

Carmyn Talento: I know earlier in CMU robotics there were some rovers with a spider effect, where they have legs that pick up and move. But we really wanted to test out a more standard method of transportation, just wheels. Keep it simple, stupid.

John Koetsier: Hey, they’ve been around a long time. They have a long track record; they’re pretty effective. So Kevin, let’s bring you back in here. Carmyn talked about what you hoped to accomplish. Obviously the mission failed, not through any fault of your own, but the craft you were a passenger on had a fault and did not ultimately make it to the moon.

They had to crash-land it in an ocean somewhere. But your mission was not entirely a failure; it’s not like you got no results. What did you actually accomplish, even though you didn’t land on the moon?

Kevin Fan: The amount of scientific data we collected was very significant, given what most people would probably expect after what ended up happening to the rover itself.

I’d say the largest technological achievement we accomplished was, firstly, just the fact that the primary systems survived extreme temperatures and high radiation during launch and transit, specifically through the Van Allen belt, and we verified that all the systems were operational. For a nano rover of this size, that is quite an accomplishment.

And this was done in space: during transit to the moon, we were actually able to connect and have two-way communication between the lander and the rover, where we ran test commands and telemetry, and we transmitted a large amount of data, including downlinking a large file that included the names of everyone who was involved in Iris over the years.

So that was one of the largest achievements we managed to have. I myself did manage to send some messages of my own to the rover in space and receive them back, which was very cool.

John Koetsier: That’s interesting, actually; that’s kind of crazy. Because often we think, okay, you’re sending up a craft, it’s got an autonomous mode, a semi-autonomous mode, and then there’s a command mode where you tell it to do stuff.

And often when you send a craft, you hear: okay, NASA’s lander has landed on Mars, now they’re sending commands, now it’s responding, it unfolds its antenna or its dish or something like that, and starts sending. But you were able to do that while your craft was packed away.

It was in the box, inside the spacecraft, and you were still able to communicate with it, and it was able to communicate back.

Kevin Fan: That’s correct. Fortunately, we didn’t intend for our rover to be a satellite, so it doesn’t have, for example, significant solar panels that need to be unfolded in order to receive power.

The modules inside the lander which required power were actually connected to power during the flight, to make sure the telemetry could be verified and the batteries would remain charged up until the moment of landing. So we were fortunate to have access to the rover and be able to run these commands during flight, once we learned that we maybe didn’t need the battery power for the mission itself on the moon.

John Koetsier: Cool. Cool.

Carmyn Talento: And actually …

Kevin Fan: Go ahead.

Carmyn Talento: Could I add to something you said there? Something unique about our rover is the way we were connected to our lander: we actually were not inside of it. We were attached to the outer part of the lander.

John Koetsier: So you were a hitchhiker.

Carmyn Talento: Yeah, exactly. And I think that adds another layer of excellence to what we achieved: our rover was exposed to the pure vacuum of space during transit, and our systems survived all of that. We were still able to send those massive files Kevin just mentioned. So we weren’t tucked away.

We were actually on the outside, which is why we would’ve had the drop deployment if we had made it to the moon.

John Koetsier: Very cool.

Harshvardhan Chunawala: Astrobotic, the lander company, which is also a CMU spin-out, tweeted photos, and we could see our rover’s wheel in space along with stars in the background.

John Koetsier: How close did the craft get to the moon? Did it circle the moon once, or was it still orbiting the Earth?

Harshvardhan Chunawala: We reached lunar distance, but because of the propellant leak from the lander, we missed the moon. Where we were supposed to meet the moon and enter its orbit, we could not do that, but we certainly did reach lunar distance.

At that point we saw the dust that had collected, and since we missed the moon, we had to come back toward Earth. While the craft was coming back, the photos were transmitted.

John Koetsier: So you were the Apollo 13 of lunar landers essentially? Return to sender. Okay. Super interesting.

Harshvardhan Chunawala: And it was also America’s first commercial lunar payload services mission. I think America’s last lunar mission before that was Apollo 17.

John Koetsier: Yes, exactly. Kevin, talk about Astrobotic, the company that was running the Peregrine mission.

They’re trying again. They know they failed; they’re redeveloping, learning from their mistakes, and trying to launch again, I think in November of 2024. Are you guys on it? Do you have another prototype, a working copy of what you built? Are you just going to bolt this one on too?

Kevin Fan: Currently, I don’t believe we have plans to send a copy of Iris out in the future. One of you guys, Harsh or Carmyn, can correct me if I’m wrong about this, but I don’t believe we have a ticket on their 2024 mission currently.

I can definitely see that we would love to ride with them in the future, though. And in terms of future payloads, we actually do have several different lunar missions at different stages, from proposal to design to actually finished rovers that are just waiting for a ride to the moon.

John Koetsier: Okay. So there’s no refund on that lunar ticket, huh? And no “we’ll replace it on the second one”? That’s a little unfortunate. Too bad.

Kevin Fan: There are no insurance companies, I would say, that would be willing to take on a risk this high.

So unfortunately, we don’t have a policy claim on it.

John Koetsier: Okay. But Carmyn, you don’t make just one, right? Did you make two of your rovers? I’m sure you had so many prototypes. Is there one that’s almost exactly identical to what actually got sent up?

Do you have another one hanging around somewhere in your back pocket?

Carmyn Talento: We don’t have a space-grade one lying around. We have what we call our Earth model, which is a replica meant for Earth conditions, such as Earth gravity; the moon’s gravity is one sixth of Earth’s. This one is meant to withstand what we would call normal gravity.

For example, one of the differences you’d see is that the wheels are thicker on our Earth model, whereas the wheels on our flight rover were quite literally paper thin, to the point where we wouldn’t leave it sitting in Earth gravity for too long for fear of the wheels collapsing. But it would work just fine up on the moon.

So we don’t have anything identical, mostly for cost purposes, and also because students still have to go to classes and such, whereas an industry professional would have a full-time job dedicated to these kinds of projects. So yeah, there are a few reasons for not having a carbon copy of our rover.

John Koetsier: I get it; it’s understandable. It’s interesting, though, because you mentioned the wheels on the Earth version are tougher and stronger. You’ve seen some of the Mars rovers NASA sent out there; you’ve seen their wheels. They put a 30-day intended lifespan on them.

I’m sure they underestimate so they can beat it, and then some of them keep going for a year and a half, their wheels beat up, with holes in them all over the place. And you almost wonder what it would be like to have something up there that would keep working and running for a long time.

Anyway, let’s get on to the future. What are you guys doing now? Are you still associated with the project? What impact has it had on your careers, your schooling, and what you’re doing? Maybe, Carmyn, let’s start with you.

Carmyn Talento: Yeah, sure thing. Me personally, I’m actually continuing on with one of the next rovers Kevin mentioned.

While we don’t have an Iris 2.0, we do have more rovers in development, including MoonRanger, a rover also developed entirely at CMU that is planned to look for lunar ice near the south pole of the moon. That’s a rover that’s getting to the end of its development and hoping to hitch a ride in the near future, to get a real lunar mission on that one as well.

Currently I’m working on the mechanical team for one of our test rovers designed for Earth conditions here. So that’s what I’m working on.

John Koetsier: Awesome. Anybody listening who might be working on Artemis or something like that: they’re looking to hitch a ride here.

Prior experience as a hitchhiker; very low mass, not a problem. Very cool, awesome stuff. Kevin, what are you working on?

Kevin Fan: Yeah. Well, as Carmyn mentioned, I was also on the MoonRanger team for a little bit before I joined Iris, and I can agree on that point: we definitely are looking for a ride in the near future.

So if anyone just happens to have a rocket lying around that they haven’t been using in a while, we’d love to pick one up. But jokes aside, I really have found Iris to be the defining event of my time at university. Being able to work on a project of this scope and importance at such a relatively young point in my career has really changed my mind about what kind of effect I can have in industry as well.

Although I don’t believe I’m going directly into the space industry upon graduation, just knowing that I already have something like this on my resume, on my profile, allows me to confidently say that anything on Earth I could probably do pretty well, if we’re already doing things that go to the moon, right?

So how hard could it be?

John Koetsier: I’m signing you up for deep-sea exploration development, and you have to personally test it. It’s the only thing that’s tougher than space … I’m not sure. Harsh, what are you working on right now?

Harshvardhan Chunawala: Right. So I’ve transitioned from a student to an alumnus. Now, building on the involvement I had with Iris, for the next mission I’m working with the practicum leaders and the director of the INI to open up opportunities for our future students.

John Koetsier: Cool. What’s really cool about what you guys have told me is that you did something innovative, you created something, and you also used off-the-shelf components in a lot of cases. And what’s interesting about that: what percentage of the surface of the moon have we touched? It’s got to be 0.000001% or something like that, right?

If we can drop a bunch of these in a lot of different places, especially if we’re looking for water: we need combustibles, we need oxygen, we need hydrogen. We want to fuel future rockets. We want to provide breathable air for lunar … whatever we’re going to call them, right?

People are going to live on the surface. If we’re just looking one place at a time, that’ll take forever. If you can have these cheap things that last four weeks, two weeks, whatever it might be, and you can drop them in many places, you increase your chances. So that’s really cool. Carmyn, maybe let’s end with you a little bit.

We’re really in a golden age of space exploration. It’s incredible, right? SpaceX is obviously leading that, but there are many other companies involved. We see constellations of satellites that are just mind-boggling. Ten years ago, even five years ago, if you’d said there are thousands of privately owned satellites in space, people would have laughed at you; they’d think you’re insane. If you’d said, hey, there’s a launch vehicle bigger than anything that’s ever flown before, designed to go to Mars, and they want to build a thousand of them and actually get immigrants to Mars, people would have laughed at you as well.

There’s a lot that is going to be possible, perhaps in the next three, five, ten years. How do you see the future of aerospace? You’ve worked on it a lot; you’re still working on it. How do you see the future of development, and the future of humanity in space?

Carmyn Talento: Yeah, you’re absolutely right, and I think, as you mentioned, with all of these projects ramping up, more and more people are going to start to get involved. You’re already seeing it; we’re proof of concept of that. And I think it’s just going to be incredible. I see moon bases in the near future. I see Mars bases in the near future, and I like to think of those as your gas stations. In some cases they might be, if we’re able to develop that technology, if we’re able to use the resources on these other planetary bodies. Once you take that little step away from Earth, whether that’s the moon or Mars, you start opening up more accessible areas of our solar system and of space.

Whatever contribution we all can make to that, whether it’s our first attempt at a nano rover, being one of the earliest concepts of, as you say, a little device to maybe transport some goods or search for materials, we’re happy to be part of that larger journey through space, and we’re going to keep working towards it.

John Koetsier: And who’s signing up for a ticket to Mars?

Carmyn Talento: Maybe!


Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Billions of robots in 10 years


Billions of robots within a decade? A similar growth curve to smartphones?

We currently have about 30 million robots on the planet, not counting Roombas and similar small bots. RobotLab CEO Elad Inbar says that will hit BILLIONS with a B within 10 years.
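
As a quick sanity check on that claim (our arithmetic, not Inbar’s; “billions” is read here as 2 billion), the implied compound annual growth rate works out to roughly 50% per year:

```python
start_robots = 30e6    # ~30 million robots today, per the episode
target_robots = 2e9    # "billions with a B": 2 billion assumed for the math
years = 10

cagr = (target_robots / start_robots) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")  # ~52% per year, sustained a decade
```

For comparison, that is in the same neighborhood as the smartphone ramp Inbar invokes below.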

(consider subscribing on YouTube?)

We discuss the exponential increase in commercial robots globally and predict billions of robots integrating into daily activities, from service industries to personal assistance, over the next decade. We chat about the evolution of robotics from novelty items to essential aspects of business operations, highlighting the role of robots in automating mundane tasks and their future potential in enhancing customer service and living standards.

Inbar also emphasizes the importance of service infrastructure to support the widespread adoption of robotics technology, drawing parallels with past technological advancements like mobile phones and cars. And we dive into specific applications of robots in restaurants, cleaning services, and healthcare, particularly for dementia patients, and the franchise model RobotLab is adopting to expand its reach and capacity to deliver robotics solutions.

Billions of robots: zoom to the section you’re most interested in

Zoom to the topics you’re most interested in …

  • 00:00 The Dawn of the Robot Decade: Envisioning a Future with Billions of Robots
  • 01:02 The Big Picture: Robots Transforming Business and Society
  • 07:10 The Current State of Robotics: From Hospitality to Manufacturing
  • 09:50 The Future of Work: Robots Filling the Gaps in the Workforce
  • 12:40 Enhancing Customer Service: How Robots are Changing the Game
  • 13:31 The Restaurant Revolution: Robots Taking Over Service Roles
  • 16:35 Exploring the Role of Robots in Restaurants
  • 16:47 Adapting Robots to Different Restaurant Environments
  • 18:18 Growth Areas Beyond Restaurants: Cleaning and Retail
  • 22:47 The Future of Customer-Facing Robots
  • 24:00 Robots in Assisted Living: A Compassionate Solution
  • 27:09 Unlocking the Potential of Robotics in Business

Subscribe to the audio podcast

 

And … a complete transcript of my chat with RobotLab CEO Elad Inbar

RobotLab

Elad Inbar: We are going to see billions, with a B, of robots out there.

Think about it this way: there are 8 billion people on this planet, and 30 million robots, as big as that sounds, is not even half a percent of the population. So we are in this exponential growth right now.

John Koetsier: Are robots soon to be part of our everyday lives in hotels, restaurants, airports, everywhere? Hello and welcome to TechFirst. My name is John Koetsier. The first time I saw a robot in a hotel was in Japan, more than a decade ago, maybe 15 years. It was mostly a toy, but it was cool.

The question is: is that changing, and where are robots close to actually delivering business value? To chat, we have Elad Inbar. He’s the CEO of RobotLab; they’ve deployed tens of thousands of robots in businesses over the last 15 years. Welcome, Elad. How are you?

Elad Inbar: Yeah. Thank you, John. Thanks for having us. 

John Koetsier: Hey, super pumped to have this conversation.

Let’s start with the super big picture. We’re going to get into the details: where you deliver robots, what they do, how much they cost, payback periods, all that stuff. But let’s start with the really, really big picture. It’s early days in robotics; there’s cool stuff, there’s amazing stuff. What kind of future do you envision when you look forward, maybe 10 years, maybe 20, for robots and humans working together?

Elad Inbar: Yeah, that’s a great question. And I always love to start from the big picture, because especially in technology it’s very hard to comprehend the pace of change. We all remember the first mobile phone, the Motorola with the shoulder strap, right? Or at least we know of it. It was launched roughly in the late eighties, and a decade later, by the late nineties, everyone already had a mobile phone. The same happened with the smartphone: from the time Android and the iPhone launched until everyone had a smartphone was roughly a decade. Same with internet penetration, same with laptops, and so on. And I believe today we are roughly in the second or third year of the robot decade, even though we don’t really see it everywhere yet.

I can tell you that, based on the international robotics association, there are around 30 million commercial robots out there. I’m not talking about Roombas and residential robots; I’m talking about commercial cleaning, delivery, and factory robots. Thirty million robots is a pretty large number by itself. But if you consider the decade that is upon us, in which we go from basically zero products out there to almost a hundred percent penetration, we are going to see billions, with a B, of robots out there.

Think about it this way: there are 8 billion people on this planet, and 30 million robots, as big as that sounds, is not even half a percent of the population. So we are in this exponential growth right now, and that’s actually something we see in our own numbers. That’s why we decided to grow in a certain way, using a franchise model, which we can talk about later, because we get demand from so many different places that need robots today, right now, and we just can’t be everywhere all the time.

So when we look at the horizon, what’s coming? I can refer to one person: Elon Musk. A couple of weeks ago, on Tesla’s earnings call with shareholders, he said they’re going to deploy 1 billion, with a B, of their Optimus humanoid robots by the end of 2030. Just one company, with one single robot, is going to make 1 billion products.
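
An editorial aside: it’s worth checking what “billions within a decade” implies. Growing from the roughly 30 million commercial robots Inbar cites to billions in ten years requires a sustained compound growth rate of 40% or more per year. A quick back-of-the-envelope check in Python, using only the figures quoted in this conversation:

```python
# What compound annual growth rate (CAGR) turns ~30M robots into billions
# within a decade? Figures are the ones quoted in this conversation.
start = 30e6   # ~30 million commercial robots today (Inbar's figure)
years = 10     # "the robot decade"

for target in (1e9, 2e9, 8e9):  # 1B, 2B, and roughly one robot per person
    cagr = (target / start) ** (1 / years) - 1
    print(f"{target:.0e} robots in {years} years -> {cagr:.0%} per year")

# 1e+09 robots in 10 years -> 42% per year
# 2e+09 robots in 10 years -> 52% per year
# 8e+09 robots in 10 years -> 75% per year
```

For context, smartphone adoption sustained growth in roughly this range through its first decade, which is exactly the analogy Inbar draws.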

John Koetsier: well I’ll believe that when I see that Elon Musk also told me that I could buy a Tesla and it would be making money, self-driving as a taxi about 2015 or something like that. So I’ll believe that when I see that.

But I have talked to, I, I agree. I agree. Investors behind figure ai, which is, you know, a humanoid robot company that will compete with Optimus, and I’ve talked to the executives. Apron and many others. Yeah. Figure as well. And many of them are saying, Hey, you know what, uh, we expect the numbers of humanoid robots to be roughly equivalent to or surpass the human population at some point.

Exactly. So that that point in general is not tremendously controversial in the robotics community, at least. 

Elad Inbar: Exactly. And it’s important, I think, for people who are not from the industry to understand what’s coming. Because we can’t even comprehend it: within a decade, and let’s say Elon Musk is wrong by five years, we’re not talking about 50 or a hundred years from now. It’s in our lifetime. We are going to have, as you said, more robots than people. How are our lives going to change as a result? How are the service industry, the entertainment industry, manufacturing, and so on going to change?

This is something where we have to take a step back and look at the overarching trend that is happening in the industry.

John Koetsier: It raises so many questions. Normal people, people who aren’t in technology, are already dealing with so much future shock, so much change. And change of this scale, change that potentially impacts their wages and how they make their livelihood, is going to be a tsunami. An absolute tsunami.

Okay, we’re going to get into where you’re delivering them now, what verticals, what you see growing, all that stuff. But maybe let’s continue at the high level, because it’s been interesting. I want to talk about what happens as we enter that reality of robots entering the workforce: maybe it’s hospitality, maybe it’s delivery, and maybe eventually they’re humanoid and actually doing interesting things in a factory or a warehouse.

How do you see robots and humans working together? There are lots of options, right? One human, one robot, and the robot takes the job. One human, one robot, and the robot helps with the job. Or a robot makes or does something so a human can do the job with significantly greater quality. And there are many more scenarios as well. How do you see it?

Elad Inbar: So let’s start with where we are today. Today, we at RobotLab, along with other companies and manufacturers, are mainly helping business owners automate tasks that people don’t want to do anymore. That’s where we are today.

People don’t want to clean floors or vacuum corridors for eight hours a day. People don’t want to run dirty dishes back to the dishwashing station. These are tasks that still need to be done. We’re hearing even from school administrators that the janitors are retiring, and there’s no new generation that wants to come and clean the school facilities every day. But the floors still need to be cleaned every single day. So this is where we are today.

My observation is that this was happening for many years, even before Covid, but Covid accelerated it. Covid happened almost four years ago; I believe in a week or two it will be four years since we were all sent home to flatten the curve, roughly mid-March. And if you think about it, the people who were in these entry-level jobs four years ago have moved on. They are four or five years into their careers. They want higher-paying jobs, they have more responsibility, they want to start a family and buy a home. They don’t want to do the entry-level jobs anymore.

And the new generation that’s supposed to step into those entry-level jobs? They were in eighth grade when Covid hit. They never experienced childhood the way we experienced it. They never worked at a Burger King flipping burgers or did those summer jobs. Everything they know is about being online, being at home: DoorDash and Uber Eats and trading crypto on Robinhood. That was their entire growing up as high schoolers. So they finish high school, they’re ready to join the workforce, they go to their first job, and the business owner says: oh, that’s your first job? Awesome. Twelve dollars an hour, take this broomstick, and get to work over there. And they’re like: no, I’m not going to do that. I never did that. Why would I need to do that?

So this is the gap that opened up. One group moved on, and the next group is not willing to step in. Robots today need to help business owners operate where people don’t want to do these jobs anymore. That’s where we are today. We have delivery robots helping in restaurants and hotels, room service robots, and so on. We have cleaning robots designed to clean large spaces: think of a ballroom in a hotel, corridors across multiple floors, warehouses, car dealerships, supermarkets. And we have customer service robots, because one of the biggest challenges right now is that people don’t want to talk to people. They don’t want to do customer service, because everyone has attitude, on both sides, customers and representatives alike. Robots can offer a way of providing customer service and information in a consistent way.

We also have cooking robots and the like, because the same problem happened in the back of the house as well. Line cooks are in great shortage right now.

John Koetsier: I had a conversation, it’s got to be half a year ago now … actually, it was December 8th, 2020, I just looked it up on my own website, with Miso Robotics founder Buck Jordan. They make Flippy, the burger-flipping robot.

And he was saying he was talking to restaurant owners who had nobody who would come in and flip the burgers, flip the fries, all that. White Castle, a big burger chain in the US, did a big pilot project around this. I didn’t hear how it went or whether they kept it, but they had similar issues bringing in low-cost labor to make fast food.

Elad Inbar: Yes, exactly. So this is where robots are today. We have a need, businesses still need to operate, and people are not willing to do the work. So robots step in to help the few who do show up.

Think about yourself: how many times have you sat in a restaurant and asked for the check? You raise your hand, hey, can I get the check? And the server says, just a second, I’m just running over there, I’ll be back in a second. And it’s been 5, 7, 10 minutes, and she keeps running back and forth. She didn’t forget you; she’s just very, very busy.

And why is that? Because the few who do show up to work are overworked. They cannot provide the level of service that we as customers expect. And for every minute that passes while we wait for the check, their tip is shrinking, right? Because we give tips for the level of service, and if we don’t get service, the tip gets smaller and smaller.

What we’re seeing in every restaurant where we’ve deployed a service robot, a delivery robot, is that the servers can stay in the dining room. The robots save them all the running back and forth to the kitchen. And with the servers staying in the dining room, their tips actually increase. Because if I run out of water, they refill it immediately. If I drop my fork, they bring me a new one immediately. If I need the check, it’s there in two minutes. That kind of service is what we’re looking for. We live in the first world; we have enough food at home. We go to a restaurant for the service, not out of a real need for food. So if we get the service, we’ll recommend the place and come back. It’s a positive cycle that contributes to the business.

John Koetsier: That’s a good segue, then, to talk about where you’re delivering robots the most right now. What are the biggest sectors? And then we’ll talk about the growth rates as well.

Elad Inbar: Restaurants, by far, right now. All over the place. It doesn’t matter if it’s a full-service restaurant or quick-serve or whatnot; every restaurant owner is struggling with the same thing: finding people, and getting people to show up. Because even if you have people, they just don’t show up.

John Koetsier: Right. So the robot you’re selling into a restaurant is what you mentioned: it’s delivery, it’s bringing stuff out. What does that look like? What’s the form factor? I’m guessing it’s on wheels, the top is a tray, and it has some kind of screen and maybe some kind of talking capability.

Elad Inbar: RobotLab is a unique company in that we don’t manufacture the robots. We used to manufacture robots in the past, but we stopped. Today we partner with the largest manufacturers from all around the world. For example, we are the exclusive partner for LG robots across North and Latin America, so we bring their robots to market, but we’re partners with other manufacturers as well. Typically these robots have shelves, a couple of wheels, and sensors so they can navigate around obstacles and around people.

But not every robot is the same. Different use cases require different types of robots. I’ll give you a couple of examples. Let’s say you’re a small bistro-type restaurant, where everything is packed and people are sitting close to each other. You’ll need a smaller robot with a smaller footprint that can navigate between the chairs. Compare that to, say, an IHOP-type restaurant, where you have large parties, typically parents and grandparents and kids, and large plates. A small robot that can navigate between chairs isn’t going to fit enough plates to bring to the table in an IHOP-type restaurant. So not every robot can do the same thing.

Another example: if you’re a Vietnamese pho restaurant, you serve giant bowls of boiling liquid, right? You need superb suspension. One of the manufacturers we work with has a drive pattern where the robot not only has great suspension but also leans backward as it slows down, making sure the liquid won’t spill out of the bowls, because they’re focused on that type of food.
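
An editorial aside: there’s real physics in that leaning detail. Under braking, the surface of a liquid tilts to stay perpendicular to the apparent gravity vector, so a tray that pitches by arctan(a/g) keeps the soup level in the bowl. A quick illustration (the deceleration values are arbitrary examples, not any manufacturer’s spec):

```python
# Why a pho-carrying robot leans as it brakes: under deceleration a, a
# liquid's surface tilts by arctan(a/g); pitching the tray to match that
# angle keeps the surface level relative to the bowl, so nothing spills.
import math

g = 9.81                      # gravitational acceleration, m/s^2
for a in (0.5, 1.0, 2.0):     # example braking decelerations, m/s^2
    tilt = math.degrees(math.atan(a / g))
    print(f"decelerating at {a} m/s^2 -> tilt the tray {tilt:.1f} degrees")
```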

So not every robot is the best fit for every restaurant, and that’s what we do: our team partners with the customers, and we ask a lot of questions about their environment, their food, their kitchen, their floor, the width between the tables, and so on. Based on all of that, we recommend the right product that will be successful in their environment. So these are, generally speaking, delivery robots.

John Koetsier: How does that work at the table? Does the robot come up and say, hey, here’s your food, and you have to grab it off the robot? Does a server come by and put it on the table? How does that work?

Elad Inbar: That’s a great question, because again, not every restaurant is the same. If I go to a Chick-fil-A, for example, a fast-food restaurant like McDonald’s for those who don’t know, it’s fine if the robot comes up next to me and I take the tray and put it on the table. I don’t expect any more service than that, and it’s really cool that I was served by a robot. But if I go to a high-end steakhouse or a fine-dining restaurant, I don’t want to take the plate myself, because again, we’re paying for the service. I want the server to present the food to me and say: hey, Mr. John, here’s your steak, medium rare, the way you asked for it. Would you like some extra salt and pepper?

So in that case the robot needs to be in a holding position, just like where they put the foldable tables with the giant trays they bring out. The robot holds position, and the server takes the food from there and presents it to the guests, because in that type of environment it’s not acceptable for the guests to take the food themselves. We work with the restaurant owners, and we have different standard operating procedures for different types of restaurants, to match the level of service they want to provide to their guests with the right technology. But either way, the server stays with the guests, and that’s the purpose. The servers need to serve the guests, not run back and forth. That part can be automated.

John Koetsier: That makes a ton of sense. So restaurants are a big growth area. What other verticals are significant growth areas right now?

Elad Inbar: The other vertical where we see a lot of demand is cleaning.

The entire cleaning industry, whether it’s hotels, restaurants, or assisted living facilities, is in very high demand right now. They all suffer from labor shortages, and they can’t clean enough of their facilities because they don’t have the manpower. For example, some of our customers are assisted living facilities with 200 rooms and one cleaning lady. One. It’s hard on her, because she can’t do everything; she can’t touch the entire facility every day. And it affects their cleanliness scores and all of that.

So we introduce robots. Today, by the way, cleaning robots are mainly for public spaces; we don’t yet have robots that can clean rooms or bathrooms. But even helping with the public spaces, all the corridors, the ballrooms, the dining areas, the reception areas, we’re talking about hours per day, and tens or hundreds of thousands of square feet at the end of the day. So this is a great help for them.

John Koetsier: I assume retail is another interesting area as well. I think it was Walmart that recently did a pilot project with a stock-sensing robot going down the aisles: what’s low, what needs to be restocked, all that sort of thing.

Elad Inbar: Yeah. There have been a few attempts at stock-keeping robots in the past few years by different companies. As far as I’m aware, most of them are still in the pilot phase or were discontinued. The reason is the way they work: the robot basically has a long post with cameras and takes pictures of all the items on the shelves. But in many cases, when there’s only one box of cereal left, the associates at the store bring it to the front of the shelf, so it looks full. The robot can’t see behind it. That’s a problem that still hasn’t been solved. So most of these are still in an early phase; I’m not familiar with any successful deployment of this type of robot.

But when it comes to cleaning, Walmart, or any supermarket, is a great fit. Cleaning the floors, either on a consistent basis, going up and down the aisles, or when milk is spilled in aisle four and someone needs to get there: send the robot and let it clean that up, instead of pulling a cashier or someone else away from something more important. So it’s that kind of thing, automating, like …

John Koetsier: Like actually helping a customer. Exactly. Trying to get help in one of these stores is really, really challenging.

Most of the jobs you’re talking about are jobs that don’t require a significant amount of engagement with humans. They require safety protocols; the robots need to get around humans and be careful where humans are, but they don’t involve a lot of interaction with humans. Where do you see that coming in? Is it coming in already, and if so, where? And do you see technologies like LLMs, GPT-4, that sort of thing, being important in that?

Elad Inbar: Yeah. The types of robots we’ve discussed so far have sensors, laser sensors and ultrasonic sensors and cameras and so on, and they’re safe around humans. Every time they detect motion or something, the first rule is: stop, and let the human pass, then keep going. If the person stays there, or the obstacle stays there, then replan a path around it and try to find another way.
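
An editorial aside: the stop-first, then-replan behavior Inbar describes is a common pattern in mobile-robot navigation stacks. A minimal sketch of that control loop in Python; every interface name here is hypothetical, standing in for a real sensor and drive stack, not any vendor’s actual API:

```python
import time

WAIT_FOR_HUMAN = 3.0  # seconds to wait before treating an obstacle as static

def navigation_tick(robot, planner, goal):
    """One tick of the stop-first, then-replan loop described above.

    `robot` and `planner` are hypothetical interfaces: obstacle_detected()
    fuses laser, ultrasonic, and camera data; replan() is a path planner.
    """
    if robot.obstacle_detected():
        robot.stop()                          # rule #1: stop, let the human pass
        waited = 0.0
        while robot.obstacle_detected() and waited < WAIT_FOR_HUMAN:
            time.sleep(0.1)
            waited += 0.1
        if robot.obstacle_detected():         # still there: treat it as static
            robot.path = planner.replan(      # and plan a route around it
                robot.pose(), goal, avoid=robot.obstacle_positions())
    robot.follow_path()                       # otherwise keep following the path
```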

So this is what these robots are doing today. And again, as you said, they’re not talking to people, they’re not engaging, they’re not entertaining. Some of the delivery robots have a birthday song, so if they deliver the cake, they’ll sing happy birthday. But that’s not an interactive way to talk to customers.

We have another class of robots that are designed to be customer-facing: designed to answer questions, to talk about products, services, and the location. I’m sure you’re familiar with Pepper, the humanoid robot from SoftBank Robotics. This robot is designed to be customer-facing and answer questions; it can speak and understand 26 different languages. Think of it as a concierge at a hotel, answering all the repetitive questions: when does the airport shuttle leave, what are the hours of the gym, can you recommend restaurants around here, can you recommend attractions for the kids? All these repetitive questions can be answered in a coherent way, consistently, whether it’s 6:00 AM or close to midnight. You don’t have missing employees who don’t show up in the morning, or employees who talk back and bring attitude, or, I mean, you’re laughing, but this is reality, people who sit on social media on their phones in the back office for two hours and don’t even occupy their station. These are real stories I’m hearing from business owners.

We even have these robots in assisted living facilities, where they help especially with people with dementia, which is a big issue right now, because assisted living facilities don’t have enough therapists to work with the residents who have dementia. These people wake up in the morning and don’t know why they’re there, why they’re not home, where their spouse is, where the kids are, why they’re being held there. They’re not stupid; they just forgot. They just don’t remember what’s going on. So we need to go to them and say: hey, Mrs. Jones, you’re here because you have a memory issue. We’re here to help you. Your spouse will come later this afternoon, your kids will be here, and so on. We just try to ease them back into daily life. We don’t have enough therapists to do that, and robots are great assistants, because a robot like Pepper can come over and show pictures of their wedding day on the tablet on its chest. We work with the families and with the facilities: a picture of their first child, and so on. It basically lowers the level of stress.

Think about yourself: you wake up in the morning, you’re being held here, the door is locked. Why am I here? I want to go home. I want to go to my family. You will not hold me here. And because we cannot talk them through it, they get frustrated, they get upset, sometimes violent. So they get locked in their rooms, and that just accelerates the decline. This is not the way to treat the elderly. Robots are not judgmental. The robot will do this 50 times a day without any problem. It’s not going to say: hey, I told you five minutes ago, can’t you remember? It’s not going to have that attitude, because it’s a robot. That’s where customer-facing robots can help different types of people, from hotels all the way to assisted living facilities.

John Koetsier: I saw a lot of that kind of robot at CES in January of this year, actually. And at the time I was super skeptical. One that I saw sits on a desk or a table or a counter; it has a screen, not really a face, but sort of glowy eyes and a sort of smile. And I was like, come on. But I can totally see it. My mother has been diagnosed with dementia, and I can totally see this being incredibly helpful. People like this come to need 24/7 care, because they get up in the middle of the night, and they go somewhere, and they don’t know where they are, and they get lost and upset and scared. And even if you’re paying $10,000 a month for a super amazing care facility, there just aren’t enough people to be with them all the time. And as their kids, we’re working, we have jobs, we have our own kids; we can’t be there 24/7 either. So I think there is a huge opportunity there.

Okay. We’ve talked about a lot of the stuff I wanted to get into, but I wanted to ask one more question before we start to come to a close. Where do you see the biggest opportunity over the next few years? We’ve talked about a couple of different verticals, and you’ve talked about growth that follows the path of the smartphone, of other technologies that become ubiquitous in basically 10 or 15 years. Where do you think the biggest areas of opportunity are?

Elad Inbar: The biggest opportunity is actually on the service side: how do we enable business owners and build their trust in this type of technology? That’s the biggest opportunity, and it’s something we’re working really, really hard to solve. My goal for the team is to have RobotLab offices in a hundred metro areas by the end of next year.

Let’s take the car industry as an example. Everyone knows Henry Ford’s biggest invention was the production line: the ability to produce the Model T at low cost and high volume. But everyone forgets another invention of his that was probably more important: the car dealership model. Let’s say you’re in Los Angeles or San Diego. This guy Henry Ford is building cheap cars in Michigan. I’m not going to buy one from Michigan, because if it breaks in Los Angeles, what am I going to do? Tow it with a horse all the way back to Michigan to fix it? I need someone in my backyard, down the street from me, who can take care of me if something goes wrong. In my mind, that was the pivot point that unlocked the potential, because people want this technology, they want the benefits of the technology, but they’re afraid to put their money into this type of thing if they won’t get service.

So this is something we’re really, really focused on, because I can guarantee our product portfolio in five years will look totally different than it does today. With the pace of innovation and the pace of change, everything will change. Look at our phones and our laptops: we keep upgrading them every other year. These things will change, we’ll have new robots, and Optimus, eventually, in 15 years rather than 10, will be there. But no one will buy an Optimus in Reno, Nevada, or in Tampa, Florida, if there’s no service available down the street. So that’s the biggest opportunity: how do we unlock the model where everyone feels comfortable enough to step into this new adventure of putting a robot in their business and relying on a robot for their business? To do that, I have to have someone who will take care of me. That’s where, in my mind at least, the biggest opportunity is.

John Koetsier: That’s super interesting. I was going to bring up a counterfactual. I was going to say, hey, Tesla doesn’t have dealerships. But essentially they had to build dealerships, because they needed to solve the service issue, and they don’t have enough of them; I know, because I drive a Tesla, and it’s sometimes hard to get to one. They had to build all of that themselves, and the cost of that, of course, is significant.

Now, you mentioned earlier that you’re doing a franchise model. That’s a new one. When most people think about franchises, they think McDonald’s, they think Wendy’s, something like that. You’re doing a franchise model that should help you grow quickly, I assume.

Elad Inbar: Yeah, exactly. When we look at our company, we’ve grown year over year. We’ve been doing this for 16 years, and year after year we’ve grown. But we got to the point, especially when you look at exponential growth, where the next step is so much bigger than where we are today, and we can’t do it with our own resources. We can’t open 50 service centers around the country tomorrow. We looked at different growth strategies, and time and time again the franchise model came out as the right one, because we’re talking about business owners who are passionate about the market, about the industry, about servicing customers, and they’ll do everything in their power to be successful, because it’s their business.

We train them on everything we know. We have all the training materials: hundreds and hundreds of hours of online learning on every product we provide, on-site training, all the SOPs, the standard operating procedures, everything. We’re just missing the people on the ground. Marry these two together, and it’s basically an unstoppable machine. I think that’s what will unlock the biggest potential here, because business owners are waiting to have someone down the street from them.

John Koetsier: Super interesting. I want to thank you for taking the time and for an interesting conversation.

Elad Inbar: Thank you for having me.


Here’s an all-wheel drive e-bike … with ChatGPT

Do you need ChatGPT integrated into your new bike? How about an all-wheel drive bike? (OK: a 2-wheel drive … but yeah, that’s all-wheel drive on a bike!)

In this episode of TechFirst, host John Koetsier chats with the CEO of Urtopia about their new AI-integrated ‘smart bike with a mind.’

(consider subscribing on YouTube)

The e-bike market is predicted to grow to about $26 billion by 2028, but Dr. Owen Zhang explains how Urtopia is taking a different approach by developing most parts in-house to create a fully integrated, software-enabled product. He says their AI features, like ChatGPT integration, make e-bikes safer and more personalized, and can provide assistance such as directions, making the ride more enjoyable.

We also chat about the world’s first e-bike that has drive motors on both wheels, providing more power and better traction. Want to skip to a section? Go for it:

  • 00:00 Introduction and Welcome
  • 01:06 Exploring the Fusion GT Bike
  • 01:47 The Design and Development Process
  • 03:53 The Power of Dual Motor and Dual Battery System
  • 06:51 The Future of Bikes: ChatGPT Integration?
  • 07:12 The Role of AI in Urtopia’s Bikes
  • 07:38 The Vision of Urtopia: A Bicycle with a Mind
  • 16:48 The Future of Smart Devices and E-bikes
  • 25:30 Conclusion: The Bike as a Wearable Device

Audio podcast: a bike with ChatGPT

Perhaps you prefer the audio podcast? Be my guest …

Get it on your favorite podcasting app:

 

Intro and overview for this episode: the bike with a ‘mind’

Here’s an AI-generated blog post about this episode. Appropriate for a show on ChatGPT, no? Note: the AI engine that my podcasting software uses is very exuberant and kind of “extra” … so everything is a little over the top 🙂.

In the ever-evolving world of technology, innovation knows no bounds. The latest groundbreaking development that has caught our attention is the Urtopia smart bike. This incredible creation is set to revolutionize the e-bike industry with its unique features and state-of-the-art technology.

In this blog post, we will delve into the fascinating details of the Urtopia smart bike, including its dual motor and dual battery system, as well as its integration with ChatGPT.

Let’s explore the future of cycling and how the Urtopia bike is pushing boundaries.

The Urtopia smart bike was unveiled by the visionary CEO, Dr. Owen Zhang. With a background in mechanical engineering and a passion for innovation, Dr. Zhang set out to create a bike that would not only redefine the e-bike market but also bring a fresh perspective to the concept of “smart” devices. The fusion of cutting-edge technology with sleek design resulted in the birth of the Urtopia smart bike.

One of the standout features of the Urtopia smart bike is its dual motor and dual battery system. Designed to provide an unparalleled riding experience, this all-wheel-drive bike ensures maximum traction and control, even on challenging terrain. Dr. Zhang’s vision for a bike that could tackle steep slopes and rough trails became a reality with the Fusion GT bike.

To create a product that would truly stand out in the e-bike market, Dr. Zhang collaborated with renowned designer Hartmut Esslinger, who previously worked with Apple’s Steve Jobs. Their partnership resulted in a unique design concept named Snow White, which marries aesthetics with functionality. Urtopia’s Fusion GT bike embodies the perfect blend of powerful performance and stunning design.

The integration of ChatGPT into the Urtopia smart bike takes the concept of smart devices to new heights. Riders can now enjoy a conversational experience with their bike, receiving real-time feedback, navigation assistance, and much more. The ChatGPT API enables seamless integration with popular apps like Strava and Apple Health, enhancing the overall riding experience.
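
A concrete, though entirely hypothetical, illustration of what such an integration might look like under the hood: ride telemetry folded into a chat-completion request. The telemetry fields, prompt, and model choice below are invented for illustration; Urtopia has not published its implementation.

```python
# Hypothetical sketch of a conversational bike assistant; not Urtopia's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_bike(question: str, telemetry: dict) -> str:
    """Answer a rider's question in the context of live ride data."""
    context = ", ".join(f"{k}={v}" for k, v in telemetry.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-completion model would do here
        messages=[
            {"role": "system",
             "content": "You are a voice assistant on an e-bike. Be brief; "
                        f"the rider is riding. Current telemetry: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_bike("How much range do I have left?",
               {"battery_pct": 62, "speed_kmh": 24, "assist_mode": "eco"}))
```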

Dr. Zhang envisions a future where bikes become an integral part of our daily lives, seamlessly connected with other smart devices. With ongoing developments in AI technology, Urtopia plans to introduce UrtopiaGPT, their proprietary AI model based on GPT-5. This advancement will further enhance the capabilities of the ChatGPT system, allowing riders to interact with their bikes effortlessly.

Urtopia’s smart bike represents a paradigm shift in the e-bike industry. Driven by the vision of creating a truly intelligent bike, Urtopia has combined cutting-edge technology, sleek design, and seamless integration to offer an unparalleled riding experience. With the dual motor and dual battery system, as well as the integration of ChatGPT, the Urtopia smart bike is paving the way for the future of cycling.

Stay tuned for more exciting developments from Urtopia as they continue to push the boundaries of what’s possible in the world of e-bikes.


App store for your brain: reading brain waves to fix sleep, pain, learning

App Store for the brain

Can you deliver medical treatment by changing brainwaves instead of injecting drugs?

Elon Musk has recently implanted his first Neuralink into a human patient. But can we get neurotech medical treatment without drilling holes in our skulls?

Maybe …

According to Elemind, a startup with roots in MIT, we can. And they say they can read your brainwaves, manipulate them, and fix issues like sleep disorders, tremors, and pain, as well as speed up learning.

Watch here:

Subscribe to my YouTube channel here

Today we’re chatting with Meredith Perry, the CEO and a former NASA astrobiology researcher, plus Dr. David Wang, co-founder and CTO, who has a PhD in AI from MIT. This technology could potentially treat medical conditions ranging from sleep disorders and tremors to learning difficulties. We also discuss the future of medtech, envisioning an ‘app store for the brain’ where individualized treatments can be downloaded like apps, focusing on promoting the most optimized state of health for any given individual through real-time detection and diagnosis.

Check out the story on Forbes …

Get the audio podcast: neurotech startup building an app store for your brain

 

AI summary and transcript

Summary:

This podcast is a conversation between John Koetsier, the host of TechFirst, and Meredith Perry and David Wang, the CEO and CTO of a neurotech health company called Elemind. They discuss the company’s wearable device, which uses neurostimulation to read and stimulate the brain in real time. They talk about the potential of using this device for medical treatment without the need for drugs, and its effectiveness in improving sleep, reducing pain, and enhancing learning. They also discuss the company’s future plans, including the development of an app store for the brain and individualized treatments based on AI and machine learning. Overall, the conversation highlights the innovative technology and potential benefits of this neurotech device.

Transcript:

Meredith Perry: We use a wearable neurotech device to read the brain in real time and intercept it in real time with something called neurostimulation. That’s using sound or light or vibration or electricity to stimulate the brain. When we do that, we can actually guide the brain precisely, and that leads to a behavior change.

So like a drug, but much smarter and without the side effects.

John Koetsier: Hello and welcome to TechFirst. My name, of course, is John Koetsier. Elon Musk has implanted his first Neuralink into a human patient. But can we get neurotech medical treatment without drilling holes in our skulls? Maybe. According to Elemind, a startup with roots in MIT, we can. They say they can read your brainwaves, manipulate them, and fix issues like sleep disorders, tremors, and pain, as well as speed up learning.

Today we’re chatting with Meredith Perry, the CEO and a former NASA astrobiology researcher, plus Dr. David Wang, co-founder and CTO, who has a PhD in AI from MIT. Welcome, both of you.

Meredith Perry: So nice to be here. Thank you so much for having us.

David Wang: Thanks for having me.

John Koetsier: Super pumped to have you. Meredith, are you sure you didn’t bring something back from outer space?

Meredith Perry: I’ve never been to outer space, but I’d love to bring something back for you if I ever get the privilege to go.

John Koetsier: Give us the big picture. What does Elemind do?

Meredith Perry: Elemind is a neurotech health company, and we’ve developed a wearable device that’s going to allow you to optimize and improve your health without the use of drugs. Let me give you some context. As we know, healthcare today is reactive, it’s blunt, and it’s largely dependent on pharmaceuticals. And while pharmaceuticals can often be effective, we also know they can sometimes be addictive and have negative side effects. So my co-founders and I have spent the last four and a half years developing a way to achieve the same effects as drugs, without chemicals and without side effects.

We call this approach electric medicine. With electric medicine, we use a wearable neurotech device to read the brain in real time and intercept it in real time with something called neurostimulation: using sound or light or vibration or electricity to stimulate the brain. When we do that, we can actually guide the brain precisely, and that leads to a behavior change. So like a drug, but much smarter and without the side effects. And there are a lot of other cool things we can do once you can bring in AI and machine learning and use sensors to see whether something like this is actually working in the body.

John Koetsier: When I listen to what you’re saying, the picture that comes to mind is some deep-sea helmet I wear on my head. It’s got sensors, it senses my brainwaves, and it’s got all kinds of equipment to influence them. What is it actually?

Meredith Perry: It’s a very simple form factor. When you think of an EEG device, what comes to mind might be some sort of helmet with tons of wires. We’ve made a very sleek form factor, something that looks somewhat like a headband, with a suite of sensors reading what’s going on inside your brain and inside your body. We use all of that information to know when to stimulate, and how to precisely guide the brain to achieve the desired outcome.

John Koetsier: Is that something like a Muse headband, maybe?

Meredith Perry: We’re not going to talk specifically about our form factor today, but it is a headband wearable, and it will be comfortable and flexible and low-profile and easy to move around in or sleep with.

John Koetsier: Interesting hint: you can sleep with it. That seems to indicate it’s soft, or at least somewhat soft. Okay, I won’t dig too much further; I appreciate that you’ll show it to me when you’re ready.

David Wang: How it works has been the focus of our work for the past five years or so. At the core of Elemind’s technology is the ability to predict when a biological oscillation is going to reach its peak, or reach its trough, or any phase in between. And with this ability to predict, we then have the ability to intercede. Meredith just listed all the different stimulation strategies we use, but fundamentally, you can go back to freshman physics: we are doing constructive and destructive interference on your brainwaves. What makes our technology interesting, though, is that the method we use for stimulation doesn’t need to be the same electrochemical language your brain speaks. Fortunately, your brain is wired to sensors all around your body, so if we stimulate those sensors through light, sound, or touch, we can drive the brain and neuromodulate its brainwaves.

John Koetsier: First of all, you’re a funny guy. But secondly, it’s really interesting that to impact the brain, you don’t need to make electrical changes to the brain per se. You simply need to present stimuli via our normal, natural senses, and the brain then reacts in how it interprets them, and that makes the desired change, precisely.

Meredith Perry: So John, you can think about it kind of like noise cancellation for the brain. To give you more context: the brain is an electrochemical organ, and we can measure brainwave activity on the outside of the brain using something called an EEG. David talked about biological oscillations; a brainwave is a biological oscillation, and different brain states are characterized by different frequencies of brainwaves. What we’ve learned is that by stimulating at certain times relative to the brainwaves, we can speed certain frequencies up, we can slow them down, we can amplify or suppress them. This is what neuromodulation is, and we’ve found that by changing the brainwaves themselves, we can actually change the state that someone is in.
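
An editorial aside for readers who want the freshman-physics version Wang mentions: the core trick is estimating the instantaneous phase of a brain oscillation and timing each stimulus to land at a chosen phase, near the peak to reinforce the rhythm, near the trough to cancel it. A toy offline sketch in Python; this is emphatically not Elemind’s algorithm, since a real device must predict phase ahead of time from noisy EEG, which is the hard part:

```python
# Toy, offline illustration of phase-targeted stimulation.
# Isolate one oscillation, read its instantaneous phase via the Hilbert
# transform, and mark stimulus times at a chosen phase (0 = peak, pi = trough).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # EEG sample rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake 10 Hz alpha

b, a = butter(4, [8, 12], btype="band", fs=fs)   # isolate the alpha band
alpha = filtfilt(b, a, eeg)
phase = np.angle(hilbert(alpha))           # instantaneous phase in [-pi, pi)

target = 0.0                               # stimulate at the peak
rising = (phase[:-1] < target) & (phase[1:] >= target)
stim_times = t[1:][rising]                 # roughly one trigger per 10 Hz cycle

print(f"{stim_times.size} pulses, mean interval "
      f"{np.diff(stim_times).mean() * 1000:.0f} ms (expect ~100 ms)")
```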

John Koetsier: Again, fascinating. Really, really cool. You’ve really got me wondering about the form factor here; I know you’re not talking about that. I’m wondering if it’s like a Muse, or maybe the Crown from Neurosity. We’ll see when it comes out. Talk a little bit about how effective it is. You’ve done, I believe, three published studies, and you’ve probably done a lot more work than that. How effective is this? If my pain is a 10, can you reduce it to a three? If I’m having trouble learning, how much can you speed that up? What kind of effectiveness data do you have?

David Wang: Yeah, that’s a great question. We’ve done studies in a variety of different areas, which is what’s really cool about our technology: it’s quite broadly applicable. In the area of sleep, we’ve studied over a hundred subjects and recorded over two and a half years’ worth of sleep data, and we’ve shown, in the particular case of sleep, that we can help people fall asleep significantly faster: about 30% faster, for about 73% of our subjects, which is huge.

We’ve also been working with all sorts of fantastic academic collaborators. From the University of Washington, we have really amazing work showing that we can increase someone’s tolerance to pain, in this case a temperature threshold, quite significantly, if we stimulate at the correct time. Work from Leuven and McGill University has shown that we can improve learning and response times by stimulating at the correct time. And we have a fantastic paper in Nature from about two years ago that shows a different form of stimulation, electrodes placed on top of the scalp, with which we can reduce tremors as well: for people with physiological shaking, by about 50%, with less than a minute of stimulation.

John Koetsier: That’s huge, because for people who have that, it can be completely debilitating; they’re not able to do anything. The pain piece is really interesting to me personally. I’ve been told by three doctors that I have a high pain tolerance, and I don’t know, I just have this switch I can flip in my brain: I still feel the pain, but I can stop caring about it. I don’t know if your device works that way or how it works. With sleep, you talked about speeding up the time it took to fall asleep by 30%. For something like tremors or other conditions, and it may be different for different conditions, what’s the treatment time period like?

Meredith Perry: With sleep, we see people fall asleep up to 76% faster; the average for that group is 30%. It really can make an enormous difference. The timing of the effect depends on the application. With tremor, you can actually see a video of somebody using Elemind to suppress their tremor, and you can see the impact instantaneously; after 30 seconds of stimulation the effect actually grows, but it’s instantaneous. With sleep, you’re not falling asleep instantly; we’re accelerating the time it takes you to fall asleep. So if you normally take 30 minutes to fall asleep, we might help you fall asleep in 15 minutes.

With pain, in the study we did, we were amplifying the delta waves that are associated with deep sleep, and we could amplify the sedative effect of an anesthetic. We saw that instantaneously too: we were able to put a higher temperature against someone’s skin, and when Elemind was combined with the anesthesia, people were shielded from that pain, which indicates that we could actually give people less drug when they’re undergoing anesthesia.

John Koetsier: Love it, love it, love it. The scientist tells me the 30% average; the CEO, who’s already planning the brochure, says up to 76%. Makes total sense.

Meredith Perry: We can’t leave money on the table, John. The facts are facts.

John Koetsier: Absolutely. Now, is this going to be a medical device? Is this going to be something that has to be prescribed, and maybe used in a medical scenario under supervision, or will it be off the shelf: buy it at a drugstore, use it at home?

Meredith Perry: The first product we go to market with is a consumer wellness product. It’s not a medical device, it’s not FDA-approved, and it’s focused on a wellness application. Moving forward, we will have form factors and models that will be medical devices and will treat medical issues.

John Koetsier: Now, you’ve been in stealth since 2019. That’s a decent length of time. It’s a hardware startup, so that’s not easy and it takes time, and it’s cutting-edge: it’s new technology you’re inventing. And by the way, there have been immense advances in AI over that whole period, and you use AI to understand the brain and then implement your treatments. It’s also been a crazy period to be in stealth: you picked Covid and lockdowns and shutdowns and then the return, and all that stuff. What are the next few steps? What do the next couple of years look like for you?

Meredith Perry: So, in a number of months, we're going to be announcing our first consumer product. It's going to focus on one application that we've conducted clinical trials on and been successful with, and a large focus of the company is going to move in that one direction. But beyond that, our grand vision for this company is to ultimately be an app store for the brain, where you can effectively download a treatment, or download a brain state, the way you download an app from an app store.

Our technology has capabilities that go far beyond just treatment. When you're wearing something with biosensors on it for a significant period of time, we can detect or diagnose in real time, with the assistance of AI and machine learning.

Whether you have an issue: perhaps you've had a stroke, or perhaps you're having a seizure, or we see that you are anxious or potentially have signs of mild cognitive impairment. That's something we're going to be able to tell people. And with the use of AI, we've also developed a tool that's going to allow us to learn over time which stimulation protocols will optimize a person's state the fastest.

And so the vision here is to be able to develop individualized treatments for different people and their different conditions, to allow them to be the most optimized versions of themselves at any given time, for whatever those disorders are.
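
(Editor's note: to make "learning which stimulation protocols optimize a state fastest" concrete, here is a minimal closed-loop sketch in Python. It is illustrative only; the protocol names, the simulated stimulator, and its effect sizes are hypothetical stand-ins, not Elemind's actual system.)

import random

# Hypothetical stand-ins: a real system would read EEG biosensors and
# drive a neurostimulator. Here the stimulator is simulated.
PROTOCOLS = ["slow_pulse", "fast_pulse", "phase_locked"]
TRUE_EFFECT = {"slow_pulse": 0.05, "fast_pulse": 0.02, "phase_locked": 0.10}

def apply_stimulation(protocol):
    """Simulated stimulator: returns the measured change in the target
    brain-state score (e.g., calmness) after one stimulation epoch."""
    return TRUE_EFFECT[protocol] + random.gauss(0, 0.03)

# Epsilon-greedy loop: learn which protocol helps this user the most.
estimates = {p: 0.0 for p in PROTOCOLS}
counts = {p: 0 for p in PROTOCOLS}

for step in range(200):
    if random.random() < 0.1:
        protocol = random.choice(PROTOCOLS)           # explore
    else:
        protocol = max(estimates, key=estimates.get)  # exploit best so far
    reward = apply_stimulation(protocol)
    counts[protocol] += 1
    # incremental mean: update this protocol's estimated effect
    estimates[protocol] += (reward - estimates[protocol]) / counts[protocol]

print("learned per-protocol effect estimates:", estimates)

After enough epochs, the loop converges on the protocol with the largest measured effect for that individual, which is the gist of personalization.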

John Koetsier: The mind kind of explodes here, because of course the whole quantified self comes into play a little bit.

When fitness trackers arrived, the Fitbit was one of the first big ones out there. More recently, I've interviewed somebody building what is essentially a Fitbit for your blood: you can test a drop of blood and see what's happening, what's changing, how your diet is impacting everything. You can almost think of this as a Fitbit for the brain, or for the mind.

And you almost wonder: will the wearable become something with fashion aspects, so that you could wear it full-time if you wanted to? Perhaps if you're a high-risk individual (you've had strokes in the past or something like that), you'd want instant awareness of the very early signs of one so you can do something about it.

Wow. Maybe you could build it into something like a headband or glasses. Very interesting.

Meredith Perry: Absolutely. And John, I think a key distinction from some of the wearables you mentioned is that almost all of the wearables that exist today just read. They just track. They tell you things that are happening, but they don't change anything.

So you can think about us not just as a tracker: we read and, in a sense, write. When we see that there is a problem, we can also fix that problem, or at least try to, as opposed to telling you, "Hey, you should go for a run" or "you should do this." We try to meet people where they are and be as simple as a pill.

And a pill doesn’t judge you. A pill doesn’t tell you anything. It just, it does the thing passively for you to get you into the state that you wanna be in.

John Koetsier: I’m returning the Apple watch. It. Can’t do the workout for me. How lame is that? Right? Wow. So lazy, interesting stuff. Very, very cool. I look forward to seeing the device, what it looks like.

I almost wonder if you'd partner with other people; there are so many cool startups in hardware and brain interfaces. I mentioned Muse; Neurosity is another one. But this app-store-for-the-brain concept … wow. And then, ideally, with some kind of private application of AI to understand my brain. I'm sure there are many similarities to everybody else's.

And then there’s some similarities in different types of people whose brains work similarly. But being able to handle that, manage that. and treat that in some level, is very cool.

David, I don’t think you got the off switch yet, do you? I mean, sometimes you just wanna turn the switch off and wake me up in 10 hours and I’ll deal with life at that point.

David Wang: I like to think of the technologies we create as guiding or directing the brain. It gives an individual more control over themselves than they would have otherwise.

Sometimes we think, "It'd be great if I just had the willpower to work a little harder, do something more." What if there was a device that gives you that willpower? That's what we're trying to enable.

John Koetsier: there’s so much potential for a device that, is not just read, but also write, we give drugs to people who have anxiety.

right, and it calms them down, and it has side effects, and sometimes their emotions just flatline. I know somebody very, very well who has had schizophrenic episodes, so he has to have treatment for that in the form of pharmaceuticals. And that has almost destroyed his personality and his ability to be a functioning spouse and father.

And obviously this is super futuristic, but imagine you could come up with something that does that non-invasively and very intelligently, selectively doing only the things that are required for the condition, and not totally obliterating somebody's personality or willpower or motivation …

Um, that’s huge. And I’m not even getting into addiction issues where people are addicted to certain feelings or drugs or different things that they get. And being able to address those, is incalculable.

Meredith Perry: We agree, and we certainly hope so. The way chemicals work means they have to go through your entire body to hit their target, which is why there are off-target effects.

With Elemind, we're interfacing with the brain and the nervous system directly; we don't have to go through the bloodstream, so we don't have any of those off-target effects. And that makes improving your health something easy, low-effort, and without compromise.

With a drug, we are deciding: okay, I'm going to take allergy medication, but I know I'm going to get groggy. You shouldn't have to make that trade-off.

David Wang: If I could add: we're all different. When you take a drug, it's a one-pill-fits-all type of solution, but that's not actually how humans work.

We are all unique, so there's an aspect of tailoring and customizing the neuromodulation, the brain modifications we're trying to make, in a way that really tailors the effect and minimizes the side effects.

John Koetsier: Thank you both for your time.

Meredith Perry: Thank you so much for having us.

David Wang: Thanks, John.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and the innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Hacking reality: Apple Vision Pro and security


Can someone hack your reality if you’re wearing an Apple Vision Pro?

Apple Vision Pro is launching and the reviews are amazing, even from VR unbelievers. But we’ve barely begun conversations around its privacy and security implications. Think about it: it has literally dozens of sensors, cameras, mics. It maps your home, bedroom, kitchen, living room, and its video is so good, it looks like the real world.

This is not something you want bad guys controlling.

In this episode of TechFirst, I discuss Apple Vision Pro privacy and security concerns with Synopsys principal security consultant Jamie Boote.

Subscribe to my YouTube channel here

Listen: hacking reality with Apple Vision Pro

Subscribe on your platform of choice:

 

Hacking Apple Vision Pro

(AI-generated overview)

The launch of the Apple Vision Pro has taken the digital world by storm. With its advanced technology and impressive features, it promises to revolutionize the way we experience virtual reality (VR) and augmented reality (AR). However, amidst the excitement, it is crucial to consider the privacy and security implications that come with such a powerful device.

The Apple Vision Pro offers a whole new level of immersion with its array of sensors, cameras, and microphones. It can map your surroundings, making the virtual world seem like a part of your reality. But what happens if someone manages to hack into this technology?

“If somebody could hack that, what could they inject into that experience? The implications range from inducing fear or influencing brand preferences to potentially mapping your entire living space,” says Jamie Boote, principal security consultant at Synopsys.

Every new device brings with it the potential for vulnerabilities. Jamie Boote sheds light on the challenges of securing a device like the Apple Vision Pro: “Anytime software can do something new, there’s a chance it can do something wrong. Adding complex sensors, interfaces, and computing power can create unforeseen security risks.”

The big question: can we trust Apple more than other tech giants when it comes to privacy and security? Of course, Apple is primarily in the business of selling hardware and building trusted computing platforms. While no company is immune to vulnerabilities, Apple’s focus on data protection does set it apart from data-harvesting companies.

Considering the history of software vulnerabilities, it is important to acknowledge that new technology will have its share of security issues. Old vulnerabilities can resurface in new ways. The paradigm shift in hardware and software calls for continuous vigilance and proactive security measures. As virtual reality devices like the Apple Vision Pro become more prevalent, the value and innovation in products increasingly reside in software and AI.

However, this also expands the attack surface. As the hardware and software grow in capability, they become a more attractive target for attackers: the more layers of intelligence and data we add to what we see, the more avenues there are for potential security breaches.

The Apple Vision Pro is an impressive device with remarkable capabilities. While it offers an immersive and unparalleled experience, it is essential to be aware of the potential privacy and security risks that come with it. As with any advanced technology, ongoing efforts in securing these devices will be vital. With Apple’s track record and focus on data protection, the Apple Vision Pro is a step towards a more secure future in the realm of virtual reality and augmented reality.

“The more things that people want to do with software, there will be more ways to abuse that,” says Boote.

By addressing the privacy and security implications, we can ensure that the benefits of the Apple Vision Pro are enjoyed without compromising user safety. As the world continues to embrace VR and AR technology, it becomes crucial for industry leaders to prioritize robust security measures to safeguard user experiences.


Smart buildings 2024: an essential part of a smart grid

smart buildings 2024

How smart will our buildings become in 2024?

It’s early January and I’m in Vegas at CES where everything is smart, everything is AI, everything is cutting edge … apparently. But how smart will our buildings and homes get in 2024? Where are we headed, and where are the gaps?

Our guest for this episode: Dan Hollenkamp, CEO of Toggled, which makes sensors, software, and appliances for smart buildings.

Subscribe to my YouTube channel here

Subscribe to the audio podcast

Find your favorite platform:

 

Smart buildings 2024: AI summary of podcast

Here’s an AI summary of the podcast:

  • How smart will our buildings become in 2024?
  • The goal is to have a building that responds to our needs
  • Confusion between smart devices and remote controllable devices
  • Buildings that are assisting us and not taking away from our tasks
  • Data sharing and decision-making based on data
  • Flexibility in working times and locations
  • Buildings becoming active participants on the grid
  • Using AI to understand building usage and optimize environments

Summary blog: smart buildings in 2024

Note: this is AI-generated

This episode, featuring Toggled CEO Dan Hollenkamp, explores the intriguing world of smart buildings and the significant role of AI in shaping their future.

The episode kicked off with a question: how smart will our buildings become in 2024? Predicting the progress of technology is no small feat, but our guest provided fascinating insights based on current technological advancements.

The dream of creating smart buildings is no longer distant; it's an intention, a goal, and a process already underway. The ultimate aim is to establish buildings that respond to our needs without adding unnecessary complexity: structures that are intuitive, adaptable, and capable of supporting a plethora of activities.

As the discussion carried on, a crucial differentiation was made between smart devices and remote-controllable devices. This segment emphasized the importance of going beyond developing technology that enables control to developing technology that predicts and anticipates our needs. This would pave the way for genuinely enhancing living and working spaces without introducing added work or confusion.

The benefits of smart buildings extend not only to structural design but also to the function and feel of the space. The episode painted a picture of a future where buildings are equipped to support and even enhance our daily routines. With smart buildings, you won't have to fuss with adjusting the ambiance whenever you walk into a room; instead, the room will already be tailored to your preferences, ensuring a comfortable and productive environment.

The conversation in the episode highlighted the undeniable importance of data. In an era where data is king, building systems need to not just collect but also share and analyze this data to make informed choices. By leveraging the power of data, intelligent and efficient building management becomes a reality.

A significant chunk of the discussion revolved around the changing nature of workspaces due to remote working. The shift towards remote work has ushered in an era of flexibility in working times and locations, which, in turn, affects our buildings' energy usage patterns. Smart buildings must adapt to these changing patterns to ensure an optimum level of comfort and productivity.

As the future unfolds, buildings are expected to evolve from mere energy consumers to active participants on the grid. With the increasing shift towards renewable energy and microgeneration, buildings will have a crucial role in balancing energy demand and supply, actively managing energy consumption, and aiding the integration of various energy sources.

Last but certainly not least, the podcast highlighted the immense potential of AI in smart buildings. Leveraging AI, buildings will be able to analyze user behaviors, adapt to personal preferences, and make automated adjustments to the environment to create optimal conditions. In essence, AI will help buildings become smarter, more adaptable, and energy-efficient.
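
As a concrete illustration of the kind of logic involved, here is a minimal sketch in Python: a controller that picks a heating setpoint from occupancy and a grid price signal. The sensor fields, setpoints, and price threshold are hypothetical illustrations, not Toggled's actual product logic.

from dataclasses import dataclass

@dataclass
class Reading:
    occupied: bool      # from an occupancy sensor
    grid_price: float   # $/kWh from a (hypothetical) utility price signal
    indoor_temp: float  # degrees C, for logging/telemetry

COMFORT_SETPOINT = 21.0   # occupied target, degrees C
SETBACK_SETPOINT = 17.0   # unoccupied target
PRICE_THRESHOLD = 0.30    # above this, shed load during peak pricing

def choose_setpoint(r: Reading) -> float:
    """Pick a heating setpoint from occupancy and grid conditions."""
    if not r.occupied:
        return SETBACK_SETPOINT
    if r.grid_price > PRICE_THRESHOLD:
        # The building acts as a grid participant: trade a little
        # comfort for load reduction during expensive peak periods.
        return COMFORT_SETPOINT - 1.5
    return COMFORT_SETPOINT

print(choose_setpoint(Reading(occupied=True, grid_price=0.45, indoor_temp=20.2)))
# prints 19.5: occupied, but prices are high, so the target drops slightly

An AI layer would replace the fixed thresholds with values learned from usage patterns, but the read-data, decide, actuate loop stays the same.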

To sum up, the podcast offered a captivating glimpse of the future of smart buildings and AI's transformative potential. Through the integration of cutting-edge technology, AI, and data, our everyday living and working spaces have the potential to become intelligent, adaptive, and energy-efficient.


VR in 2024: inflection point with Apple Vision Pro?

VR 2024 inflection point

Where will VR go in 2024? VR is in an interesting space right now: doldrums in the consumer market, but lots of cool new tech in Meta Quest Pro and Quest 3, and some app store download data indicating Quest 3 is doing unexpectedly well.

And of course we have Apple Vision Pro coming out SOON … which I hope to buy in Q1. But it will likely be low-volume, as will other very high-end headsets intended for professional and business use, like the Varjo XR-4 (which I recently demo'd in Las Vegas at CES).

So what can we expect in 2024 for VR? In this TechFirst, we chat with Rolf Illenberger, the founder and managing director of VRdirect.

Subscribe to my YouTube channel here

Subscribe to the audio podcast

 

AI summary of this episode on VR in 2024

John Koetsier and Rolf Illenberger discuss:

 

  • the future of VR in 2024, focusing on its potential in the enterprise space
  • the current state of VR technology and its use cases in areas such as training, safety, internal communications, and virtual tours
  • challenges and opportunities for Apple’s upcoming Vision Pro headset
  • the importance of creating immersive experiences and intuitive user interfaces
  • Rolf’s prediction that 2024 will be a pivotal year for VR with increased adoption and integration into business operations

VR in 2024: a GPT-4 blog post based on this episode

As we enter 2024, the world of virtual reality (VR) is at an interesting crossroad. In this episode, we dive into the current state of VR and discuss the future and potential inflection point for this technology in 2024.

The Consumer Landscape
In the consumer space, VR has seen ups and downs. However, there are some exciting developments on the horizon. The Meta Quest Pro and the unexpected strength of Quest 3 app downloads indicate that VR is still capturing the interest of consumers. Additionally, the much-anticipated Apple Vision Pro headset, set to release in Q1, has the potential to redefine the VR landscape with its innovative features.

The Enterprise Adoption
While the consumer market might still need a few more iterations to gain widespread success, the enterprise adoption of VR is on the rise. VR is proving to be a valuable tool for various industries, particularly in training and skill advancement. Surgeon training, for example, is being revolutionized by immersive VR experiences, allowing doctors to practice procedures multiple times before performing them in real-life situations. Other applications include safety training, internal communications, virtual showrooms, and simulations of real-life processes.

The Impact of Apple Vision Pro
One of the most highly anticipated VR devices is the Apple Vision Pro. While not initially targeted at the mass market, the Apple Vision Pro aims to make a significant impact on both the enterprise and consumer fronts. With intuitive user interfaces, facial recognition, and the ability to integrate with existing Apple content, the device offers a unique VR experience. However, its success may heavily depend on the development of a robust content ecosystem.

Challenges and Considerations
The successful adoption of VR in enterprises relies on overcoming several challenges. User experience is key, and companies need to carefully introduce VR to their workforce to avoid overwhelming initial experiences. Additionally, the ever-evolving hardware landscape poses a challenge for IT departments, as they navigate the numerous options available and ensure compatibility and security.

The Future of VR
Looking ahead, VR holds tremendous potential for immersive entertainment and unique storytelling experiences. The combination of AI and VR will play a crucial role in the development of captivating content and enhancing user interactions. As the technology continues to evolve, we can expect VR to become an integral part of many industries, creating new opportunities and transforming how we perceive and engage with the digital world.

Conclusion
As we move into 2024, VR stands at a crucial inflection point. While the consumer market continues to evolve, the enterprise adoption of VR is already making significant strides. The upcoming release of devices like the Meta Quest Pro and Apple Vision Pro will further shape the VR landscape. As organizations recognize the value of VR in enhancing training, communication, and productivity, we can expect VR to become an essential component of their digital strategies. With the right approach, VR has the potential to revolutionize industries and provide immersive experiences that were once only imagined in science fiction.


Reinventing speakers: replacing 100 year old tech with MEMS chips

reinventing speakers with MEMS chips

Can a new innovation upend a $50 billion industry that uses 100-year-old tech … and that is probably in your ears multiple times a day?

Pretty much all of us use earbuds, often wireless ones, and they all rely on old tech … literally hundred-year-old tech from the 1920s with roots in the 1800s. There's a new option based on silicon: a microchip means of creating sound. And it could be better while also being cheaper. In this episode of TechFirst, we chat with Mike Housholder, a VP at xMEMS Labs.

Here’s our chat:

(Subscribe to my YouTube channel here)

 

Subscribe to the TechFirst podcast

 

MEMS and speakers: replacing 100-year-old tech

(Note: this is an AI-generated summary of this podcast. As such, it will take guests' statements as undisputed fact. It should not be taken as having been written by me, John Koetsier.)

The world of audio has come a long way since the invention of the coil and magnet speaker in the 1800s. This century-old technology has served as the foundation for sound reproduction in nearly all our devices, from headphones to speakers. However, it’s time for a change. Enter xMEMS, a company at the forefront of innovation that is set to disrupt the $50 billion speaker industry with their revolutionary solid-state semiconductor alternative. In this blog post, we’ll explore the technology behind xMEMS, its benefits, its current market presence, and the company’s ambitious plans for the future.

The Problem with Legacy Speakers
For decades, we’ve relied on coil and magnet speakers for our audio needs. While they have undoubtedly improved over time, they still suffer from limitations. These mechanical devices are prone to wear and tear, lack uniformity and consistency across speakers, and can be easily damaged by external factors such as water or drops. Moreover, they struggle to provide the level of audio detail, separation, and precision that consumers desire in today’s fast-paced digital world.

Introducing xMEMS
xMEMS is here to change the game. They have developed a solid-state semiconductor alternative to traditional coil and magnet speakers. By leveraging the advantages of semiconductor technology, xMEMS can produce high-quality sound with precision and reliability. Unlike their mechanical counterparts, xMEMS speakers are more robust, resistant to damage, and have a longer lifespan. They also eliminate the need for magnets, reducing weight and electromagnetic interference in wireless devices.

The Benefits for Manufacturers and Consumers
The shift to xMEMS speakers offers several advantages for both manufacturers and consumers. From a manufacturing standpoint, solid-state components are easier to test, scale, and integrate into products. They offer greater uniformity and consistency in loudness and phase alignment, ensuring a well-balanced audio experience. Additionally, these speakers are more reliable, surviving drops, water, and dust better than their mechanical counterparts.

For consumers, xMEMS speakers deliver superior audio quality. With a faster mechanical response, they can reproduce complex and dynamic sounds with exceptional detail and separation. Imagine hearing every instrument in a song or distinguishing between background and foreground vocals with utmost clarity. Furthermore, xMEMS speakers boast a flat phase response, which means that audio is faithfully reproduced without the typical phase disturbances found in conventional speakers. This results in a more accurate sound representation, akin to that experienced at live music performances.
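
To make the phase claim concrete, here is a small Python sketch (using numpy) of the phase response of an idealized second-order resonant driver, the kind of 180-degree phase rotation through resonance described in the episode. The resonant frequency and Q are illustrative values, not measurements of any real speaker.

import numpy as np

# Idealized second-order (coil-and-magnet-style) driver resonance.
f0 = 1000.0     # resonant frequency, Hz (in the 500 Hz - 2 kHz region)
Q = 1.0         # quality factor
w0 = 2 * np.pi * f0

f = np.array([100.0, 500.0, 1000.0, 2000.0, 10000.0])   # test frequencies, Hz
s = 1j * 2 * np.pi * f
H = w0**2 / (s**2 + (w0 / Q) * s + w0**2)

for freq, h in zip(f, H):
    print(f"{freq:7.0f} Hz   phase = {np.degrees(np.angle(h)):7.1f} deg")
# The phase swings from ~0 deg well below resonance to ~-180 deg well
# above it: the rotation through the midrange that a flat-phase driver avoids.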

Current Market Presence
xMEMS has already made waves in the audio industry, with its speakers featured in premium-grade in-ear monitors and hearing aids. However, the most significant market breakthrough is on the horizon: in November, Creative Technology will launch the first true wireless stereo (TWS) earbuds built with xMEMS speakers. This consumer-friendly product will offer the benefits of xMEMS technology at an affordable price point, making it accessible to a wider audience.

The Path to Market Dominance
xMEMS has ambitious plans to dominate the speaker market. While their initial focus has been on personal audio devices, such as earbuds and headphones, their ultimate goal is to reinvent loudspeakers in every form. From smartphones and smart speakers to cars and home theater systems, xMEMS aims to revolutionize sound reproduction across the board. While the physics challenges are significant, the company is already working on pioneering a transduction mechanism called ultrasonic amplitude modulation. By moving outside the conventional auditory frequency spectrum, xMEMS hopes to bring full bandwidth audio to even the largest speaker systems.
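
To see what ultrasonic amplitude modulation means in signal terms, here is a minimal numpy sketch: an audible tone is amplitude-modulated onto an ultrasonic carrier, then recovered by simple envelope demodulation (rectify, then low-pass). This is a textbook AM illustration, not xMEMS's actual Cypress implementation; the sample rate, carrier frequency, and filter length are arbitrary choices.

import numpy as np

fs = 192_000                      # sample rate high enough for ultrasound
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

audio = np.sin(2 * np.pi * 440 * t)        # audible 440 Hz tone
carrier = np.sin(2 * np.pi * 40_000 * t)   # 40 kHz ultrasonic carrier

# Amplitude modulation: the inaudible carrier's envelope tracks the audio.
modulated = (1 + 0.8 * audio) * carrier

# Envelope demodulation: rectify, then low-pass (64-tap moving average,
# which smooths away the 40 kHz carrier but passes the 440 Hz envelope).
rectified = np.abs(modulated)
recovered = np.convolve(rectified, np.ones(64) / 64, mode="same")
recovered -= recovered.mean()              # strip the DC offset

# The recovered envelope tracks the original audio closely.
print(f"correlation: {np.corrcoef(audio, recovered)[0, 1]:.2f}")

In xMEMS's case the "carrier" is a train of ultrasonic air pulses rather than a sine wave, but the modulate-then-demodulate principle is the same.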

Patents and Future Competition
xMEMS understands the importance of protecting their technology and has secured over 110 patents to safeguard their innovations. While they expect competition to arise in this growing market, xMEMS’ strong intellectual property portfolio provides them with the necessary freedom of operation. Although they may not be the sole source for this technology, xMEMS is currently leading the market and anticipates significant growth as they expand into various segments.

Conclusion
With their solid-state semiconductor alternative, xMEMS is shaking up the speaker industry. By delivering high-quality sound with precision, reliability, and innovation, xMEMS speakers offer significant advantages over conventional coil and magnet speakers. The company’s current market presence, including collaborations with renowned brands, sets the stage for a full-scale disruption in the industry. As they continue to push boundaries with their ultrasonic modulation demodulation scheme, xMEMS is well on its way to revolutionizing sound reproduction in everyday devices. Get ready to experience audio like never before!

Full transcript: reinventing the $50 billion speaker market

Note: this is an AI-generated transcript.

John Koetsier: Can a new innovation upend a $50 billion industry that still uses hundred-year-old tech, and that is probably in your ears multiple times a day? Hello and welcome to TechFirst. My name is John Koetsier. Pretty much all of us use earbuds, maybe wireless, maybe wired. They all rely on pretty old tech … hundred-year-old tech. Now there's a new option that's based on, of course, silicon. It's a microchip: a means of creating sound that is very different. It could be better while also being cheaper. Here to chat, and to give a big product announcement at some point, is Mike Housholder, a VP at xMEMS.

Welcome, Mike.

Mike Housholder: Thanks for having me, John.

John Koetsier: Hey, super pumped to have you. Let's start at the beginning. One of the things I saw in the pitch, when you said, "Hey, let's chat about this," was that headphones rely on hundred-year-old tech. What is this hundred-year-old tech?

Mike Housholder: Yeah it’s the speaker that we’ve all been using are pretty much our entire lives.

The coil-and-magnet speaker has been our only means of experiencing sound our entire lives. It was invented way back in the 1800s, perfected in the 1920s, and just slowly improved ever since. But it's a mechanical structure. It's got, first of all, a coil and magnet for actuation.

You drive a current through that coil. It moves the magnet, which then pushes, through various layers of suspension, a paper or plastic diaphragm that moves air and generates sound. Fundamentally unchanged for a hundred years.

John Koetsier: Wow.

Mike Housholder: So what we're replacing it with is a solid-state semiconductor alternative.

Now, instead of a complex mechanical system, we can produce very sophisticated, higher-quality sound with a chip. This is a speaker on a chip that can produce very sophisticated audio. You're getting all of the benefits of semiconductor technology paired with sound generation.

John Koetsier: So let’s unpack that a little bit and talk about what those benefits are.

And it’s really interesting because totally different industry, but about a year ago I interviewed somebody and they’re making a solid state silicon based fuse box for your home. And nobody thought it was possible. Nobody thought it could be done. And they did that. And you’re talking about something that is a totally different space, but is somewhat related.

It’s solid state and using silicon chips to recreate this. Why is that a good idea? Why is that important to do? 

Mike Housholder: If you look at the consumer electronics industry, there's just a natural gravitational pull to solid-state components. They're scalable, more reliable, generally faster, better performing. And really, if we look at an average consumer electronics product today, be it a phone, a TV, or whatnot, there are very few non-solid-state components remaining in those devices. The one remaining one, for the most part, is the speaker.

John Koetsier: Coming after the last survivor.

Mike Housholder: Exactly. There are so many examples in the industry of a traditional mechanical device being replaced by a solid-state semiconductor variant; I can walk through a couple. If we go to the opposite end of the audio spectrum, the microphone: most people don't know that the microphone is mostly semiconductor MEMS technology today. The mechanical microphones still exist, but the majority of the unit volumes are MEMS semiconductors. We all have phones and PCs today, and the spinning mechanical hard drive, for the most part, has been replaced by the solid-state drive.

Why? It's more reliable. It's faster: you get your data faster than with a traditional spinning mechanical variant. You brought up the fuse box. Look at automotive now: everyone's pushing towards solid-state batteries. Once a solid-state variant exists, it will, over time, take the majority of unit volumes for that function.

John Koetsier: Okay so that’s a little bit about how it’s constructed, how it’s built and this overarching flow of… The world of technology towards solid state. Why is this a good thing for speakers? Why is this a good thing for headphones? Here’s some headphones that you sent me that I’ve been testing and trying out.

Why is it a good thing to apply here? 

Mike Housholder: Yeah, good question. I'll answer that from two angles: benefits to the manufacturer of the product, and then benefits to the consumer. The manufacturer wants a solid-state component because of all the quality and reliability advantages versus a mechanical variant: easier to test, easier to scale. They're more uniform than a traditional coil-based speaker. What I mean by uniformity is that each speaker's loudness level is equivalent, and their phase alignment is more consistent. That's what you want out of audio: when you're matching a left and a right, you don't want one louder or quieter than the other side. You want them perfectly phase-aligned so you don't muddy the audio response. So semiconductors are just going to be more uniform and consistent. They're going to be more reliable, and they're going to survive a lot longer: drops, water, moisture, and dust, better than a mechanical variant.

John Koetsier: it’s one of my original AirPods I dropped and the audio was never the same after that. 

Mike Housholder: Yep, more robust to drops. One of the things we get asked by a lot of our customers: could it survive a washer-dryer cycle? Hey, who hasn't left their AirPods in their jeans and run them through a washing cycle?

The answer is yes: you can run our speakers through a washing machine and dryer, and the speaker will work. We can't guarantee the rest of the electronics will work, but we can say confidently the speaker will survive. We also remove the magnets, so we're taking out weight, and we're taking out a source of electromagnetic interference with the wireless antennas in a wireless earbud. Those are the benefits to a manufacturer: they like solid state. But the consumer may care a little less about whether it's easier to test or this or that. What the consumer really cares about is audio quality.

Are they going to get better music quality by putting a solid-state speaker in their ears versus a conventional speaker? There are three unique aspects to our sound signature that a conventional coil-based speaker doesn't do and can't do. The first characteristic is really in the mechanical response of the speaker. The speaker actually moves up and down, and its mechanical response is about 150 times faster than a legacy coil variant. What does that mean to the consumer? Fast actuation means you're going to pump air and then recover, ready for the next audio stimulus, a lot sooner. So as the music gets really dynamic, really complex, lots of instrumentals, multiple voices coming in, you really care about detail and separation. You want to hear one instrument clearly delineated from the next. You want to hear the background vocal as well as the front vocal, and you want to believe they are all separate and unique. The slower the speaker gets (speaking of legacy speakers), some of that detail starts to muddy together and you lose that precision, that sense of separation. A faster speaker will present that detail in all its glory. So the speed gives you that detail and separation. That's one unique aspect.

John Koetsier: It’s almost like having a higher resolution display in other words. 

Mike Housholder: Exactly. So it’s the equivalent of, HD video. Now you’re really stepping into, there is HD audio.

There’s high res audio out there. Can the speaker truly resolve that content? Are you taking HD video and running it over a CRT monitor or are you pushing that HD video over a high pixel density? HD screen, same equivalent, same analogy on the audio side, the content may be HD, but can the speaker truly resolve that content and present all of that additional detail?

That’s what you can do with this speaker. So the speed is number one for the detail and separation. The second aspect of the sound signature is really flat phase response. Most it’s really not known and really not well known, but conventional speakers have a phase disturbance, a phase shift in a 500 to 2 kilohertz region, which is a really sensitive region of the human ear.

But I think the human brain has adapted to it, because it's the only way we've ever experienced sound; we just don't hear that phase disturbance.

John Koetsier: What exactly is a phase disturbance in sound? 

Mike Housholder: So it’s a, it, the shift in phase will basically alter the original recording of the music.

But again, I think the human brain has been tuned to just ignore it. As you saw from that Brian Lucey video (he's a mixing and mastering artist in the music industry), his argument is that these conventional speakers have a 180-degree phase shift at the resonant frequency, which is typically in the 500 hertz to 2 kilohertz range. Our phase is flat out to 10 kHz. There is no phase disturbance; there is no shift in phase. And for a professional ear like Brian's, it was immediately apparent. What he is mixing and mastering is how he wants the consumer to hear the audio, and he typically doesn't find a consumer audio product that renders it as cleanly as he wants you to hear it.

And with the pure phase response of our speaker, he finally heard a speaker that presented the audio the way he wanted it presented to the consumer.

John Koetsier: Interesting. So that would be more aligned with how you would hear live music, for instance. 

Mike Housholder: Correct. And while we're on the phase discussion, this gets back to the uniformity and consistency of the semiconductor process: chip to chip, our phase consistency is there. This really matters for spatial audio. As you're moving audio from left to right, up and down, you want perfectly phase-matched speakers to render that spatial content accurately and not muddy the spatial response. With perfectly matched, perfectly uniform left and right speakers, you're getting crisper, clearer spatial audio.

So the phase story has two aspects: the phase shift and the phase consistency. The third characteristic of our sound signature is really in the materials. Most speakers today use a diaphragm material that's paper or plastic; there are certainly more exotic variants as you get into really expensive speakers, but the majority of consumer, reasonably priced speakers use a paper or plastic diaphragm. What you want out of a speaker diaphragm is something that's stiff and rigid but lightweight, because you want the material to be stiff enough that when you drive it really hard, it doesn't go nonlinear.

You want the whole diaphragm to stay consistent and not muddy the audio. But with paper or plastic, because they're very pliant materials, when you drive the speaker really hard, part of the diaphragm goes up and part goes down. This is a concept called speaker breakup, and it muddies the audio, most noticeably in the mids and highs, not so much in the lows. Being a monolithic semiconductor speaker, our speaker diaphragm is silicon, and silicon is 95 times stiffer than paper or plastic. So you have a stiff, rigid material pushing up and down, generating sound. It does not go nonlinear; it does not muddy the sound. You're getting more pristine mids and highs than you would from a conventional speaker.

John Koetsier: Interesting. Amazing. Very cool. One of the things the audio engineer you shared talked about was pistonic pressure, which he said you typically get with many different headphone or earbud solutions. What is that? Why don't you have it?

Mike Housholder: Sure. In a conventional speaker, that paper or plastic diaphragm is typically isolated, or sealed, front to back: the front of the speaker and the rear of the speaker are not exposed to each other. So in the front chamber of that speaker, the part that's connected to your ear, you have the pistonic motion of the diaphragm, that coil and magnet pushing up and down. It's pushing air to generate sound, but in that sealed chamber it can also create some pressure buildup in the ear.

That's why some earbuds have a vent in the front chamber, to release some of that pistonic pressure. But even with that vent, it can still lead to fatigue over time. If you watch a movie on a plane for two hours, or have your earbuds in for multiple hours, it will lead to fatigue. I think everyone's felt it. What's unique about our speakers: again, we're dealing with the micron-level precision of a semiconductor process, so we can do what conventional speakers can't. We can integrate micron-level slits in our speaker diaphragm, so that as that pistonic pressure builds, little vents open up and any pressure that's building leaks out the back.

What we've observed in our own testing (our testers are working with our speakers every day; we've got them in our ears) is that we can keep these earbuds in our ears longer without that sense of fatigue.

John Koetsier: Sounds super cool. What's the path to market dominance, and is it related to the announcement you're going to make?

Mike Housholder: We’re we’re at the the leading edge of that right now. Yeah, we’ve been in the market with. Product since 2020, we’ve been in, we’ve completed our production qualification with our fab and our manufacturing partners since 2021 2022. so now those production speaker chips.

Are now in the process of being integrated into consumer products, and those consumer products are now reaching production. There are a few products out in the market today using our speakers more on kind of the niche market side. We’ve got some hi fi audio earbuds, probably thousand dollar products.

Believed enough in the sound quality of our speaker to say that this is different and it’s worthy of 1, 000 price point. Those are in the market now, so you can buy in ear monitors for hi fi audio with our speakers in them. There are some hearing assistance products on the market, hearing aids with our technology, but in November, there will be our first TWS customer true wireless stereo earbuds.

Reaching the market with our speakers. So we’re at that kind of that cusp of, getting into high volume, mainstream consumer earbuds that is happening in very short order. 

John Koetsier: So if somebody wants to check it out, wants to try it, maybe they want to buy a thousand-dollar pair of headphones, or maybe they just want something that's a hundred bucks, fifty bucks, whatever. Are there any brands that have this in the market right now that they can check out?

Mike Housholder: Yeah, absolutely. On the hi-fi audio side, there are two premier-grade in-ear monitors. One, from a U.S. company called Singularity Audio, is a very high-end in-ear monitor at about a $1,500 price point. And there's an Asian in-ear monitor company called Ceramic that also has an in-ear monitor with our MEMS speaker in it. Those are premium products at high price points, really for the audiophile who invests in audio equipment. But our interest as a semiconductor company is high-volume business and reaching the mainstream consumer.

There’s a forthcoming product announcement for middle of November. From Creative Technology, a well known brand in both consumer audio and PC and gaming audio. They are releasing a true wireless stereo earbud with our MEMS speakers. They will be the first MEMS speaker TWS. Brand on the market and that is coming to market at a very consumer friendly price point.

John Koetsier: Very cool. Interesting. And have you patented this technology? Is it possible for others to do the same? Or, if this takes off and everybody starts demanding it, and Apple wants it in their AirPods and everything else, do they have to get it from you?

Mike Housholder: Yeah. We've been very aggressive in patenting all of our innovations: we have well over 110 patents granted. We're a five-year-old company and we already have 110 patents granted covering all aspects of our technology, our process, our manufacturing methods. So we've got good freedom of operation for ourselves and for our customers. With that said, there are different ways to design the product, and this is a large enough market that we would expect to have competition. I wouldn't say we're going to be the only source in the world for this; that's not going to be true. But we're certainly ahead of the market, and we've got sufficient protection to have freedom of operation.

John Koetsier: , you’re obviously looking at earbuds and headphones, , first, but there are speakers all over the world. There are speakers all over the place. And as we, , add intelligence to just about every product we have, , sometimes the ability to speak to it is handy. And certainly with Alexa devices and Hey Siri at risk of getting Siri upside here, or other things like that.

, there, there are speakers all over the world. , do you have grand visions of being. In everything. 

Mike Housholder: Yes. The North Star of the company was never just to reinvent personal audio speakers; the North Star of the company is to reinvent loudspeakers. That's the speaker in your phone, the speaker in your watch, the speaker in your TV, your smart speaker, your car, your home entertainment system. We want to touch every nook and cranny of the speaker market. The easier lift for us, the fastest path to market, is getting close to the ear. Starting in personal audio was that easier lift: get to market, get the revenue base going, and continue to fund R&D for the bigger lift, which is producing full-bandwidth audio in free air and free space from a thin little semiconductor chip. You can imagine the physics challenges that have to be overcome to achieve that. So we're taking a logical, staged approach, opening up different corners of the market as the technology matures.

John Koetsier: That looks interesting, and I know it's not your initial focus, but if you look at a Sonos or some of the other big stereo brands, I'm assuming you can scale up the physical size, not the chip per se, but the bit of silicon that's doing the resonating, and make it work in a larger environment.

Mike Housholder: This is where the fundamentals of semiconductors, instead of being a benefit as we've talked about, actually become more of an inhibitor. The logical approach would be: if I've got a six-inch midrange or a six-inch woofer, okay, I'll replace that with a six-inch semiconductor speaker. But pretty soon you've consumed an entire semiconductor wafer, and financially, that just won't make any sense. Semiconductors are good when they're small. So that presents a physics problem: how do you produce sophisticated sound, in free air, in really tiny packages? If you look at the side profile of a conventional, say, tower speaker in a home entertainment system, that speaker has some depth to it. Why does it have depth? It needs displacement to push air. So fundamentally, a semiconductor will always be at a disadvantage from a physics perspective. You don't make semiconductors thicker; they're always going to be really thin, so you will never have the displacement of a big free-air speaker. This gets to a forthcoming product announcement we're going to be making in November, around our Cypress technology, in which we are reinventing the transduction mechanism.

John Koetsier: I would like to know what the transduction mechanism is. 

Mike Housholder: Conventional coil-based speakers, and even the speakers from xMEMS that exist today, work fundamentally on a push-air transduction mechanism. You have an actuator that pushes a diaphragm, moves air, and generates sound. But displacement is fundamental to your ability to generate sound at distance: more distance, more displacement. We're never going to have that displacement advantage, so we need to find a different way to generate sound. And in a hundred years, there hasn't been a product that generated sound in a different way and could produce equal or better sound. So we are moving to a principle of ultrasonic amplitude modulation. We are using all the advantages of MEMS semiconductors (speed, uniformity, consistency) to build an ultrasonic modulation/demodulation scheme: move outside the audible frequency spectrum, operate in ultrasonic regions to modulate and demodulate, and then extract. You basically modulate a series of air pulses to the original audio signal, then you demodulate the ultrasound and extract the audio.

John Koetsier: So it sounds very sci-fi.

Mike Housholder: It sounds sci-fi, and actually, sound from ultrasound has been in research mode since the 1960s. No one had been able to achieve performance significant enough to commercialize it in a broad way until now. What this ultrasonic modulation/demodulation scheme gives us that our current-generation speakers don't is additional low-frequency output. You've already listened to our first-generation speakers: that's full-bandwidth audio, from the lowest frequencies to the highest. But if you want to move into free-air applications, or leaky applications, you need to displace more energy in the low frequencies. By moving to ultrasonic modulation/demodulation, we are putting ultrasonic air pulses into the audio envelope. And low-frequency energy has a wider wavelength, so we can put more air pulses into that wider-wavelength low-frequency energy. More air pulses generate more air pressure, which generates deeper bass.

John Koetsier: It sounds amazing. But I'm totally missing how the ultrasound (which I cannot hear, because it's above the frequency my ears can pick up … especially my ears, I'm not 22 anymore) gets translated, in my open-air environment, maybe my home theater, whatever, into something that I can hear.

Mike Housholder: Sure. So we’re getting an input audio signal from the sound source, whether it’s your phone, your laptop, or a receiver at home. We get the input audio signal and really our controller and driver will then basically implement. A ultrasonic air pulse scheme to map each ultrasonic air pulse to the frequency of the audio.

. . So we’re gonna use that ultrasonic modulator to generate air pulses, but then we’re going to demodulate that ultrasound. To then extract that, that audio signal. So it’s just another way to generate air pressure to create sound, but outside of the audible spectrum. 

John Koetsier: Are you almost virtualizing the speaker? Is that a way you could describe this? Is the actual sound production almost happening outside of the speaker assembly?

Mike Housholder: No, no. There's definitely control and amplification in a separate controller chip, and there's still our MEMS semiconductor: all the MEMS structures are generating the ultrasonic pulses, demodulating through valves, and letting the audio flow through. So there is still fundamentally a speaker with moving parts in it, but it's now a semiconductor, not a mechanical device.

John Koetsier: Fascinating. I look forward to seeing it. Thank you so much for this time, Mike.

Mike Housholder: Thank you, John.

 


Can immersive storytelling via VR change history? Maybe 1 mind at a time …

immersive VR

In this episode of TechFirst, I chat with Emmy award-winning XR director Michaela Ternasky-Holland about whether immersive storytelling via virtual reality can change the course of history.

Documentaries already have.

The Day After aired in November 1983 and is credited with influencing U.S. president Ronald Reagan's pursuit and signing of a nuclear arms treaty with Soviet leader Mikhail Gorbachev.

Can it happen again?

Using her VR documentary project, On the Morning You Wake, as a case study, Michaela explains how the deeply immersive nature of VR can change the audience's perception of a global threat: nuclear weapons. She compares the engagement and impact of VR experiences to traditional 2D experiences, highlighting how the narrative and the audience's sense of agency play key roles in creating quality engagement. The discussion further explores the future of immersive storytelling, addressing its potential and challenges in the technology field.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast

 

Episode synopsis: the power of immersive storytelling

(Note: this is AI-generated)

Immersive storytelling is revolutionizing the way we consume narratives, blurring the lines between fiction and reality. In this blog post, we explore the fascinating world of immersive storytelling through an interview with Michaela Ternasky-Holland, an Emmy award-winning XR director who specializes in creating experiences in VR. We dive deep into her acclaimed project, “On the Morning You Wake to the End of the World,” a three-part VR documentary about the threat of nuclear weapons. Join us as we uncover the power of immersive storytelling and its potential impact on audiences.

The Project and its Inspiration
The interview begins with Michaela providing insights into the genesis of her project, explaining how it originated from Princeton University’s Science and Global Security program. The goal was to create a world-changing documentary that could shed light on the effects of nuclear weapons. Michaela reveals how immersive technology, especially virtual reality, was essential in making the audience feel a sense of intimacy with the subject matter. The project aimed to activate the audience and take them on an emotional journey rather than simply providing facts and information.

Overcoming Accessibility Challenges
Accessibility is a crucial factor in the success of any VR project. Michaela discusses the challenges of making VR experiences comfortable and approachable for users. She mentions the need for carefully managing the logistics of VR spaces, ensuring the audience’s comfort, and minimizing waiting times. Training docents or volunteers to properly communicate with users and create a welcoming atmosphere was also essential. Michaela emphasizes the importance of ensuring that users feel safe and comfortable not only with the technology but also with discussing intense topics like nuclear weapons.

The Impact of Immersive Storytelling
One of the core topics of discussion is the impact of immersive storytelling. Michaela shares fascinating insights into her research, revealing that users who experienced “On the Morning You Wake” through VR were more likely to engage with the topic, explore further information, and feel empowered to take action. Comparing the effectiveness of the VR experience with a 2D film version, Michaela explains how VR evoked more positive emotions, instilled hope, and made the audience feel like they could make a difference. The immersive nature of VR created a stronger emotional connection and increased engagement.

The Future of Immersive Storytelling
As the interview progresses, Michaela and John Koetsier, the interviewer, speculate on the future of immersive storytelling. They ponder the challenges of mass distribution and accessibility, acknowledging that VR technology is still evolving and perfecting its form. Michaela highlights the importance of integrating VR into people’s lives in a productive way, similar to how smartphones became integral to our daily routines. They discuss upcoming technologies such as smart glasses and immersive projections, which may shape the future of storytelling.

The Versatility of Immersive Storytelling
In the final section, Michaela and John explore the diverse possibilities of immersive storytelling. They agree that the ultimate expression of storytelling depends on the purpose of the project and the intended audience. Michaela draws parallels between interactive games and linear experiences, highlighting how each medium caters to different emotions and objectives. She emphasizes that there is no single perfect apex for storytelling, but rather a wide range of possibilities that can be tailored to create specific impacts.

Conclusion
Immersive storytelling is an ever-evolving field that holds immense potential for engaging audiences on a deeper level. Through our interview with Michaela Ternasky-Holland, we gained valuable insights into the power of immersive storytelling and its ability to evoke emotions, drive engagement, and effect change. As technology continues to advance and accessibility improves, we can look forward to a future where immersive storytelling becomes a mainstream medium, enriching our lives with new perspectives and unforgettable experiences.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Meet the man who made the first cell phone call ever in the wild

The first cell phone call all started with a stolen car.

In 1983, Chicago resident David Meilahn’s car was stolen. He bought a new one, a Mercedes-Benz 380SL 2-seater. But then he needed to replace his old radio-phone … and the sales rep told him there was something new: a cellular phone. He was one of the first few to be selected, then won a race to place the very first cell phone call by a customer, which ended up being a call from Soldier Field in Chicago, IL, to Alexander Graham Bell’s granddaughter in Germany.

This is his story, along with the story of Stuart Tartarone, the AT&T engineer who helped build that system and still works for the company to this day.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast

 

AI summary: the first cell phone call

The script titled “First cell call” is a conversation featuring the first commercial cell phone user, David Meilahn, and an engineer who helped develop the technology, Stuart Tartarone. The conversation is hosted by John Koetsier, and captures the historical journey of cellular technology from its birth in 1983 to modern times.

They discuss early mobile technology like the radio telephone, the development of cellular technology, and David’s experience as the first person to ever make a commercial cellular phone call.

AI transcript: chatting with the man who made the first cell phone call as a customer, and the engineer who made it happen

The year was 1983. President Ronald Reagan proposed the Star Wars initiative. Mario Bros. was just released in arcades. Rent was $330 a month. Ford Mustangs cost $6,500. A gallon of gas was 96 cents. And you could buy a brand new Timex Sinclair color computer for just $179.99. It was fall in Chicago, October 13, and the setting was Soldier Field, the stadium the NFL’s Chicago Bears had been playing in since 1971. Fourteen cars were lined up for a very unusual race: the race to make the first commercial cell phone call in the history of the world.

One man won that race and he placed a call from Soldier Field in Chicago to the granddaughter of Alexander Graham Bell in Germany. 

This is his story. Along with the story of the engineer who helped make it all happen.

John Koetsier: What would it be like to be the very first person in the entire world to use a new technology that would end up utterly revolutionizing everything? Hello and welcome to TechFirst. My name is John Koetsier.

Today is a super special day for TechFirst.  We’re literally going to speak to the first person who ever made a cellular phone call.

The very first cell customer in the world. It was a car phone, of course, and he still has that phone, by the way. We’re also going to chat with an engineer who helped build and commercialize that very first cell phone service. Joining us are David Meilahn, the very first cellular customer, and Stuart Tartarone, who grew up taking phones apart and eventually helped build and launch AT&T’s cell phone service, the first in the world.

Welcome David. Welcome Stuart. Thank you. Thank you. Good to be here. Good to be here as well. Thank you guys so much.

I got to say, the average age of TechFirst guests just rose a little bit.

Stuart Tartarone: So I’m glad. We don’t want to talk about that, but I guess it must have.

John Koetsier: My friend, you were building cellular networks in the 1970s, so you’re not 25 anymore.

Stuart Tartarone: I guess not. Maybe not.

John Koetsier: David, I want to start with you. How did this come about? What happened? Did you see an ad in a paper? Did somebody talk to you? How did you learn about this opportunity?

David Meilahn: It all started with the theft of a car. I had my car stolen. And for business, I had a radio telephone, a good old-fashioned radio telephone, which was very expensive to buy and very expensive to pay for by the minute, and not the easiest thing in the world to use, but it was extremely efficient, all things considered.

So my car got stolen in 1983, and I bought a new car. I immediately wanted to get a phone because I really missed it. So I went in to purchase one, and they said, we can do one of two things: you can do a radio phone again, or you can get what’s called a cellular phone, a word I had never heard before, actually.

And that’s a brand new system that’s going to be coming up, and they’re hoping to put it online in the next three months. So this was the middle of the summer of ’83. I made the decision that I’d rather be on the cutting edge than on the back end of an old system. So I said, I will do that.

They said, we’ll install the equipment, it’ll sit in your car for three months, and then we’ll turn the system on. And I said, we’ll see if it happens in three months, knowing what normally happens. Unbelievably, within about a month or two, they called and said, how would you like to participate in the kickoff of the cellular system?

We happen to be the first place in the nation where this is going to be kicked off. And I jumped at it because it sounded like a lot of fun. They said, we’re going to have it at Soldier Field, which was perfect, because I lived on a boat in Burnham Harbor in Chicago, which shares the same parking lot with Soldier Field.

Nice. I could literally walk to the event, so to speak. Anyway, on the day of the event, it just so happened it was my birthday, October 13th, so that gave me a doubly blessed day. The result was that they ended up having a race with, I believe it was 14 cars, in order to kick off the first official cell phone call.

The race had the 14 cars lined up side by side. And the technicians, each technician who actually installed the equipment in each person’s car, were lined up to run a 50-yard dash. When they ran the 50-yard dash, they had to get the keys from the owner of the car, unlock the trunk, and put in the final chip (I’m calling it a chip; I’m probably mislabeling it) that activated the system. And it was a cell phone. It wasn’t like a cell phone of today. It was a big box, just like a radio telephone, in the trunk of your car, and it powered what I kept calling a princess phone, a little phone, in the car.

So my technician lines up and he says, Dave, I’ve got some bad news and some good news. What’s the bad news? The bad news was, I’m going to be the last guy to the car. He was in his mid-30s, so he was an old man for technicians; all the rest were young 20-year-olds. And he said, but I’m going to have the chip in first, I’ll be the first one to install it. And then he held up the chip, and I believe Stu’s going to correct me here, probably, but I believe it had about 20 prongs on it, each about three-quarters of an inch long. He said, they’re going to bend them, and that’s going to make it impossible to get it in efficiently.

John Koetsier: Was this a SIM card? The very first SIM card?

Stuart Tartarone: It was called the number assignment module, but as David said, a lot bigger.

John Koetsier: How big was this? Not huge, but not …

Stuart Tartarone: Not like a SIM card today.

John Koetsier: Now we have micro SIMs and we have eSIMs, which … Back to you, David. Did he win the race or did you win the race?

David Meilahn: So anyway, as he said, he was the last guy to the car, and he was the first guy to get this plugged in. And then they had to give the keys to the owner of the car. The owner had to unlock their car door, get in, and start their car. He gave me a great piece of advice.

Jeff told me, when you get in the car, just sit there and look at the phone, because it’s going to light up like a Christmas tree. Once all the lights stop flashing, make your call, and don’t do it before, or you will trip the system up and it’ll have to reset itself. So I listened to him and did exactly what he said.

And our call made it to a head car that was bridged across the other 14 cars. That’s where the first phone call went, and from there it was forwarded to, I believe it was Alexander Graham Bell’s granddaughter, in Germany. Wow. So that was technically the official first commercial cell phone call.

John Koetsier: So David, I have to ask a question. You talked about having a radio phone. I have no idea what that means. I understand the concept: it was perhaps a phone system that went over radio waves, perhaps to some central switching station that then interfaced with the landline system. But what is a radio phone?

David Meilahn: I think Stu’s going to know more than me, but it was literally radio waves going to, I think, a central station. There were operators involved in it, and it’s so long ago that it’s hard to remember exactly, but they basically converted it to a land system.

John Koetsier: Wow. Stuart, what is a radio phone?

Stuart Tartarone: Basically, long before cellular, probably dating back to the 1940s, there was mobile phone service.

There was a transmitter in the trunk of people’s cars, and a big handset and device in the passenger compartment. And as David said, it operated almost like broadcast TV or radio. There was one big antenna in a metropolitan area that broadcast over the entire area. But the big deal about it is that there were only 10 or 12 channels. So think about metropolitan areas like Chicago: after 10 or 12 calls were made, the system was exhausted.

John Koetsier: Wow. Okay, so David, if you tried to make a call on your radio telephone before you had the cellular phone, would it sometimes just fail because the channel was occupied, or would you talk over somebody?

David Meilahn: It probably had some of the characteristics of a party line, from the standpoint that it had limited use, but it wasn’t so limited that it was a frustration. You just lived with the way you understood the system worked, and it really worked fine.

John Koetsier: I want to stick with you, David.

We’re going to go to Stuart in a moment and talk about the technology, the process, building the project, coming up with it, all that stuff. David, did you have a sense at this point that you were doing something that was world-changing? That was revolutionary? That would literally culminate in what we have these days: these tiny little devices in our hands that don’t have to be in the trunk of a car. Did you have that sense?

David Meilahn: Not at all. It’s just amazing what has happened, because my sense back then was, here’s the newfangled phone system, and technology moves on at the speed of light, so we’ll see how long cell phones last. One of my thoughts was, my gosh, they’re going to physically dot the United States with towers, and that’s how we’re going to talk to each other. That seemed a little unusual to me compared to satellite technology. I’m no expert, but I said, it seems like you should use satellites, though I understood there’s a whole other layer of difficulty there.

So I just thought it was going to be another method of telephony, and in 10 years we’d be using something different.

John Koetsier: Really interesting. You took the first step on the moon and it was just the next day.

It’s interesting: right now there are actually a lot of people trying to do cell phone service, quote unquote, via satellite. So maybe that is the next step, but it’s decades later. Stuart, let’s turn to you. You grew up taking phones apart. You’re the typical tinkerer.

How does this work? Take it apart. Your parents must have loved you for that. But you became an engineer and you started work on this project. What did you think about it when you first heard about it?

Stuart Tartarone: Going back to what you said, yeah, I did take phones apart, except we weren’t supposed to do that.

Because it was back in the old Bell System, where everything was controlled by the Bell System and by the local telephone companies. I went to an engineering school in Brooklyn, close to where I grew up in Queens. It’s now part of NYU; it was called the Polytechnic Institute of Brooklyn.

And in those days, recruiters came to campus. The Bell System would always show up with a recruiter from the local telephone company, New York Telephone, from Western Electric, which was our manufacturing wing, and from Bell Laboratories, which was later to become AT&T Bell Laboratories, our R&D organization.

And I was fully expecting to talk to someone from New York Telephone, because as a New Yorker, I didn’t expect I was going to move out to New Jersey. But lo and behold, I was only given the opportunity to speak to someone from Bell Labs. Afterwards, I was unhappy about that and spoke to my advisor, who said words to me that many people of my generation heard, and those words were: if you are given the privilege of working at Bell Labs, you have no choice but to accept.

So I said, whoa, I can listen to that. In those days I went off to what was called a plant interview and drove down from New York City to Holmdel, New Jersey, not too far from Middletown. I got off the Garden State Parkway and felt like I was in farmland; this was the sticks to me, in many ways.

It’s not too different today, if you have occasion to come here. I made a turn and drove down the road, and there was this tower coming out of nowhere. I was later to find out it was modeled after a transistor: the water tower that supplied water to the Bell Labs complex at Holmdel.

And that complex was just this beautiful building designed by Saarinen, who also did major architectural things like the TWA terminal and the Gateway Arch in St. Louis. He designed this building, and you walked into it and looked around, and it was just amazing. The way interviews were set up at Bell Labs at the time, we got to talk to four different organizations.

And the first organization I got to talk to was one called Mobile Systems Engineering. Interviews in those days were a lot different than today. You weren’t put through tests; you weren’t put through having to design something on the spot. It was a conversation. To me, it was a lot more refreshing than what we do today.

And I spoke to all the people there, and when I got done, the last person I spoke to, a gentleman by the name of Joel Engel, said to me, now you’re going to talk to three other organizations, and left this subliminal message in my head: nowhere else will you ever get the opportunity to work on something brand new, something that doesn’t exist today.

And he held up a book (I can’t find my copy of it), which was the technical report that AT&T presented to the FCC on what they would do if they were given the opportunity to create a new cellular communication system. They said, this doesn’t exist, and if you join us, you’ll have the opportunity to work on this.

Wow.

John Koetsier: So you joined; you had that privilege, you took that opportunity, you came on board. Did you start working on this project immediately out of college, or did it take a couple of years until you got embedded in it?

Stuart Tartarone: No. That’s how it worked then, and how it works today: you walk in the door, as I did at the end of July 1972, and you’re right into doing something.

And the opportunity I was given out of school was to work with AT&T Marketing on a market survey of the opportunity for cellular communications, so I could apply some of my statistical background in looking at data. This was about 1973 by the time we got out there. It was a very professional survey: survey questions went out, there were focus groups in major markets, and I got to sit on the other side of the glass and listen to customers talk about what they might do with it.

And the conclusion from that survey was that there was really no market for such a service. This was 1973. No market for such a service.

John Koetsier: Amazing. And not necessarily shocking or surprising, because when you come out with something entirely new, entirely different, you have to invent the market, right?

You have to show what is possible, and people have to say, oh, interesting, I didn’t know I wanted that. In fact, I didn’t want that until I understood what it was. So you’re in a massive organization, basically preeminent in its day. They have this new idea, this new technology, but the market surveys aren’t promising.

They aren’t saying, wow, this is a multi-billion-dollar opportunity, jump on it right now. How did it actually start? Did somebody take a big risk?

Stuart Tartarone: The simple answer is yes. But at the time we were this very large company called the Bell System, a million employees strong, and there were pockets of revenue to invest.

That was the great thing, if you look back over the history of Bell Labs. If that hadn’t existed, a lot of the technologies of the digital and wireless age wouldn’t have been invented: the transistor, digitization, information theory, solar panels, charge-coupled devices, and cellular technology. All of those were invented by AT&T Bell Laboratories, and all of those were invented in New Jersey. Amazing. And there was this thought about what would need to be done, and investments were put in place. Think of the transistor.

The transistor is the basis of everything; there are millions of transistors in this device today. But the technology that was created by AT&T was given to the larger industry to build on and use. Think about the first big thing transistors were used for: transistor radios, which came out in the 1950s.

John Koetsier: Okay, so you’re working there, you’re bringing out this technology, you’re about to launch it. David is unaware at this point, but you’re starting to work with the network and the installers and everything like that. Talk about the technology. I came to using a mobile phone when we had 3G.

Then LTE was a big deal, right? Which is essentially 4G, I believe. And now 5G is the thing. So give us a sense of where the technology fits on that scale.

Stuart Tartarone: Yeah, so I go back to what David and I talked about: the concept of one big transmitter. The big underlying concept of cellular was to be able to use low-power transmitters and take the frequency spectrum, which is a scarce commodity (it was scarce back in those days, and it’s a scarce commodity today), and reuse it many times. If you’re broadcasting at low power, you can reuse that spectrum.

That’s the whole basis of cellular technology. And this concept was brought forth by two people, Doug Ring and W. Ray Young, back in the 40s. They actually wrote a paper that talked about this. Ray Young actually became my first department head at Bell Labs when I started, but this went back to the 40s.

So you have this concept, and how do you implement it? Back in those days, we had cell sites (we call them base stations now; originally they were cell sites) to provide the signal. You needed a smart central controller, and Bell Labs had invented electronic switching, which came out in the 60s: a stored-program-control switching machine.

And the other element of it was a device in people’s cars. But one of the big things that happened just as I joined, to talk about enabling technologies, was that the microprocessor was born at Intel. Without that, none of this would have been possible, because that was the enabling technology, the game changer that made it possible for us to develop and deploy the system.
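To make the arithmetic of the frequency reuse Stuart describes concrete, here’s a minimal sketch in Python. The numbers are illustrative assumptions, not the real AMPS parameters: it compares the old one-big-transmitter model, where a metro area shares 10 or 12 channels, with a cellular layout where every cluster of cells reuses the same spectrum.

```python
# Capacity of one high-power transmitter vs. a grid of low-power cells.
# All numbers are illustrative assumptions, not historical AMPS values.

def broadcast_capacity(total_channels: int) -> int:
    """One big antenna: every call consumes a channel city-wide."""
    return total_channels

def cellular_capacity(total_channels: int, num_cells: int, reuse_factor: int) -> int:
    """Cells are grouped into clusters of `reuse_factor`; each cluster
    splits the spectrum, and every cluster reuses the same channels."""
    channels_per_cell = total_channels // reuse_factor
    return channels_per_cell * num_cells

if __name__ == "__main__":
    # Old mobile phone service: ~12 channels for an entire metro area.
    print(broadcast_capacity(12))  # 12 simultaneous calls

    # Hypothetical cellular build-out: 210 channels, 100 cells,
    # 7-cell reuse pattern (a classic textbook cluster size).
    print(cellular_capacity(210, num_cells=100, reuse_factor=7))  # 3000 calls
```

The point of the sketch is just the ratio: the same scarce spectrum serves orders of magnitude more calls once it is reused across many low-power cells.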

John Koetsier: Talk in a little more detail about what the innovation was in cellular networks. We heard David talk about his radio telephone, and you said you could get maybe 10 or 12 conversations going at the same time, and then you were out of spectrum, out of bandwidth.

What was the key innovation in cellular technology? You mentioned the low power: it’s low power, it’s local, you’re talking to a local cell tower, so somebody else could be on the same frequency but five or ten miles down the road, talking to a different tower. Was that the only innovation, or were there other innovations that allowed thousands, millions to use phones?

Stuart Tartarone: So related to that, think about it: it was a vehicular service at the beginning. And as cars drove around the city, you had to track where those cars were, recognize the area they were in, and see that they needed to be served by one cell site rather than another.

This is the concept called handoff: handing off from one cell site to another. The ability to do that, to track the vehicle, to receive the signal, to design the algorithms by which you tell this device sitting in someone’s vehicle to switch from one channel to another channel, that was a huge innovation. And part of it has to do with the distributed nature of the system.

One of the things I got to work on very early, as all these things were coming out, was the distribution of functions among the different elements of the system, from the switch to the cell site controller to the mobile controller, and how to optimize that in the best way so it would support this growth.
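As a rough illustration of the handoff idea Stuart describes (this is not AT&T’s actual algorithm; the signal levels and hysteresis margin here are invented for the example), a controller can compare signal-strength reports from the serving cell and its neighbors, and retune the mobile only when a neighbor is convincingly stronger:

```python
# Toy handoff decision: hand the call to a neighboring cell site only when
# its signal beats the serving cell by a hysteresis margin, to avoid
# "ping-ponging" between two towers of similar strength.
# All values are illustrative assumptions.

HYSTERESIS_DB = 6.0  # neighbor must be this much stronger (assumed value)

def should_hand_off(serving_dbm: float, neighbor_dbm: float) -> bool:
    return neighbor_dbm > serving_dbm + HYSTERESIS_DB

def pick_cell(serving: str, measurements: dict[str, float]) -> str:
    """measurements maps cell-site id -> received signal strength (dBm)."""
    serving_dbm = measurements[serving]
    best = max(measurements, key=measurements.get)
    if best != serving and should_hand_off(serving_dbm, measurements[best]):
        return best  # hand off: retune the mobile to the new cell's channel
    return serving   # stay on the current cell

# A car driving away from tower A toward tower B:
print(pick_cell("A", {"A": -88.0, "B": -79.0}))  # -> "B" (hand off)
print(pick_cell("A", {"A": -85.0, "B": -83.0}))  # -> "A" (within hysteresis)
```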

John Koetsier: So landline phones were analog: the signal was transmitted as analog and recreated as sound in somebody else’s ear.

Were the first cell phones digital? Did you send the voice digitally?

Stuart Tartarone: No. Again, let’s roll back to the 70s. Just as you said, it was analog voice. And behind that was a whole layer of digital logical processing, with commands coming from the switch and from the cell site to direct the mobile what to do.

So you had analog voice, and you had the control structure, which was all digital.

John Koetsier: Is that control structure what opened the door for SMS, for texting?

Stuart Tartarone: We’re talking many years later, so I’m not sure I would say that was the basis.

By the time texting came around, I think we were into 3G, and there were lots of changes from 1G to 2G to 3G. That was all brand new.

John Koetsier: Would you characterize the transmission technology in the first days, when David had his cellular car phone, as 1G?

Stuart Tartarone: Yes, it was 1G. But the voice quality, because we’re talking about a high-power transmitter in the trunk of someone’s vehicle, was exceptional. It was as good as a landline; David can provide the feedback on this.

David Meilahn: It was crystal clear. It was excellent.

John Koetsier: Wow. So it was better in some ways than what we have now. Certainly in the days of 3G, voice quality wasn’t amazing. Maybe 4G as well, huh?

Stuart Tartarone: And we knew it at the time. Think about it: today’s phones are low power, and you just could not get the same quality that you could with a high-power transmitter in someone’s car and a similar receiver.

John Koetsier: So David, you were in at the very beginning of a revolution. Did you keep upgrading? Did you stay on the cutting edge? And you still have that phone today, right? You still own that first phone?

David Meilahn: Yeah, actually, there was an event, I think about 10 years after 1983 … and my apologies.

John Koetsier: It’s all good. Everybody …

David Meilahn: What can you do? It’s a new phone system too; I’ve got to figure out how to …

John Koetsier: I think if you hit it with a hammer, then it will stop.

Stuart Tartarone: That’s a necessary device.

David Meilahn: Those darn landlines. Anyways. About 10 years after the cell phone started being used commercially in 1983, they went digital.

And they had an event to mark going digital, and they dragged me out for that next event. I actually consigned my phone as a donation to the Museum of Science and Industry. So I still have the car that the first phone call was made in, but the equipment is now at the Museum of Science and Industry in Chicago.

John Koetsier: Amazing. Amazing. What was the car by the way?

David Meilahn: It was a 1983 Mercedes-Benz 380SL. A nice, fun little car to run around in now. Not as easy to get into as when I made the first call.

John Koetsier: Excellent. David, as you look back: you talked about how at the time you didn’t have the sense that this was revolutionary, that you were the first person to make a cellular phone call, that this would literally take over the planet. But as you look back, and as you use your mobile phone today, what does it mean to you?

David Meilahn: I think it’s amazing how it started, what the average person thought about it, and the different milestones, I’ll call them, for the user. Whereas it was an instrument basically only for business or the wealthy, it has over the years progressed to bag phones, the brick phone, then handheld cell phones, flip phones, and then all of a sudden they became smartphones.

And the actual phone call part of a telephone is not necessarily the most important piece. It’s that everybody’s glued to their smartphone. And it can be bought by everybody in the world. It does not have to be only for business or the people who can afford it. Everybody can afford a cell phone.

And they use them like crazy,

Stuart Tartarone: right?

John Koetsier: Amazing. Amazing. Stuart, maybe some closing remarks from you, because you’re still working at AT&T. Amazingly. I’m not going to ask your age, but you’re no spring chicken. This has been the work of your life in a lot of senses, and I’m sure you’ve done a million other things as well, but is this the biggest thing you’ve done in your career, launching this and being part of this?

Stuart Tartarone: Most definitely. Speaking of first cell phones, this was one of the very first cellular phones that existed. This was the control unit that went into these vehicles. I’ve had this with me all these years. A lot of people don’t get this opportunity coming out of school, and back to what I said earlier: to work on something brand new that didn’t exist, that people questioned the market for.

And then here we are today with the proliferation that’s occurred, going from this to this. What a huge transition. And yes, I’ve gotten to work on lots of exciting things in my career, from there to personal computers to LANs, and today, as we virtualize our network, on tools to improve how we develop software, platform engineering. I even got to work on one of the first internet banking applications. But there would be nothing like what I got to work on in my first 10 years with the company.

John Koetsier: Amazing. And what a privilege, as you were told by your advisor, your student advisor, way back when in the early 1970s.

And I have to echo that today. What a privilege to chat with you, David and what a privilege to chat with you, Stuart. I thank you for your time and thank you for sharing your story. It’s fascinating. It’s part of history and I really appreciate it.

Stuart Tartarone: Thank you so much. Thank you. Good to see you.

David Meilahn: Thank you. Good to see you.

 

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Can generative AI make rockets launch faster?

generative AI in the enterprise

Generative AI won’t be building Falcon 9s or new space shuttles just yet (wait a few years!). But can it help with all the work that goes into running an organization that builds the future?

According to Kendall Clark, CEO of Stardog, yes.

Generative AI that democratizes access to data, insight, and knowledge speeds up organizations … and that can help with launching spaceships, or anything else. For NASA, a generative AI solution is apparently helping the team do in days what used to take weeks.

(Subscribe to my YouTube channel here)

Subscribe to the TechFirst podcast

 

AI summary: using generative AI to speed up the enterprise

The script is a conversation between John Koetsier and Kendall Clark, CEO of Stardog, during a technology podcast. The discussion revolves around the role of generative AI in speeding up complex processes within large organizations. Kendall Clark discusses how Stardog leverages generative AI for data management, differentiating their approach from other companies like Salesforce by focusing on on-premise and hybrid multi-cloud environments. He also explains their strategy to prevent hallucinations or errors in AI-generated responses. The conversation concludes with the significance of creativity in AI applications.

AI transcript: generative AI and institutional knowledge systems

John Koetsier: Can generative AI make rockets launch faster? Hello and welcome to TechFirst. My name is John Koetsier. I’ve done a ton of TechFirsts on generative AI. It’s getting pretty good: OpenAI just announced it can see, listen, and talk back. What about the enterprise? What about companies, big organizations?

Specifically, what about NASA? A generative AI solution is apparently helping NASA take what used to require weeks down to days. Thanks, of course, to generative AI.

To dig in, we’re joined by Stardog CEO Kendall Clark. Welcome, Kendall.

Kendall Clark: Hi, John. Thanks for having me.

John Koetsier: Hey, great name, Stardog. That’s an awesome name for a company. My first question is: is NASA going to Mars next month, thanks to generative AI?

Kendall Clark: When NASA returns to the moon and goes to Mars for the first time is more a function of the U.S. Senate, to be honest, and those budgets and that sort of thing. It’s obviously an expensive endeavor.

But to answer your question, I think generative AI can help speed up a lot of complex things. There’s going to be a bunch of waves, and in this first wave, I think most of the interest among the educated public, or people who are paying attention (it’s not everybody, we should remind ourselves), is because of its impact in what we call the B2C space. Help me make an invitation for my child’s seventh birthday party, and make it have dragons and fairies and something else, and out pop these amazing images.

We’ve all played with that, and I love it. I’m addicted to it. Frankly, I can’t make a deck now without some generative AI images, so much so that my employees tease me about it. Stardog is a great brand partially because it lends itself so well to that: I have a folder called astronaut dogs on my computer that’s got God knows how many variations. I’m obsessed.

In the enterprise, I think the first impact from generative AI will be in what we can call question answering: the movement from query writing to question answering. I would say that something like (and I’ll just make this up) 60 percent of the value, more than half of the value, that enterprises get from information technology at all is in the area of answering questions of data.

And the dominant way that’s happened to date is: there is some data in a database somewhere (there are a lot of those, in fact), and someone who’s either very smart, or someone who spent a lot of money on a product like Tableau and has a BI tool, manipulates an interface in some way.

That results in a simple to very complex query, often a SQL query. That SQL query goes to a system, gets executed, an answer comes back, and value is achieved because the answer says false instead of true, or it says 17,000 instead of minus 17,000, or it says John instead of Kendall, or whatever.

John Koetsier: Or, more likely, you discover: oh, shoot, that was actually not the right question to ask the data, I should have asked this question. And then you send that question back to the data analyst. The data analyst goes, why did this idiot give me the wrong question in the first place, then runs that query, and that takes another chunk of time.

And then you go back and forth, and finally, five or six levels down, you have what you think might actually be the real answer you want.

Kendall Clark: You’re a hundred percent on it. And what that means is there is space, organizational friction. Now, people were employed because of this friction, to make it work anyway. But there’s space between my intent of wanting to ask a question of the data, the process that translates it into a query (by a piece of software, by some people, or both, typically), the execution, and the answer coming back to me: oh, it wasn’t quite right. For the folks in your audience who are IT-minded, or history-of-IT-minded, this is a little bit like what it was like to write code in the sixties.

Or even in the seventies: you’ve got two chances per day to ask questions, to make changes to your code base, because of the compiler and the toolchain and the nature of the languages and the slowness of the computers back then. You start your morning run; it takes four hours to compile. You go find something else to do. You come back at midday: oh shoot, I forgot a semicolon or something like that, right? Run it again; the day is shot. But now we use these super-fast computers, high-level languages, dynamic compilers, stuff like Python. I can ask a million questions.

So the iterative cycle has sped up tremendously for programmers. For generative AI in the enterprise, the first big impact (my prediction that’s not even a prediction; I think we’re already seeing it) is the movement from query writing to question answering. Query writing is this process we’ve been talking about: it works, but it’s slow, it’s error-prone, and it’s got a lot of extra people in it.

There are a lot of people between the data and my intent. That’s all going to get compressed. In question answering, the LLM takes the place of all that space, right? If you’re asking, what is the test readiness of this subcomponent of the crewed capsule for the return to the moon, and give me the full lineage and traceability of it all … that’s a complex question that’s literally rocket science, right? And now, if not just technical people but something like everyone, knowledge workers, can interact with the data directly from their experiential point of view (although there’s a lot of stuff in between, obviously, a big stack), that’s going to compress all those cycles, much like the rise of dynamic languages did for programmer productivity.

We’re going to see that for what we used to call general office workers: business analysts, knowledge workers, people whose jobs depend on interrogating the data. With normal technology, “interrogating” is a metaphor, a fancy word. In the LLM era, the question answering era, interrogating is not a metaphor anymore. We’ve concretized the metaphor; we’ve made it literal. You’re literally going: what about this? What about this? What about this? Firing questions at the data by typing them out. And as you say, OpenAI is extending this to a verbal thing, which is fun, but it’s not going to really change the answers you get back.

That’s the first thing we’re going to see, and I think that’s the original nub of your question. So in that sense, it can help everything that NASA does, everything our customers do, everyone who’s engaging with this technology. It can help them make their jobs go faster, better, because effectively, as I like to say, it democratizes access to data.
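Here’s a minimal sketch of that movement from query writing to question answering. This is not Stardog’s API (Stardog works against a knowledge graph, and its platform details aren’t covered in this conversation); call_llm is a hypothetical stand-in for whatever LLM client you use, and the schema and question are invented for illustration:

```python
import sqlite3

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; swap in your real provider's API here."""
    raise NotImplementedError

def answer_question(question: str, conn: sqlite3.Connection, schema: str):
    # 1. Use the LLM to translate intent into a query, not to answer
    #    from its own memory.
    prompt = (
        f"Given this SQLite schema:\n{schema}\n"
        f"Write one read-only SQL query answering: {question}\n"
        "Return only the SQL."
    )
    sql = call_llm(prompt)

    # 2. Execute against the actual data, so the answer comes from the
    #    database, not from the model.
    return conn.execute(sql).fetchall()

# Example usage (invented schema and question):
# schema = "CREATE TABLE components (name TEXT, test_readiness TEXT);"
# answer_question("Which components are not test-ready?", conn, schema)
```

The design point is that the LLM compresses the query-writing loop (the analyst, the back-and-forth, the re-asks) into one step, while the data itself still answers the question.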

John Koetsier: It’s interesting, because when I got the pitch for you to jump on the podcast, what I immediately thought of was enterprise knowledge management. There have been huge products and projects in enterprise knowledge management going on for, I want to say, decades, and I don’t think that’s an exaggeration.

What do we know? How do we categorize what we know? How do we put it in a place where people can access it? How do we search it? How do we surface it? And as we were pre-chatting before we started recording, you were saying, hey, that’s not our space. We’re not about that sort of static data, that static knowledge, documents, and stuff like that.

We’re about data. Talk about the evolution of knowledge management and how you fit into or contrast with it.

Kendall Clark: Yeah, it’s a fair question. I feel like I’m cynical about knowledge management; lots of people are, that’s the average view. As we were talking before, you signaled some of that yourself.

Let’s start with the fair thing to say: it has made sense at all points since, let’s say, 1970 for big companies to make some investments in what we should really call library science, because that’s really what it is. And I don’t mean that as a joke; I mean it seriously. Librarians serve a really super useful function in our society by organizing knowledge, right?

And no one in your audience who’s under, what would you say, 40? Certainly no one under 30 will know what these next words mean. But remember, you used to be able to go to a library. It was a place, right, out in the world. Not the mall. And it had knowledge in it.

Primarily in the form of books, but other forms as well. And there were these people there who basically organized that knowledge, and they sat there and waited all day for you to come in and ask a question. And they loved to help you answer that question. That was a way our society worked, right? It was a kind of socialist vision, frankly, strictly speaking: knowledge was a common asset that we had created as a civilization, and we should all have equal access to it.

You even used to be able to call them on the telephone and say, I’m writing a story about the winter migration of carrier pigeons in Finland, I don’t know, something, whatever. And they’d go, okay, there’s a book about that, come get it; they’d be very excited.

And then you would read it and you would know stuff. The web ruined that, or destroyed it, or changed it, altered it forever. In some ways, what’s happening with the web, what Google has done, is find a way to make the machines do a lot of that work. And with respect to documents, the web is just a collection of documents after all; we mostly self-serve. We go to the search bar; everybody knows how to do this. That has replaced calling a librarian, but we’re satisfying the same human need. So with respect to knowledge management, doing that for large bodies of information inside a big enterprise, my hat’s off. It’s on the side of the angels, right? I’m not cynical about that.

What I think we can be cynical about is that there was always this obvious (obvious to me) collision course between what we typically call data management (enterprise data management, ETL, databases, data warehouses) and knowledge management. Those needed to, as I like to say, be on a collision course, smash together, and mutate into some new thing. And it has obviously been the case for the last, say, three or four years that that’s happened. LLMs in particular make it undeniable: you can extend this question answering capability we were talking about previously to all the other traditional jobs to be done in data management (data modeling, data mapping, data quality discovery, metadata management, inference rules) and then to the traditional realm of data science. Machine learning has eaten all those things, in a way that’s now really accessible to everyone, not just to Google. And that’s going to forever change the practice of data management, just like the web forever changed the practice of library science and knowledge organization. That’s my non-cynical take on what’s happening.

John Koetsier: It’s really interesting, actually: if you could somehow study and understand what percentage of human knowledge resided in dead trees, and then how that moved to documents, to electrons on hard drives, and how that transition continues as a greater and greater percentage of our operational knowledge moves to more dynamic forms of knowledge: databases measuring ongoing processes that are real-time in a lot of senses.

That’s an interesting transition: what percentage of the world’s knowledge lives in different places. Certainly the percentage in live databases, databases that are growing, measuring live activity, that you want to query because you want to know the status of that live activity, is growing. And being able to access that easily is really impressive.

Kendall Clark: Okay, so this is a super interesting question. You didn’t mention, I should say, a third important source, which is the knowledge that resides only in and between people. That’s knowledge that, for a variety of reasons, people didn’t need to write down, haven’t had time to write down, or that’s just too fluid and doesn’t fit in a database. You don’t really put stuff into a database until it has a particular kind of ossification of form. By that I mean: traditionally, what we mean by a database is a relational database, a particular data model. It’s not the most agile, flexible thing. In fact, it’s rigid.

Relational databases were typically intended for basically accounting data, and accounting data, whatever else it is, is not dynamic and fluid and creative. The values may change; the status of an account changes. But the rules of structure, the GAAP rules, are pretty settled; it goes back to, what, the 16th century, Florence, double-entry accounting? A lot of that stuff is really old and well understood. Then you jump ahead to somebody like NASA, literally trying to do rocket science, getting humans back and forth across the solar system, and they’re learning new things every day.

They’re right on the cusp of the boundary, the border between knowledge and ignorance; on the other side of what we know is the black, scary nothingness of ignorance. And if you’re trying to peck away at that, push that border out a little bit every day, you may need different techniques for data management.

What I think is interesting, and then you add this to it, is that while the big historical trend is to go from books to electronic form, all of the growth forecasted for enterprise data in the next 10 years is not in what we call structured data, databases. It’s in semi-structured and unstructured data.

So this conversation, 20 years ago, was two guys talking, right? This conversation five years ago was a thing you could watch on YouTube. This conversation now, or any time in the future: I push a button at the end, you push a button, out pops a transcript. We stick that into some kind of knowledge platform, and all the entities, everything we mentioned (I mentioned migratory patterns of birds in Finland, we mentioned libraries, you mentioned those accounting rules) pop up as nodes in a graph, the knowledge graph, right? With connections between them.

Okay, this is a conversation of a different kind, but if we’re doing this for work, this might be work product, right? So there’s that transition in where the knowledge is. It’s what they call, in academia, the sociology of knowledge: the production of knowledge, thinking about knowledge getting produced the way you’d think about cars getting produced. It’s an industry, there are processes, there are inputs and outputs, and you can measure it.

There’s been this whole big field in academia, since probably the eighties, studying the output of knowledge, which is what we’re talking about. When knowledge management and data management collide and fuse into a new thing, you take a lot of those techniques, a lot of the new algorithmic insights, and help big companies manage the data they produce better.

I’ll stop with this; I don’t want to give you a filibuster here. It’s interesting to think about companies’ competitive landscapes vis-à-vis one another. Take two big global pharmaceutical companies: maybe the most differentiated assets they have are their data sets.

You could maybe swap all the people, and the people at one pharma could do the jobs at the other, with some winners and losers at the margins. But the easiest way to destroy a company, as a thought experiment, is to take all their data and swap it with their nearest competitor’s.

You come to work on Monday and, let’s say, all of GSK’s data belongs to Novo Nordisk, and vice versa. You haven’t destroyed anything; every byte is preserved, right? But you just swapped the data, and what happens? They’re destroyed. So I think it’s difficult to overemphasize or exaggerate the importance of managing data and managing knowledge. And generative AI produces text, and all of this knowledge and data management ultimately more or less ends up in text (let’s say images are a form of text, right? close enough), so the applicability of these techniques to this area is pretty much endless.
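As a toy version of the transcript-to-knowledge-graph idea Kendall sketches, here’s a minimal example. The entity extractor is stubbed with a hardcoded list; a real pipeline would use an NER model or an LLM, and a real platform would use a proper graph database rather than this in-memory structure:

```python
from collections import defaultdict
from itertools import combinations

def extract_entities(utterance: str, known_entities: list[str]) -> list[str]:
    """Stub extractor: real systems would use NER or an LLM here."""
    return [e for e in known_entities if e.lower() in utterance.lower()]

def build_graph(transcript: list[str], known_entities: list[str]):
    """Entities become nodes; co-mention in one utterance becomes an edge."""
    edges = defaultdict(int)
    for utterance in transcript:
        mentioned = extract_entities(utterance, known_entities)
        for a, b in combinations(sorted(mentioned), 2):
            edges[(a, b)] += 1  # edge weight = number of co-mentions
    return edges

# Invented mini-transcript and entity list for illustration:
transcript = [
    "We mentioned libraries and the accounting rules.",
    "I mentioned migratory patterns of birds in Finland and libraries.",
]
entities = ["libraries", "accounting rules", "Finland"]
print(dict(build_graph(transcript, entities)))
# {('accounting rules', 'libraries'): 1, ('Finland', 'libraries'): 1}
```

Push a transcript in one end, and nodes and weighted connections come out the other: that is the unstructured-to-structured move he’s describing, in miniature.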

John Koetsier: Yeah, it’s an interesting space. I just came back from Salesforce’s big Dreamforce conference in San Francisco, and they’ve added a ton of generative AI to Tableau, which you already mentioned, and to other products as well.

Their vision, also, is that you can query your data in natural language, anyone can do it, everyone is a data scientist, all that stuff. And that’s super powerful. Of course, if you are going to buy Salesforce, I’m pretty sure you’re in for significant charges for each user.

There are significant challenges there, but it is a compelling vision: all the data that a company produces is at your fingertips; you have control over it, access to it, permissions for it; and you can query it, you can know what’s going on. You’re a sales rep, a sales manager, a product manager: you can instantly know all this stuff. How does your vision differ from that?

Kendall Clark: Well, at a high enough level, it doesn’t. That’s exactly what we want to do. But I think there are differences that matter. First off, Stardog is really focused on financial services, pharma, and manufacturers of a certain size. And Salesforce is an interesting example, because they did not start off as an enterprise data management company. They became an enterprise data management company because strategically they decided to move in that direction: they make a lot of money, they have, frankly, excess capital they need to deploy, and they could have done many things. But moving into data management makes sense for them because they do control a strategically critical corporate data asset in the CRM, and that gives you some leverage. The acquisition of MuleSoft, a couple of years ago, six years, whatever, was a big signal: hey, we’re going to be a data management company.

But I think probably the biggest differentiation between our vision and theirs is that Salesforce is really a cloud company.

I, the big, I think probably the biggest differentiation between our vision and theirs is Salesforce is really a cloud company. And they’re really best at managing and connecting data that exists in the cloud. But companies still have a lot of data in what we call on prem, not in the cloud. And our focus has always been on that data that either hasn’t gone to the cloud or we’ll never go to the cloud.

So Stardog is a cloud platform, but it also can operate on prem. It’s a Kubernetes platform, which. Technical folks in your audience just means, Kubernetes basically replaced the Java virtual machine as the enterprise delivery mechanism. The dominant one. But that just means you, we can operate our platform.

Our customers operate it both on prem and in any cloud environment. Excuse me. That just means startup could be adjacent to data no matter where it is, not just the part of the data that’s in the cloud, even if in the end, in the next 10 years, let’s say, 80 percent of all corporate data resides in the cloud, 20 percent of all enterprise data is still a very large amount of data.

And it needs to be connected. And what we’re really focused on is connecting data and then making it accessible with this LLM technology we’ve been discussing in what I like to call the hot everyone calls the hybrid multi cloud. So that part of the data that’s on prem hasn’t moved to the cloud yet.

Or again, my favorite statistic, 85 percent of all businesses, irrespective of size, have data assets at more than one cloud. Now for most businesses, that means they have Salesforce and HubSpot, right? Which is fine. And those are different solutions, different clouds, but really, that problem is going to get solved for SMB and small businesses by those vendors.

But it’s true of big businesses, like our big banking customers have data everywhere in every conceivable. Location format. And 

John Koetsier: And it’s not comforting that the financial industry has this data everywhere.

Kendall Clark: That doesn’t mean they’re not controlling it; I just mean it’s everywhere. The most globally significant banks are unlike, say, Facebook in one important regard: Facebook is like a teenager of a business, and globally significant banks are like grandparents. They’re on average, what, 75, 100, 150 years old? They’ve existed longer than computers have existed, which means that if you take a cross-sectional slice of a big bank, you’re looking at the archaeology of the last 70 years of IT.

They started with mainframes, or even before mainframes, boxes of punch cards or whatever. And they’ve got one of everything, and there’s legacy all over the place. They have data everywhere in that sense. That may also not comfort you, which is fair. But that’s a tough problem to solve.

You’ve got a system that’s running. It works. It meets requirements. It’s just old. And you meet some IT people whose smart view is: don’t mess with that, leave it alone, it’s running, why mess with it? And then somebody else, equally smart, says: no, we need to modernize that. Nobody’s right or wrong there.

It’s just a hard problem. Not to come on your show and make a brief for banks, but you get my point. They’re in a tough spot when the organization is 150 years old.

John Koetsier: They are in a tough spot. And that’s why we see the rise of neobanks. But we are straying far afield, so we’re going to pull it back here.

So you’re building your solutions so that organizations can query their data. That’s great. NASA is using it; others are using it. How do you solve hallucinations? That’s obviously a challenge with generative AI, and it’s a problem you cannot have in your scenario. It’s a problem I’m okay with if I’m talking to OpenAI: does it pass the sniff test? I can double-check it. Bard, Google’s chatbot, just added some double-checking of what it says as well. How do you solve hallucinations?

Kendall Clark: Yeah, look, there’s a cheating answer to this question, which is what we’re doing, and then there’s the hard research question of making the LLMs stop hallucinating. I won’t address that; it’s a research question, and it will get solved, I suspect, to some appreciable level of quality, precision, and recall.

The first thing to say is that LLMs are not databases, and they should not be treated like databases. The way we solve it, which is somewhat cheating, is that we don’t use LLMs in Stardog as a source of data.

We use it as a mechanism to discern human intent. Which is a kind of a fancy way of saying, what is the person talking about in their natural language? Whether that’s my, my, one of my co founders is my CTO is from, born in Istanbul. So he speaks Turkish and English. And I said to him at some point, I said like, how’s the LLM working in Turkish?

As as a joke, I knew they worked for many natural languages, but not necessarily for all of them. And he just showed me a demo. This was this summer. And it was just straight up Turkish and it just worked. It was amazing. We use the LLM as a way to figure out what the person is talking about.

We translate that into a query, or a search, or a hybrid query-search, or a data modeling piece, or a data mapping piece, or a rule: something that then gets executed by our platform. And that cuts out the chance for hallucination. What it means is that sometimes the LLM will get the human-intent determination wrong.

But then that just means a bad query. Now we're back in the case you described earlier: someone expressed a business question, some other mechanism translated that into a query and didn't get it right. Frankly, relative to the status quo, that's nothing new. So what happens? You just redo it.
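
(A quick illustration of the pattern Clark is describing; this is a minimal, hypothetical sketch, not Stardog's actual API. The LLM only translates the question into a query, and the database, not the LLM, supplies the answer. Here `fake_llm_to_sql` is a canned stand-in for a real model call so the example runs end to end.)

```python
# Minimal sketch of "LLM as intent translator": the model drafts the query,
# a real database executes it, so facts can't be hallucinated; only the
# intent can be misread (and then you just redo the question).
import sqlite3

def fake_llm_to_sql(question: str) -> str:
    """Stand-in for an LLM call that maps a natural-language question
    to a query (a real system would prompt a model with the schema)."""
    if "how many customers" in question.lower():
        return "SELECT COUNT(*) FROM customers"
    raise ValueError("could not discern intent; rephrase and redo")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

sql = fake_llm_to_sql("How many customers do we have?")
print(sql, "->", conn.execute(sql).fetchone()[0])  # ... -> 2
```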

John Koetsier: And I've been in that scenario quite frequently, and usually it's not the data analysts who got it wrong. Usually it's me who asked the wrong question.

Kendall Clark: Not specifically, but yes, almost always. It is just frankly, and I tell the team this all the time, an LLM is not magic. It cannot determine intent when no crisp intent exists yet, but that’s okay.

People often find their way by asking a, frankly, not very good question, and then we ask a slightly better question, and then we iteratively improve, and then we discover what our intent was all along. More likely, we create the intent in this iterative process and then retroactively attribute it to ourselves. It's a psychological thing we do, and that's fine.

"Oh, what I meant all along was this." Probably not; it's what you mean now, and here's the answer. Fine. Nobody's throwing rocks at that. It's just normal human stuff. So in our approach, we don't ask the LLM any questions where its hallucinations can bother us, right?

Cool. And in the near term, that's the best solution. It's use-case specific, context relative. That's what you want to do in a regulated industry where the questions really matter. But if I want another cool picture of an astronaut dog in Midjourney or something, I want that slightly random, quote-unquote "wrong" component, because that's really the source of creativity.

John Koetsier: Absolutely. And creativity is a wonderful thing, just not always when you're querying data from your own company. Kendall, this was interesting. It went places I didn't expect it would. Thank you for taking the time.

Kendall Clark: Thanks, John. I appreciate it.


DEI in AI: Is diversity, equity, and inclusion a solved problem in AI?

dei in AI

Is diversity, equity, and inclusion (DEI) in AI a solved problem?

I've written a lot of stories lately about AI. AI is critical to our future of automation … robots … self-driving cars … drones … and … everything: smart homes, smart factories, safety & security, environmental protection and restoration. A few years ago we heard constantly how various AI models weren't trained on diverse populations of people, and how that created inherent bias in who they recognized, who they thought should get a loan, or who might be dangerous.

In other words, the biases in the people who create tech were manifesting in our tech.

Is that solved? Is that over?

To dive in, we’re joined by an award-winning couple: Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC and featuring celebs like Neon Deion Sanders.

Enjoy our chat with full video here, and don’t forget to subscribe to my YouTube channel …

Or, listen on the podcast

DEI in AI: solved problem?

Not a subscriber yet?

Find a platform you like and let’s get connected.

Transcript: fixing biases in AI

Note: this is AI-generated and likely to contain errors.

John Koetsier: Is equity, inclusion, and diversity in AI a solved problem? And are there new challenges with generative AI? Hello and welcome to TechFirst. My name is John Koetsier. I've written a ton of stories lately about AI, because AI is critical to the future of automation. It's critical to the future of robots, self-driving cars, drones, and everything else: smart homes, smart factories, safety, security, environmental protection and restoration.

A few years ago, we were hearing a lot about how various AI models weren't trained on diverse populations. That created inherent bias in who they recognized, who a model thought should get a loan, who might be dangerous.

In other words, the biases in the people who created the tech were manifested in our tech. What a shock.

Is that solved? Is that over? To dive in, we're joined by an award-winning couple, Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC, featuring celebs like Neon Deion Sanders.

If you're not into football because you're into tech: he's a Hall of Fame football player. And they've won multiple awards over the past few years for their agency. They're in a unique position to see the impact of AI on everyone, not just white-ass English-speaking tech workers. Welcome! How are you guys?

Hey, so pumped to have you guys. Thank you for taking the time. Let me just start here: why did you guys start digging into AI?

Dawn Wade: From a multifaceted perspective: I'm a researcher by nature, so when new things come out, I always look for gaps within them. It was just a personal interest to me. But then, as you see how our country is moving, it always makes me wary when things aren't created in a, and I'm going to use a wholesome way of saying it, but when they're created by an individual. Because, having a computer engineering degree, I've recognized that you're only as good as the data, or only as good as the input that goes into a system. So it made me question: how is this being developed, and how does it know enough about me to represent someone like me?

So that was the first facet of why it was very interesting to me. The second one is that we work in advertising, which relies on a creative nature, and creative has to be very nuanced to people: their habits, what they like, what they don't like, how they talk, where they're from. And that's very hard to capture on a day-to-day basis in marketing.

So we were very interested in how that works when we're trying to resonate with a real person. How can a computer resonate emotionally, to convince you to buy something or to do something? Two very different ways in, but it's continued to be interesting to us.

John Koetsier: What have you found when you've looked into generative AI and how it creates people and imagery in your space, in advertising and marketing?

Has it been compelling? Has it been interesting? Have you seen challenges with it?

Stacey Wade: I think there are definitely mixed challenges in it. You know, for me, when I think about AI, it's so new. I feel like we're just barely creeping up the hill with this thing. But what stands out to me is something that was happening probably in '21, and even when we started the agency, which was representation in the industry. What we're noticing is that the output of those images is very similar. There's a confluence with what we experience in agency life, which is: are we represented in a way that is authentic to who we are?

And is the voice and tone also authentic? And that's the one thing where there seemed to be some juxtaposition, which was: no. And that was something that was a little bit scary for me personally as an artist. I'm not a computer engineer; I'm not Dawn. So I look at things completely differently.

It's very abstract for me sometimes. So to see that shown in a way that looked very familiar to when we started the agency 21 years ago, seeing that show up in AI, was a little concerning.

John Koetsier: 21 years ago? You guys don't look old enough to have started the agency 21 years ago. Is this generative AI in action right here?

You're deepfaking yourself.

Let's dig into that, Stacey. You talked about representation; you talked about authenticity; Dawn talked about authenticity. What were you seeing that was not adequate representation? What were you seeing that was not authentic?

Stacey Wade: Yeah, I just think tonality. A lot of those conversations that were happening in 2021 about AI show up today in the generative experience. You're seeing it show up in the advertising. For us in advertising, last year it was all about the metaverse; this year it's all about AI.

And the people that are using these tools sometimes bring their biases into the tools they're using, and that's showing up in the output. The output doesn't look like me; it doesn't have my voice. And that's the part that we're trying to address as an agency. Luckily enough, we have left brain and right brain on this call: somebody that understands that space and has the background to put very logical, pragmatic thoughts around AI and what that looks like for us, and somebody to take a more artistic approach, to understand tonality and touch and feel, and to make sure that, from a culture standpoint, we're showing up and not being erased or dumbed down in a way where AI is basically pulling from inherent biases.

Pulling those inherent biases into the images is something that we're trying to aggressively have a conversation about, and aggressively be a part of that conversation, so that, the same way we came into an industry that left us out, we can start to include our thoughts and tone and authenticity as part of the output of AI.

John Koetsier: I'm wondering what that looks like.

Go ahead.

Dawn Wade: AI is perfect; as people, we are imperfectly perfect, right? So if you're looking at an image that's AI-generated, all the strokes are going to be right. It's going to be balanced; it's going to be symmetrical. But a true artist is going to have his signatures within it.

The way he strokes the brush is going to be slightly imperfect. And I think we as people are okay with being imperfect, but AI is looking to achieve a perfection that, I think, takes away from the authenticity. That's my layman's way of saying it: the nuances of that person, or that artist, are some of the things that somebody like Stacey is going to value, because he's an artist.

But in that generated image, the eyes are going to have that slight line, and that line means something on the eye; there are certain attributes AI is not going to see. The beauty a real person looks for in something is going to be missing from AI-generated content.

John Koetsier: So Dawn, AI's gift to you, and generative AI from Midjourney and Stable Diffusion, is people's hands.

They're getting better, but they're not great. But I know that's a different thing than what you're talking about. I'm trying to dig into this thing about authenticity, because I think many people might approach that from a diversity perspective, but also from an artistic perspective, right?

And I'm wondering: do we have people sitting in a high-rise in New York City, or somewhere in LA, saying "give me an urban scene" to Midjourney or Stable Diffusion, and then getting something very stereotypical? Or what's happening?

Dawn Wade: That's exactly what that is. It's what you think you saw on TV, or saw in some magazine; you think that that's representative of this particular scene. And it's not, until you live those experiences.

You can't dictate that for the next person. And that's where DE&I comes into the space, because there's not a program that teaches us diversity, equity, and inclusion. These are lived experiences. I can speak as an African American woman, but Stacey's experiences are going to be different, because he's not an African American woman.

He's a man. And my Hispanic counterpart is going to have a different experience. But this isn't just a race situation. When you look at the LGBTQ community, the disability community, those are nuances. They cannot be captured appropriately in AI. You know, when it started really ramping up in 2020, it was all about facial recognition.

But when it comes to other things, it goes deeper. You can't do that adequately from an AI perspective, in a way that's going to be representative of those communities, even amongst women. There's a technical aspect to it, there's the community usage of what that means, and then there's who it's developed for, right?

Something developed for a consumer versus just an individual user is going to have different nuances in it. So I think we can't address AI with a broad stroke. It has to be chiseled, in a way, if we want it to be sustainable and safe to use.

John Koetsier: What does that look like, then? Because generative AI isn't going away.

People are going to use it, and frankly, many of those models are open source; they're out there. There's a massive amount of innovation and creativity happening there, and it's kind of a land grab. And it's also an explosion of capability: people who could never create something like this artwork, or that scene, or these people, are now doing it.

So the genie's not going back in the bottle.

Dawn Wade: Absolutely not. But we can't go with that "if you're not first, you're last" mentality. And that's what a lot of the software is; that's what a lot of the platforms want: to be first to the scene. You have to get away from that mentality when it comes to AI.

You have to invest the time to connect with those that you want to target. So if it's AI that's targeted at a certain group or a certain usage, you have to bring in people who are experienced, or who have those cultural nods, and allow them to give you the inputs to get it right. Because the one that's going to be long-lasting and most successful is the one that gets it right.

The one that's first is not going to make it to the end point. So that's the moment in which you need to pivot the mindset: you don't have to be first to market, but you need to last in the market to make it successful, because you'll be the one who focused on getting it right versus getting it first. And I don't think many are that way at this point.

John Koetsier: Stacey, talk about what that looks like for a brand that plans on being around for a while, plans on serving its community well, wants to connect to its community. AI, and generative AI, looks like a cheat code: boom, check, done, got it.

Talk about how you believe the lack of authenticity in that will impact that brand over time.

Stacey Wade: I think that's something brands struggle with today even when you remove AI. So now you're throwing in another level of complication with AI. Have you ever read that book Blink, by Malcolm Gladwell?

He speaks about how you just look at something and you just know: even though it looks like me, there's something that's just not quite me. And listen, just as AI is changing the landscape, let's not get it twisted: the consumer is also changing, and they're changing really quickly. They're becoming very smart, and they can see something that's authentic.

They can sniff it out. They know. So as much as brands want to jump into AI, and we've seen brands jump in quickly, to Dawn's point, you see brands making this quick charge in to be first. And what we've noticed is that they're getting it wrong. Now you start to see them take steps back and not want to be first, and now they're becoming laggards.

So now they're trying to figure out: okay, how do we actually do this in a way that's authentic? And I think that starts with brands actually being authentic, so that they can understand the blink effect: okay, I know that this is real, I know that this is not real. I know that we need to make this as perfect as possible, but we need to also bring in those cultural nods.

So you need to bring in people that are able to see those nuances, able to understand tonality, able to understand that the hat is actually not a Detroit Lions hat, it's actually Fear of God. Those are very small details that don't show up in AI, because AI would take this image and make it into Detroit Lions, but it's not. You understand what I'm saying?

Tigers, not Lions, I get it. But there is a nuance, a cultural aspect to the logo, that has to come through in the output. And a lot of what you're seeing, you even mentioned it when you talked about urban communities: AI is inherently going to bake in some biases about what it views as an urban community, but my urban community is completely different than, say, your urban community.

So brands are going to need, as we say, "when in Rome, bring a Roman." They're going to need people that really understand these cultural nods and nuances, to protect them from themselves, and to add a layer of authenticity that is actually going to be beneficial, not only to them, but to the consumer that they're trying to reach.

John Koetsier: It's pretty crazy challenging, isn't it? Because our technology is a reflection of ourselves, and sometimes it amplifies bits of what we do and what we create, and all that stuff. If we look at our culture over the past 30 or 50 years, and then you look at the token person of color in the TV show in the '70s or the '80s, and how that person was represented, or how that person needed to be represented: the corpus of knowledge that AI is drawing on, whether it's that or just remnants of that, is cultural detritus that accumulates over decades and then manifests itself in my image of what somebody in Detroit who is Black and grew up there is, versus your image. It's such an insanely complex web of everything. How can anybody get it right?

Stacey Wade: You're nailing it. But that's where Dawn said something that almost clicked like a flag in my own head.

It's the ones that are taking the time to not rush down the hill, but are actually taking the time to walk and understand what's actually happening, so that they can give you the best output. Being able to curate is similar to how we curate our own agency, being able to bring different people into the agency.

It's not a Black agency. It's not a Hispanic agency. It's not a white agency. It's a culture agency. It's being able to weave in these different cultures, to slow it down fast enough so that you can speed up as the technology is moving forward. So it's not a matter of saying it's going to go away.

We know it's not going away. But we are saying that we want to offer up cultural nods and nuances to make it better.

Dawn Wade: But I think anything without checks and balances is dangerous. So when it comes to AI-generated content and things like that, it's generating the output, but what's the check and balance?

When you get that output, do you then go and check to make sure that it's going to resonate, that it's safe, that it's not offensive? Or is it assumed that, because you did it using AI, it's safe already? So where's that safety check? Where's the quality assurance that needs to take place? Those are things that I want to hear about from the developers, in terms of how that's coming to market.

You should have those safety checks, but time and budget often eliminate them, and that's the reason people reach for AI. So those are some of my watch-outs when it comes to that.
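
(The "check and balance" Dawn describes can be made concrete. Below is a minimal, hypothetical sketch of a publish pipeline where generated content must pass an automated screen and a human sign-off before it ships; every name and check here is an illustrative stand-in, not any vendor's actual tooling.)

```python
# Illustrative sketch: gate AI output behind checks and balances.
# Every callable here is a stub standing in for real tooling or people.
from typing import Optional

def generate(prompt: str) -> str:
    return f"[generated creative for: {prompt}]"  # stand-in for a model call

def automated_screen(content: str) -> bool:
    """Stand-in for a safety / brand-suitability classifier."""
    blocked_terms = ["offensive-term"]  # placeholder list
    return not any(term in content.lower() for term in blocked_terms)

def human_signoff(content: str) -> bool:
    """The step time-and-budget pressure tends to cut: a reviewer with the
    lived experience to judge whether the content resonates."""
    return True  # stand-in for an actual person's decision

def publish(prompt: str) -> Optional[str]:
    content = generate(prompt)
    if not automated_screen(content) or not human_signoff(content):
        return None  # blocked: rework instead of shipping
    return content

print(publish("holiday campaign hero image copy"))
```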

 

John Koetsier: Dawn, talk about how you see generative AI developing over the next couple of years. There's a vast group of people who now have access, whether it's through open source models or a model you have a subscription to, in Discord or on the web or in an app or something like that.

Millions, probably tens of millions, are actively building stuff with generative AI right now. How do you see that developing over the next couple of years? And how do you want to impact that development, to make it something that you think is net positive globally?

Dawn Wade: There have to be checks and balances. Just like when you develop a new food or a new drug, there have to be checks and balances before you can, to me, send it out into the world, and we don't have that.

I think there are some government oversight groups being developed to do that, but part of that takes the fun out of it. Think of it this way: if there were 10 million solutions, 90 percent of them may be great, but there may be 5 percent, or 1 percent, that could really set us back.

And I think that's why we need oversight, because that 1 percent could really mess something up for us, whether it's between our country and another country. When you can take somebody's voice and their likeness and create a video that nobody with the human eye could tell is fake, what happens when we're debating or working with countries that we don't have the best relationship with, and it looks like our president is sending a message that may not be true? You can fake so many things, and you only have minutes to react.

So that's what I mean about checks and balances. As a person who loves my individualism, I don't necessarily love the thought of having oversight at that level, but knowing that it could be something very dangerous, that somebody could get hurt or killed: I think anything that crosses those boundaries, that could really hurt people, kill people, or represent something in a way it's not intended, requires that, to my understanding.

John Koetsier: Honestly, I think everything negative that you just wanted to avoid there is going to happen; it's almost inevitable.

I think there are large language models out in the wild that somebody who's a neo-Nazi will train on that kind of content. I think there are generative AI art models out in the wild that somebody will train on very racist ways of imagining what different people look like.

I think you will have weaponized AI and generative AI and deepfakes globally, and I don't know that there's any solution; we're going to have to invent some new technology. It's funny: I think Elon Musk has created cultural vandalism with Twitter, and lots of other challenges, but he wanted to invent a new AI that will determine the meaning of reality.

Great! Whose reality? I don't know how we're going to avoid this semiotic catastrophe, this dissolution of meaning, this destruction of truth, because I don't know how we can possibly escape it. I hope somebody smarter than me has a plan.

Stacey Wade: I feel like you and Dawn could get together; she reads the books on this. This is like her favorite subject matter.

I feel like you all could talk about this all day, but I agree: that's the part that scares me. Dawn, I kid you not, has been reading these books that speak to this, you know, grids, hostile takeovers, for a long time. When we go on vacation, that's one of the things: she always picks up a book.

John Koetsier: This is how she relaxes.

Stacey Wade: Yeah, a hundred percent. But you know, it was crazy even two years ago, when she would throw this at me; you'd kind of go, oh, it's a nice book. To see this come into reality, some of the things we'd have conversations about, to see some of them actually become real: it's almost like watching The Simpsons, how they actually predict the future. You're seeing it happen in real time.

John Koetsier: Dawn, save us. You're the technologist. What are you going to do?

Stacey Wade: Somebody has got to do it.

Dawn Wade: So, the thing is: if you can dream it, you can do it. And that can be scary when people don't have your best intentions at heart. So I don't know that anybody has a solution, because, like you said, there are thousands and millions of solutions that are open source, for somebody to take hold of and customize. And 90 percent of the time it's going to be for good, but there are going to be some that aren't for good, you know? And I'm just not looking forward to that in any way.

John Koetsier: It's a crazy, challenging world that we're moving forward in. And I guess I'm going back to Gandalf's wisdom in The Lord of the Rings: we have to live in the times that we're in. We have to do the tasks that we have.

Stacey Wade: No, I'm going to have this one. I'll do this one. So keep going.

John Koetsier: We do the best we can. Hopefully there will be some new technological solutions that will tag when something is artificially created. I know that we can detect it right now, but it's going to get to a level where the human eye, as you were saying, cannot detect it; the human ear cannot detect it. Right? We're going to need some solutions that tag something that is real, even as our definition of "real" changes technologically. It's a crazy world we're moving into, Stacey.
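
(The tagging John gestures at here is an active area: "content credentials" cryptographically bind a claim about a file's origin to its bytes, so any edit or substitution is detectable. Standards like C2PA do this with public-key signatures; the sketch below is a deliberately simplified, hypothetical stand-in that uses an HMAC with a shared key, just to show the shape of the idea.)

```python
# Simplified sketch of provenance tagging: bind an origin claim to media
# bytes so tampering is detectable. Real systems (e.g. C2PA) use
# asymmetric signatures; HMAC with a shared secret stands in here.
import hashlib, hmac, json

SECRET = b"publisher-signing-key"  # stand-in for a real private key

def credential(media: bytes, origin: str) -> dict:
    """Attach a verifiable claim about where a piece of media came from."""
    claim = {"sha256": hashlib.sha256(media).hexdigest(),
             "origin": origin}  # e.g. "camera" or "generated"
    tag = hmac.new(SECRET, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify(media: bytes, cred: dict) -> bool:
    """True only if the media is unchanged and the claim is authentic."""
    if hashlib.sha256(media).hexdigest() != cred["claim"]["sha256"]:
        return False
    expected = hmac.new(SECRET,
                        json.dumps(cred["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"])

photo = b"...raw image bytes..."
cred = credential(photo, origin="camera")
print(verify(photo, cred))          # True
print(verify(photo + b"x", cred))   # False: any edit breaks the credential
```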

Stacey Wade: It scares me a little bit. I mean, I'm excited about it, but I'm also scared at the same time.

Dawn Wade: Yeah. Think about where we were a couple of years ago, when they were saying "self-driving cars!" and I was like, oh, that's cool. Now it really can happen, but then you see one or two really bad stories, and it makes you doubt the efficiency: who tested this? How did that happen? So can you imagine something like this at such a humongous scale?

I think that within our generation, and who knows what it's going to be in 50 years, the landscape won't look the way it looks now. And that's exciting, but I think we have to have the foresight to get ahead of it, and figure out how to set boundaries, so that we don't mess it up for ourselves or for our future generations. Somebody has to have that brain power and that oversight to do it.

John Koetsier: Yeah, and government is painfully slow in all these things, so they're like 10 years behind, right?

We're going to have our generative AI "Senator, we sell ads" moment. It's going to be 10 years too late, but hopefully someone will figure it out, and the companies, big tech, will also do the right thing. We'll see. Who knows? Go ahead, Stacey.

Stacey Wade: We hope. We hope. We hope.

John Koetsier: Absolutely. I've got to say, this has been fun. It's been interesting; it's been great. I usually talk to people who are inventing new technology. You're dealing with the consequences of it and using it, and talking about the impact of that is also very, very useful.

Thank you so much for taking some time out of your day.

Dawn Wade: Thank you so much. And I appreciate this.


Apptronik has a totally different approach to building humanoid robots

humanoid robots apptronik

Who will win the race to have the world’s first usable general purpose humanoid robot? I thought I knew all the companies making general purpose robots:

  • Tesla
  • Sanctuary AI
  • Figure AI
  • Fourier Intelligence
  • Agility Robotics
  • Boston Dynamics

I was wrong … there’s probably a bunch I don’t know. But one that popped up as interesting is Apptronik. They’re based in Austin TX, they’re partnering with NASA, and they’re building Apollo, a 5’8” 160-pound robot.

In this TechFirst podcast, we chat with CEO Jeff Cardenas. And we learn that he has a completely different approach to building a humanoid robot than probably every other robotic company out there. Keep scrolling for more, or hit the Forbes post for my take on one part of what Cardenas said regarding robotics and international competitiveness.

(Subscribe to my YouTube channel here)

TechFirst audio podcast: Apptronik’s ‘Apollo’ humanoid robot

Subscribe on your favorite podcasting platform:

 

Transcript: humanoid robots, work, and the future with Apptronik CEO Jeff Cardenas

John Koetsier: Who will win the race to have the world's first usable general purpose humanoid robot? Hello and welcome to TechFirst. My name is John Koetsier. I thought I knew all the companies that are making general purpose robots, right? There's Tesla, there's Sanctuary AI, there's Figure AI, Fourier Intelligence, Agility Robotics, Boston Dynamics.

I'm sure there's a bunch more that I don't know, but I know that I was wrong, because one that just popped up, that launched this week, is Apptronik. They're based in Austin, Texas. They're partnering with NASA; I want to hear more about that. And they're building Apollo, a five-foot-eight, 160-pound robot.

Here to chat is CEO Jeff Cardenas. Welcome, Jeff.

Jeff Cardenas: Thanks for having me here. 

John Koetsier: There’s so much competition right now. What is going on? 

Jeff Cardenas: It's an exciting moment, I think, for robotics and the world as a whole; we've finally reached an inflection point. And it's funny, because when we first got started, everyone told us not to do humanoids.

And now everyone’s getting into the race. But it’s been exciting and interesting to see it all play out. 

John Koetsier: Everyone is doing humanoids, and that's a real challenge, right? There are pieces that are known and solvable: shoulders, necks, maybe, right? Walking with a bipedal robot is not necessarily the easiest thing in the world, but it's been done, and it can be done.

The really challenging parts are obviously the brain and the hands and fingers; we'll get into all that. Tell us about Apollo, which you've called the world's most capable humanoid robot.

Jeff Cardenas: Yeah, so Apollo is the result of many years of hard work and research and development. At Apptronik, we've built 13 different robots overall, with eight iterations on humanoids.

We started in humanoids when we were still working in the lab at the University of Texas at Austin, and started working with NASA back then for the DARPA Robotics Challenge. Two of my co-founders, Dr. Nick Paine and Dr. Luis Sentis, were on the Valkyrie team. And basically, Apptronik was created in 2016 to commercialize the work out of NASA.

Back then, Valkyrie was millions of dollars. It was 300 pounds, but it was one of the very first electric humanoid robots. So we felt like, hey, general purpose robots, more versatile robots, are going to be the future; electric is going to be the way to go; and we want to build a commercial version of this.

And so we've got it out into the world now, finally, seven years later, after many years of work to get here.

John Koetsier: It's great to hear that, because you launched with Apollo yesterday, right? And so the world wakes up and says: oh, what's this new company? What are they doing? And it's like that old phrase, "an instant success":

you just weren't around to see all the work that went into it, right? So you built a bunch of robots already. You built a number of iterations on the humanoid robot. Talk about that journey a little bit.

Jeff Cardenas: Yeah, for us, we always saw that this was really a technology problem more than a market problem.

I think a lot of entrepreneurs and other folks are looking to get into this space because they see the market opportunity. But for many years, the technology problems had to be solved to make it viable. So it's interesting to hear you say "walking, we can do that." That was not the case. Even five years ago, we had no idea how to do dynamic walking.

Boston Dynamics was really 10 years ahead of academia in terms of the type of walking they were doing, and for many years everybody was trying to figure out how to catch up and do what they were doing. And so, piece by piece, we had to solve these problems.

The way we viewed it was: this is a technical challenge, and we need to solve the key pieces that are needed to make this real. How do we get towards viable, commercial, general purpose robots? We basically just broke the problem down and solved it from first principles.

So we started with electric actuation for humanoid robots; we've done over 35 iterations on electric actuators. Some of those are small, medium, and large versions of the same family of actuators, but there's a tremendous amount of R&D there. Elon has talked about the need for actuators for these robots, and that's been our body of work.

That was my co-founder, Dr. Nick Paine; his grad school thesis was next-generation actuation for legged robots. And the electronics didn't exist. We needed more real-time communication, because you have a lot more sensing in these robots. And then certainly the software, which I'm sure we'll get into. But we started basically at the foundation, and we bootstrapped the company.

I mentioned it's funny that everyone's into humanoids now, because when we got started, everyone told us: do not build hardware, focus on the software, focus on the AI. And the problem was the robots didn't exist. And we're like, well, what are we going to put this AI on eventually, as it matures and develops? We don't have the robots yet.

So we had to build these systems. We bootstrapped ourselves, and what we would do is work for other companies. We've worked with several big automotive companies looking at things like humanoid robots, and we've helped them build their systems. We've built and delivered a variety of systems, including some of the robots for Sanctuary.

We partnered with Sanctuary in the early days, and they've been a great partner all along; we built their first prototype for them. And each time we built these robots, we would iterate, we would learn something new, all getting towards the robot we always ultimately wanted to build, which is Apollo.

The thing that I love about robotics is that, at the end of the day, you can talk about what a robot's going to do, but you have to show it in the real world. So our philosophy has always been show versus tell. We didn't have a need to get out there and say: hey, we're going to do this.

Our view was: well, let's do it, and then we'll show off what we do, and we'll let our work speak for itself. And so we really had our heads down over these years, just trying to get this stuff working, solving the technology problems, iterating pretty quickly as well. We've had robots walking for seven years.

We've built full systems in three months. So it's not that we've been taking our time doing this. We've actually been cracking these problems, and we're at the point where we've met the threshold where everything is good enough, which I think of like the personal computer in 1982, right?

It's the beginning: a lot of things had to build on each other and converge for this breakout moment to happen, but we've got that work behind us. And the robot we've put out there, we're really proud of, and we're excited to see where it goes.

John Koetsier: I love that approach. I really love that approach, because you didn't put a guy in a suit, walk up on a stage, and say "here's our robot."

Right? That has a couple-hundred-year history; I'm glad you didn't do that. Super interesting in terms of the approach. Before we get into the details of the robot, give us the high-level specs. I mentioned five foot eight, 160 pounds. I think it's a five-hour battery life? Is it a swappable battery?

Is it hook-it-up-and-charge? How fast can it go? What can it do?

Jeff Cardenas: Yeah, so it's 5'8", weighs 160 pounds, it can lift 55 pounds, and it has a four-hour battery initially, and it's swappable. So we're targeting 22 hours a day, 7 days a week of uptime. It can also be tethered, and opportunity-charged, like what you see in autonomous mobile robots.
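
(A quick back-of-envelope check on what those numbers imply. The pack runtime and uptime target are from Jeff's answer; the recharge time is my assumption, not a published spec.)

```python
# Back-of-envelope: swaps and spare packs implied by a 4-hour swappable
# battery and a 22-hours-a-day uptime target. Recharge time is assumed.
import math

runtime_h = 4    # stated pack runtime
uptime_h = 22    # stated daily uptime target
charge_h = 2     # assumed recharge time per pack (hypothetical)

swaps_per_day = math.ceil(uptime_h / runtime_h) - 1   # 5 swaps
# Under these assumptions a spare charges faster than the installed pack
# drains, so one spare per robot is enough to rotate indefinitely.
spare_packs = math.ceil(charge_h / runtime_h)         # 1 spare

print(f"{swaps_per_day} swaps/day, {spare_packs} spare pack(s) per robot")
```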

It is fully electrically actuated. As I mentioned, we've done a tremendous amount of iteration in that space over the years. So we think we've got a really unique solution for performance and cost, right? Performance at cost. There's a trade-off where you can purely focus on performance, which, to me, Atlas is an amazing machine.

The Boston Dynamics robot is like a Formula One car. It's really performance-optimized, but Atlas is very difficult to mass-manufacture; it's got custom hydraulics and other things in it. So what we've really focused on is: how do we get performance at cost? How do we find the right trade-off and ultimately build a commercial product that we can build for less than $50,000, which is our target, while it still has the performance that's needed to do the work we need it to do?

Then, through COVID, we learned a lot about the supply chain. So a lot of the ideas that are now in Apollo are about getting around supply chain constraints, so that we can really scale this thing up into big volumes, and we don't have any single-source vendors.

In terms of what it does: initially, we're focused on what we call gross manipulation, as compared to dexterous manipulation. We've learned, because we've built a lot of robots over the years, that dexterous manipulation is very difficult. We have a ton of respect for folks that are going after that in this space.

But it's a really difficult problem, and the exciting thing for us at this juncture is that we don't have to solve it to get these things out into the world. It turns out there's a huge labor shortage in logistics, and a lot of those tasks are just moving boxes or totes from point A to point B. That's something we know how to do now.

And so that's where we're going to start. But then that's the beauty of a general purpose robot: it's a software update away from doing something new. And so we'll continue to get more advanced as we move into this and get them out into the world.

John Koetsier: Super interesting to talk about hands, and gross manipulation versus dexterous.

One robot CEO that I chatted with before said the robot is basically a hand delivery mechanism, because the hands do all the work. And he said, actually, creating a robotic hand with the capability of the human hand is beyond our capability right now. It's not just beyond any one company; it's beyond human capability right now.

There might be some different opinions on that, we'll see, but that's where maybe half of the degrees of freedom that a robot might have exist. So it's a really challenging thing. And then, of course, wear and tear: all those little motors in the hands, skin, whatever you use for skin. Super, super hard.

So I see that challenge. You talked about a software update, which is amazing. That's incredible. Is there a possibility of a hardware update? Let's say three years from now you crack human hands, maybe not quite as good as these, because you want to hit your $50,000 target: 90 percent, 70 percent, whatever is enough for many manufacturing-type jobs.

Can you, like, take a hand off and plug a new hand in?

Jeff Cardenas: Yeah, so Apollo is modular. There's this big debate of wheels versus legs, too, from the traditional folks in automation: what do you need legs for? We can use wheels in all these applications. And what we've done with Apollo is take everything we've learned, because we've been building these robots with customers over many years.

We're able to take all of that learning and inject it into Apollo. Some people are going to want these things on wheels. There's a huge number of advantages to legs, and we think legs will win the day overall. But there's this problem with legs, which is that the robots can fall over. And so in some cases you can have wheels.

The beauty of robots is that you can have your cake and eat it too, in that you can build them to be modular. So Apollo is modular at the torso: if you want to put it on wheels, you can throw it on wheels. We think we'll demonstrate that legs will be the most versatile platform long term.

The reason we know about the challenges of wheeled bases is because we've designed them. We've deployed versions of humanoids on wheels, and we've learned from that. And the same thing is true with the end effectors. I'm sure that's Geordie saying that about hands, and I agree that long term the humanoid needs hands. But in the near term, there are many applications where you don't require a full five-fingered hand, and you can do things with a one-degree-of-freedom hand.

There's a whole range of things you can do. There's a whole range of things that robots do today with pincher grippers. And so you can expand; you don't have to solve all the problems at once.

I have a ton of respect for Geordie. I've worked with Geordie over many years; he's a visionary. But we've just taken a different approach in terms of how we've thought about that. And we want to partner with folks like Sanctuary. They've been a big partner with us. They can put their hands on a robot like Apollo, and we can work together there as they start to crack the dexterous manipulation problem.

So yeah, it's modular at the chest. It's modular at the end effectors. It's also modular at the head, in terms of putting different sensor payloads on it. We have a standard sort of camera-based vision system, but there are also debates about LIDAR: do you need LIDAR or not?

Our vision approach doesn't need LIDAR, but in some cases, when you start to put these robots outdoors, if you want to add LIDAR, you can. This is something I think Boston Dynamics has done a really good job of with Spot: they created the ability to put different mission payloads on the back of Spot. That's something we learned from along the way, and part of what's designed into Apollo.

John Koetsier: Love it. And as you hinted, Geordie is Geordie Rose. He's the CEO of Sanctuary AI, and the former CEO of a quantum computing company that sold a $15 million quantum computer to Google, and it's still around. There are so many places to go here, but I do want to talk about the brain. That's really challenging, right?

How are you building intelligence into your robots? Is it pre-programmed maneuvers? Is it versatility with a certain level of intelligence? Talk about how you're doing that.

Jeff Cardenas: Yeah. So I think the long-term goal is to get towards more and more intelligence overall.

But in terms of AI and intelligence as a whole for humanoids, you can really break it down into two buckets. The first bucket is physical intelligence: coordination, hand-eye coordination, the ability to balance. Walking is part of physical intelligence.

The other side is cognitive intelligence.

How do you make decisions? How do you reason about the world? How do you abstract ideas, things like that. What I'd say is that we've really focused on building from the bottom up, and there are different approaches. You can go from the top down, as in, start with the intelligence and think about how to build a machine around that. We've gone from the bottom up, which is: start with the actuators, the motor controllers, the electronics, really the basic building blocks, and then build up into intelligence.

My view was that you want to build the most capable platform you can possibly build, and then you can think of these intelligences as software that you can put on top of the robot. There are people who disagree with that and say, well, in order to get to full intelligence, you need deeper integration. And I think we'll see, but we've really focused on this physical intelligence.
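
(To make "intelligence as software on top of a capable platform" concrete, here's a minimal, hypothetical sketch; the interfaces and names are mine, not Apptronik's. The point is that the bottom-up layer stays fixed while the "brain" is swappable.)

```python
# Hypothetical sketch: a stable hardware-platform interface with
# swappable intelligence modules layered on top.
from typing import Protocol

class Platform(Protocol):
    """Bottom-up layer: actuators, sensors, balance controllers."""
    def read_sensors(self) -> dict: ...
    def command_joints(self, targets: dict) -> None: ...

class Intelligence(Protocol):
    """Swappable top layer: maps sensor state to joint targets."""
    def decide(self, sensors: dict) -> dict: ...

def control_loop(robot: Platform, brain: Intelligence, steps: int) -> None:
    # Upgrading capability is a software swap, not a hardware redesign:
    # any object satisfying Intelligence can drive the same platform.
    for _ in range(steps):
        robot.command_joints(brain.decide(robot.read_sensors()))
```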

The exciting thing for me, and you had a question that maybe we'll get to, is: where are we at? I think of this as really an evolution of what's already being done out in the world, and we don't have to solve new problems to get humanoids out into initial applications and show utility. For humanoids to be fully realized, yes, you need much higher levels of intelligence than we have today. But we have a lot of the building blocks already.

For example, if you think of the evolution of robotics: in 2004, collaborative robots came out. Those are human-safe robots.

By 2010, compute got good enough and batteries got good enough that we could have mobile robots, and we started to do things like SLAM and navigation. By 2016, machine learning came onto the scene, and we could do intelligent grasping. So what we've done is build on all these things that we've seen work in production, really build from the bottom up and integrate those things together, taking maybe a more conservative approach than some people are taking: basically, use what we know works, use what we know we can deploy into the world today.

And then we can always add these other, more difficult R&D problems later on down the road. And so we can dive in deeper wherever you want to go.

John Koetsier: It's fascinating to see the different approaches, and that's the beauty of the sort of free-market innovation system that we operate in, in the Western world at least: you do have those people who are coming top-down, who want to build the intelligence first, and the intelligence will do everything.

That's a risky bet. It's an amazing bet if you make it and you win, because if you win, you've solved quote-unquote everything, right? But if you don't win, you end up with an expensive boondoggle that doesn't accomplish anything. It's all or nothing, because you can't have a robot out in the wild, maybe making a sandwich or slicing something up, using a tool that is potentially dangerous.

Your approach in software mirrors your approach in hardware, which is starting from the ground up. What can I do? What do I know I can do? What do I know I can do today? That seems to be a very pragmatic and practical way of doing it. I think that's super interesting. I do want to go deeper into what this means and what it looks like, but maybe before we do that: you're doing this in Austin, Texas.

And you feel that is significant. Why?

Jeff Cardenas: I think if you look, there are people out there saying there are going to be more humanoid robots than people one day. Like I said, it's funny to me to have that out there when everyone thought these things weren't viable; even five years ago they were novelties.

And I think that's exciting. I'm not sure that they'll all be humanoids, but I think there will be a lot of humanoids, and I think it's the most versatile platform you can build. But the reason I think Texas is important is: where are we going to get all the robots we need?

If you look at the world today, there are hundreds of humanoids, maybe, right? There are not very many of these systems. So how are we going to go from hundreds to thousands to millions to maybe billions? I believe that Mexico is going to play a big role in that for North America, if you look at Mexico relative to other countries where we're doing manufacturing today.

It's got a lot of advantages. It's about a third the cost of labor, depending on how you measure it. It's geographically located really close to the U.S. market: you can get from Monterrey to Texas in 2.5 hours, and anywhere in the U.S. in 24 hours. And they have the skill sets to be able to pull this off.

And so I've always felt like the Texas-Mexico corridor was going to be one of the most important manufacturing corridors in the world over the coming decades. This is something I was saying coming out of grad school: we had two key ideas when we started Apptronik.

One was that robots had to become more versatile. And two was that somebody had to be building robots domestically, here, to serve the U.S. market; we didn't have any major domestic OEMs. And we got started. So if you agree with the premise that, hey, we're going to need domestic manufacturers long term, the question is: where is that going to happen?

Typically, it's been on the East or the West Coast; the big hubs for robotics have been California or Boston. But I think Texas actually has a number of unique advantages over those hubs. I really believe that Texas has much better adjacencies than those other places, and so I always felt like Texas was the place to do it.

We're already producing gearboxes and a lot of the heavy machinery; we've been doing a lot of that for the energy industry. And as you look towards what's coming next, there are a lot of people in Texas asking: okay, as the oil and gas boom ends, which it will at some point, where do we apply all of this industrial base that we've already built?

And for a number of reasons, I think robotics actually makes a ton of sense for that. And that's why I think Texas is going to be important.

John Koetsier: Well, it is interesting. Cost of living is certainly less; there's proximity to Mexico. I was going to make a joke: what is this access to labor that you're talking about? Aren't the robots going to build the robots? But I'm sure there are going to be some tricky jobs for humans in all that stuff. Let's look forward a little, and let's say we're in a future where we have tens of millions of these, and we're progressing. How does this change our world?

How's it changing our economy?

Jeff Cardenas: I think it fundamentally changes the way that we live and work. And the reason I think that is because, as humans, our most valuable resource is time, and our time here is limited. You had great thinkers in the last century; John Maynard Keynes famously predicted that we would have a 15-hour work week.

So what I think changes is that instead of doing things that we have to do, that somebody's just got to do, we can now have machines that do that for us. And what that does is free us up to spend time on things that we really value. Why do we spend more time at work than we do with our families?

The answer to that today is: well, someone's got to do it. We've got to keep the economy running and going. We have to provide goods and services to our fellow man to keep all this moving. But what robotics has the potential to change is that equation. What if the cost of goods and services dramatically falls? They'll basically slope towards the cost of the raw materials, as the cost of labor continues to go down.

So goods and services could become much cheaper and much more abundant than they are today. And that frees us up to spend time in the way that we want. There's this interesting quote that I heard: what did Darwin, Galileo, and Newton all have in common? They were all very wealthy, so they had time to think, contemplate their existence, and think about these higher-level ideas.

And today you have people that are stuck in a cycle of working all the time to make ends meet. I think an optimistic version of the future is that you start to free people up. They're able to think about things like their own health, about taking care of each other. We start to fix the healthcare and education systems.

And ultimately we evolve as humans. I think that, applied in the right way, it could be a really positive thing.

John Koetsier: Super interesting. Okay. So you've launched the robot; you've launched Apollo. What can somebody buy or rent today?

Jeff Cardenas: So today we're working on pilots. My whole philosophy, a core value for us at Apptronik, is show versus tell.

So what we've done is build a demo center here on site at Apptronik, where we're mocking up the use cases that we're looking to deploy Apollo into. For the remainder of this year, we're basically signing up pilot customers, and some of the marquee customers in the world have already signed up. We're doing these on-site proofs of concept through the rest of this year, where we're demonstrating it.

What I tell the partners we work with is: if I can't do it here at my facility, I can't do it at your facility. So make sure that you're comfortable with the performance of the robots here, and then we'll get them on site early next year. So next year we start with the initial fielded pilots, and we've fielded a lot of systems up to this point.

One of the things I tell people when they come to Apptronik, looking for all the robots: we have a lot now, but for many years, every robot we ever made, we sold, because that's how we stayed alive. That was our business model. We were entirely funded on revenue for the first five years.

We only just raised money in the last couple of years. But next year, we'll get them out into the world. We're not putting out pricing just yet, but a big part of what we've cracked is the ability to make these things affordable, and that's going to be a big part of our value proposition as we move ahead.

John Koetsier: You appear to have been remarkably capital efficient. I know many of the other entities, companies, departments that are building or trying to build general purpose robots. They’ve raised 100 million dollars. They’ve, they’re part of a Trillion dollar company or a multi-billion dollar company.

You’ve been scrappy, you’ve been bootstrapping, you recently raised what, 14, $15 million which is interesting. But that all seems to accord with Connie, your philosophy of start small, build what we know and even to the, to your go to market strategy of bringing people in, seeing what can do super, super interesting.

And I really liked that actually at 50, 000, assuming you can get there. Cause that’s your goal. It’s not there yet. And you’re not in mass manufacturing yet. I’m assuming it’s 50, 000, that’s a very interesting price point, because if you look at the kinds of jobs that you’re going to place us in and logistics and stuff like that, you’re spending probably a bit more than that.

If you look at the entire cost of having an employee, with benefits and other things like that, that sounds interesting. Do you know how you're going to bring them to market? Are you going to sell them outright, or are you going to have a SaaS solution? What's your thinking there?
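
As a rough sketch of the break-even arithmetic John is gesturing at here: a minimal Python calculation comparing a $50,000 robot, amortized over its life, against a fully loaded employee. Every figure besides the $50,000 price point is a hypothetical assumption for illustration, not an Apptronik or industry number.

```python
# Back-of-the-envelope: a $50,000 robot amortized over its life vs. a
# fully loaded human employee. All figures besides the $50,000 price
# are hypothetical assumptions for illustration only.

ROBOT_PRICE = 50_000              # price point discussed above (USD)
ROBOT_LIFETIME_YEARS = 5          # assumed useful life
ROBOT_ANNUAL_SERVICE = 10_000     # assumed yearly maintenance/support (USD)
ROBOT_HOURS_PER_YEAR = 2 * 2_000  # assumed two shifts of ~2,000 hours each

WORKER_WAGE = 20.0                # assumed base hourly wage (USD)
LOADED_MULTIPLIER = 1.4           # assumed benefits/overhead multiplier

robot_total = ROBOT_PRICE + ROBOT_ANNUAL_SERVICE * ROBOT_LIFETIME_YEARS
robot_hourly = robot_total / (ROBOT_HOURS_PER_YEAR * ROBOT_LIFETIME_YEARS)
worker_hourly = WORKER_WAGE * LOADED_MULTIPLIER

# With these placeholders: ~$5.00/hr for the robot vs. ~$28.00/hr loaded
print(f"Robot: ~${robot_hourly:.2f}/hr vs. employee: ~${worker_hourly:.2f}/hr")
```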

Jeff Cardenas: Yeah. So, once again, it's RaaS, right?

John Koetsier: Robots as a service, not software as a service.

Jeff Cardenas: Robots as a service. And one of the exciting things about why this is feasible now is that we already have business models that exist for selling mobile robots. That didn't exist before the AMR market took off, but now there are lots of companies selling mobile robots to these exact same markets in logistics.

And so customers now know how to buy them. The two ways they're buying AMRs today are either robots as a service or CapEx, typically with a SaaS component on top. Some of the larger companies want to own their own fleets, they want volume discounts and other things, and so they'll buy them outright. But I think largely what you'll see in the early stages of the humanoid market is a robots-as-a-service model, where they want to try them out.

They want to see how this works. There are people who are worried about technical obsolescence, right? How quickly is this going to mature and develop? Am I ready to buy a fleet yet, or do I want to wait some time? So for a number of reasons, I think robots as a service is going to be important for this.

And now, as an industry, we know how to do that. There are third-party financing groups that have already been set up around this. It's now something that the market understands, which wasn't the case five years ago.
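
To make that trade-off concrete, here is a minimal sketch, in Python, of how a buyer might compare the two models Jeff describes: a robots-as-a-service subscription versus an outright CapEx purchase with a monthly SaaS component. None of these numbers come from the interview; they are placeholders chosen only to show where the crossover sits.

```python
# Hedged sketch of the two AMR buying models Jeff describes:
# robots-as-a-service (RaaS) vs. CapEx purchase with a SaaS component.
# Every figure below is a made-up placeholder, not a quoted price.

def raas_cost(months: int, monthly_fee: float) -> float:
    """Total spend under a robots-as-a-service subscription."""
    return months * monthly_fee

def capex_cost(months: int, purchase_price: float, saas_fee: float) -> float:
    """Total spend under outright purchase plus a monthly SaaS fee."""
    return purchase_price + months * saas_fee

RAAS_FEE = 3_000.0    # assumed monthly RaaS fee (USD)
PURCHASE = 50_000.0   # assumed purchase price (USD)
SAAS_FEE = 500.0      # assumed monthly software fee (USD)

for months in (12, 24, 36):
    print(f"{months:>2} mo: RaaS ${raas_cost(months, RAAS_FEE):>9,.0f}"
          f" vs. CapEx ${capex_cost(months, PURCHASE, SAAS_FEE):>9,.0f}")

# With these placeholders, RaaS is cheaper on short horizons (and hedges
# the obsolescence risk Jeff mentions); CapEx wins once a fleet is kept
# long enough to amortize the purchase, here past the 20-month mark.
```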

John Koetsier: I keep thinking about additional questions. We’re having a great conversation.

I want to ask this one. Where do you situate the United States, and maybe North America a little more broadly, in terms of global innovation in humanoid robots? Because you're obviously U.S.-based, and there's a bunch of others … Figure.

We’ve talked about the one in Vancouver, right? Geordie Rose’s company. 

We’ve talked about Boston Dynamics, right? So yeah, Sanctuary is the one in Vancouver. I visited Robot Island. It was sort of the name for it in Denmark. It’s gotta be about three years now, just pre COVID where there’s been, there has been for about a couple of decades, a global concentration of companies in automation and robotics is Odense Denmark.

And I haven’t seen anything come out of there. And they’ve been instrumental with some of the big manufacturing companies globally with large scale robots and automation stuff. But I haven’t seen a humanoid robot come out of there. Why am I seeing so many in the States and where do you situate the U. S. in terms of global innovation for humanoid general purpose robots? 

Jeff Cardenas: Yeah, so this is a topic that is important to me, because I think the race is on. There's a lot of interesting stuff coming out all over the world. I think one of the reasons you see the U.S. leading right now is because the U.S. government invested so much money in the R&D that was required to make this happen. So whether it's Figure, Boston Dynamics, or us … many folks have teams that have their roots in something called the DARPA Robotics Challenge. For the DRC, DARPA injected tens of millions, I think it was over $100 million total, to really advance the state of the art for general purpose robots.

And that was 2013 to 2015.

And the seeds were planted back then; we're just now seeing the fruit being borne of that investment. That's really what made Boston Dynamics big: DARPA funding in the early years. Atlas came about for the DRC, so the very first Atlases were designed for the DARPA Robotics Challenge.

And that’s really what spurred a lot of the innovation. There has not been as much government funding in this space since the DARPA Robotics Challenge. And I think that if the U. S. wants to continue to lead, the government’s going to need to step in a big way and really inject more money into it.

But this is the same thing that happened with autonomous driving. There was something called the DARPA Urban Challenge, and there were a couple of DARPA challenges that really seeded the technology, moved it from the lab, and gave it the first nudge out into the commercial space. And then companies were built out of that.

And so Jerry Pratt, who's over at Figure … he had a big role in the DARPA Robotics Challenge and did great work there. And Boston Dynamics certainly came out of that. In terms of who's leading in the world and where we go from here, I think China is making a big push in humanoids in particular.

And the government there is really stepping up to make it happen. So I really want to see the U.S. government respond. This is a race that I think is important long term (how should these be deployed?), and I think it's an area where we can lead, though there are other great countries doing amazing things as well.

There’s great work happening in Korea, certainly in Japan as well. And then all across Europe the stuff coming out of ETH Zurich and groups like antibiotics, they’re doing really great things though, not in humanoids and more versatile, cutting edge next generation robots. 

John Koetsier: It’s important.

It is really important. Hey, because if you win here, you win. We talked about the cost of information approaching zero. We’ve talked about the cost of software approaching zero because replication is essentially free, right? If you achieve a working and capable general purpose humanoid robot, the cost of labor, as we’ve already talked about, starts to approach zero as well.

And all of a sudden, that opens up a ton of capability: manufacturing cheaply, onshoring manufacturing, and taking on jobs you simply couldn't pay for before, maybe environmental reclamation, maybe public works projects. You couldn't afford them before, and all of a sudden they become affordable and desirable, along with everything else a society wants to do.

And if you're concerned about the declining population in some of the older nations of the world, Europe, Japan, those sorts of places, China as well, that's all critically important.

Jeff Cardenas: Yeah, and I have an interesting story about that, because the U.S. actually invented the very first industrial robot. It was invented in the late '50s and went into a General Motors factory in the early '60s. It was called the Unimate arm, and the company that built it, Unimation, actually ended up folding in the '80s.

I’ve heard a variety of different stories on what happened, but long story short, they didn’t. They didn’t keep getting funded. They didn’t keep getting backed in a critical time. General Motors, I think, was involved somehow and pulled out well. So the U.S. invented the very first industrial robot and effectively lost the first wave of industrial automation.

Of the big four that ended up producing all the industrial arms, two were Japanese (Fanuc and Yaskawa), one was Swiss (ABB), and the other was German (KUKA). I think this is important for policy makers and others, because this next wave dwarfs the first wave in impact and size, so it's important to get it right.

But it’s an interesting story and something that I try to tell everyone that will listen when, whenever I get a chance. 

John Koetsier: Jeff, this has been a wonderful conversation. Thank you for taking the time.

Jeff Cardenas: Thank you very much.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: