Giving AI a body is now cheap. I’m not just talking about humanoid robots, and not even just robots. Any physical manifestation in our world with an AI connection is now relatively cheap and easy to create and ship. Physical AI is a tsunami about to hit.
But … are we ready for a world where everything is smart? Not just phones and apps, but buildings, robots, and delivery bots rolling down our streets? Windows … doors … maybe even towels. And don’t forget your shoes.
In this episode of TechFirst, I talk with Mat Gilbert, director of AI and data at Synapse, about physical AI: putting intelligence into machines, devices, and environments so they can sense, reason, act, and learn in the real world.
Check out our conversation below. As you hit play, do me a favor and …
- Subscribe to my Substack
I’ll unpack our conversation in depth, summarize the insights, and contextualize the findings on Substack. I’ll also (soon) start releasing shows on Substack first.
- Subscribe to my YouTube channel
It may start coming later than Substack, but you’ll always be able to catch TechFirst episodes on YouTube.
Physical AI: summary of key insights
10 key insights in this podcast with Mat Gilbert …
- Hardware is no longer the blocker
Costs for key components like lidar and batteries have dropped dramatically, making it economically viable to “give AI a body” and deploy robots and smart devices at scale.
- Physical AI is already delivering ROI in industry
Manufacturing, logistics, and warehousing are leading the way, with examples like Amazon’s million-plus robots driving roughly 25% efficiency gains, and exoskeletons at Ford reducing strain and injury for workers.
- The real magic is in the software layer
The biggest advances are in how we coordinate, orchestrate, and update fleets of devices and robots – turning hardware into evolving platforms, much like Teslas that gain new capabilities via over-the-air updates.
- Edge and cloud AI must work together
Physical AI systems need fast, local intelligence for safety and responsiveness, combined with heavier cloud models for planning and learning. Small language models on the edge plus larger models in the cloud form a pipeline rather than a single “brain in the sky.”
- Humanoid robots are in a high-stakes commercial race
Different players are making different bets: Boston Dynamics on agility and motion; Figure AI on mass-producible, factory-ready humanoids with OpenAI brains; Tesla on solving general intelligence first and then deploying it into Optimus. The winner will be whoever bets correctly on the real bottleneck.
- Homes are the hardest environment
Compared to structured factories, homes are messy, dynamic, and unpredictable (kids, toys, clutter). Passing the “coffee test” – walking into any home and making a cup of coffee – is a rough benchmark for when general-purpose humanoids are truly ready. That is likely at least 5–10 years out.
- Physical AI is expanding into new verticals
Beyond logistics and manufacturing, we’re seeing progress in agriculture (like laser weeding robots), healthcare (hospital supply robots), and elder care (robots that provide companionship, reminders, and assistive services), often amplifying rather than replacing human caregivers.
- Safety is fundamentally different in the physical world
Digital AI mistakes are usually reversible; physical AI mistakes are not. That means multiple layers of safety, deterministic guardrails around probabilistic models, conservative shutdown behaviors around humans, and serious attention to cybersecurity as robots and smart devices become new attack surfaces.
- Human–robot teaming is the next frontier
To work well with people, physical AI systems must be predictable, trustworthy, and able to communicate intent clearly. Designing robots and smart environments that people are comfortable around is as much about interaction and trust as it is about raw capability.
- Consumer-facing physical AI is coming into view
The first big wave was hidden in warehouses and factories. The next wave includes service robots in restaurants and airports, delivery robots on sidewalks, drones, and smart interactive spaces in venues and museums. Physical AI is about to become something regular people see and interact with every day.
Transcript: giving AI a body
This is lightly edited by ChatGPT to clean up transcription errors …
John Koetsier (00:01.517)
Are we ready for a world in which our thermostats negotiate with the power grid and our beach towels have a chip, a sensor, maybe a radio?
Hello and welcome to TechFirst. My name is John Koetsier. We’ve spent the last couple of decades embedding intelligence into the digital world. What if it wasn’t limited to that? What if the next big shift is putting intelligence into everything, everywhere? Machines, devices, buildings, products, tables, houses, all this stuff.
So they not only sense, but they can reason, they can act, and they can learn. It’s kind of IoT on steroids.
To chat, we have Mat Gilbert. He’s director of AI and Data at Synapse. Hello, Mat. We’re talking about a world, a future that’s coming faster than most of us can anticipate. And we see a lot of stuff going on, humanoid robots, all this stuff. Welcome to the show.
Mat Gilbert (00:54.110)
Absolutely, thank you. It’s great to be here.
John Koetsier (00:56.669)
Awesome. We’re going to have a great conversation. We’re going to talk about physical AI—where it is, where it’s going, where we see the biggest opportunities: robots, humanoids, factories, everything.
Let’s start here. We obviously see the leading edge of the wedge in smart objects or things in our kitchen, maybe our home, or our office or factory. How do we make physical AI cheap, robust, and long-lasting rather than expensive, isolated, and fragile?
Mat Gilbert (01:30.325)
Yeah, I think that’s a great question. One of the things that’s really driving the deployment of physical AI is that hardware costs are just coming down, both inside and outside the physical AI space.
Things like lidar sensors, which we often use as the eyes of autonomous systems and robotics, have dropped from $75,000 a decade ago to under $1,000 now for essentially the same sensor. You think about mobile autonomous systems; you think about power—battery packs are down about 85% in cost over the last decade. So all of a sudden, giving AI a body is becoming economically viable at scale.
John Koetsier (02:11.696)
Mm-hmm. What were the figures on that lidar again? Two hundred thousand to what?
Mat Gilbert (02:17.165)
Seventy-five thousand down to sub-$1,000.
John Koetsier (02:21.356)
Okay, okay, so a factor of 75; it’s 75 times cheaper. That’s almost one of those 2,000% or 1,400% reductions we’ve heard about from some political leaders, but we won’t go there.
Where are we in this era of physical AI? Is it the first inning? Is it the second, third? Have we even started the game yet?
Mat Gilbert (02:43.117)
Yeah, I think we’re seeing active deployments today that are showing real ROI. Manufacturing and logistics have been very much the tip of the spear in terms of live deployments that have moved beyond pilot to at-scale.
So you think about companies like Amazon. Amazon Robotics—Amazon now claims they have over a million robots deployed across their facilities. That’s driving a 25% boost in efficiency, and that’s happening today. You’ve got companies like FedEx with very similar types of deployments.
So we’re really seeing the first large-scale deployments of physical AI that are demonstrating good ROI today. It’s already here. It’s not science fiction. It’s only going to accelerate in my view as we go forward. It’s a really exciting time, I think, to be looking at the space, and it’s a really good time to be thinking about how it’s going to change pretty much every aspect of the world we operate in over the next five to ten years.
John Koetsier (03:41.083)
I was just chatting with a director at Amazon this morning, actually, and he said they’ve seen a 25% efficiency increase due to the automation, robotics, and software they’ve implemented.
That’s a big part of it, right? You’ve got the robots going all over the place in the warehouse or the logistics center, but you’ve got to have smart software that tells them where to go and what to do, and that sequences things logically so you can quickly pick common orders with different items.
So there are big components of this where there’s physical AI and there’s actually old-school digital AI—some centralized, some localized on the edge, and other things like that.
Where are you seeing most physical AI right now, and where do you see the most opportunity in the future?
Mat Gilbert (04:36.045)
I think outside of logistics and manufacturing, you see the classic robotics automation jobs—tasks that are dull, dirty, or dangerous, the classic “three Ds” of this space.
There are some really good examples in maintenance applications. You think about a company like Gecko Robotics. They have AI-powered robots that can climb the walls of industrial-scale boilers and pipelines and climb into areas that are essentially really unsafe for humans. The data they gather then feeds back into their technicians’ insights, so when they’re undertaking tasks like predictive maintenance or preventing failures, they have all this additional data.
So again, it’s a different but still very industrial-focused application.
Over at Ford, they’re deploying robotic exoskeletons, which help to reduce the physical strain and injury for their workers who are doing very repetitive physical tasks.
It’s across a broad range of industries, and we’re seeing a ton of different types of applications as well. You’ve got the classic self-driving car, but that space is expanding to autonomous construction machinery—large, heavy industrial equipment—right through to these very tiny, gecko-sized robots that run around these boilers. So it’s touching a lot of different industries, and we’re seeing a lot of startup activity in a ton of different verticals, which is very exciting.
John Koetsier (06:00.362)
It’s super interesting. I hit on the software layer earlier, and that’s pretty critical, right? Because you can have all this smart stuff all over the place, but if it’s not connecting, not talking to each other, not working together, not organized, you end up with a mess, don’t you?
Mat Gilbert (06:19.373)
Absolutely, yeah. I think this is the real inflection point. Not only has the hardware come down in cost—fundamentally, the way the hardware operates hasn’t evolved a huge amount—but all of the key advances are in the software layer.
You’re seeing this real collapse of both the hardware and software engineering into this smart physical-AI system.
And I think one of the other really interesting things that enables is this kind of closed loop. If we look at somebody like Tesla as an example: when you buy a Tesla, it’s not the finished object. The features you have on day one aren’t the features you’re going to have on day three, day five, or a year down the line. It’s essentially an evolving software platform on wheels.
You’ve got car sensors collecting real-world data. That data is then used to train and improve the AI. The AI is deployed back across the entire fleet of Teslas via over-the-air updates. And you’ve got this really dynamic, recurring-revenue-type service relationship, which is new to the car industry beyond dealer relationships. This is a new service model.
And I think we’re starting to see that with robots-as-a-service as well. Again, you’ve got the software and the hardware working together, potentially being maintained and updated by a third party, but deployed and solving your problems in your organization.
John Koetsier (07:40.914)
Yeah, yeah. I want to talk a little bit about where the intelligence lives. And we’re going to talk about humanoid robots in a bit, but I was talking to a couple of different manufacturers of them, and they’ve got a layered approach.
You need to have some local intelligence because you want quick response times, right? So you might be running a small-scale LLM to understand a few things or some other AI engines to do that. Some are building in quite sophisticated ones, because if a robot has to send a message up to ChatGPT or OpenAI every time it needs to make a decision about where to go or what to say or what to do when there’s somebody in its space or something like that, you could have these five-second lapses before you actually do something.
How do you see that evolving between cloud AI and edge AI?
Mat Gilbert (08:34.815)
Yeah, I think the evolution will mirror the Internet of Things. If we look a decade ago, there was this goal that we were going to connect everything to the cloud and it would become smart, it would become intelligent. And the reality is that that didn’t really happen. We just ended up with data that now existed in the cloud, and no real smart device intelligence around it.
I think now we’re connecting the sensors and the data up to AI models, and we’re starting to see some of that intelligence and reasoning happen in the cloud.
But similarly to the IoT space, where processing initially happened in the cloud and then, as hardware came down in cost and increased in capability, more of that processing was pushed to the edge. That has upsides if you’re an organization trying to reduce your cloud costs: the more you can run on hardware at your site, the more you can drive your cloud costs down.
So not only is there the technical driver—if I need very quick, real-time processing and reaction, I’m probably going to put that on the edge—but I’m also maybe deploying sensing and processing on the device, maybe at the edge, on-premise, not up in the cloud, but still connected locally. And then there’s a cloud piece as well for those longer-latency tasks.
So you’ve got this very hybrid model. It’s different for every application. Every application has very different needs and requirements, and that’s what drives that architecture.
But I think we often think today that everything is cloud-based, particularly the ChatGPTs of the world. More and more, as you point out with small language models, they’re capable; they run on the edge. And you can think of these things more as a pipeline. There’s potentially an edge classifier happening on the edge, feeding a small language model, and then maybe those insights are going up into the cloud, and you’re seeing those higher-level, longer-running decisions happening up in the cloud.
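The tiered pipeline Mat describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual stack (all names are invented for the sketch): safety-critical events get a deterministic on-device reflex, routine perception goes to a small local model, and only slow, long-horizon planning pays the round trip to the cloud.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    REFLEX = "edge-reflex"        # deterministic on-device logic, sub-millisecond
    LOCAL_MODEL = "edge-slm"      # small language/vision model on the robot
    CLOUD = "cloud-llm"           # large model, hundreds of milliseconds away


@dataclass
class Event:
    kind: str     # e.g. "proximity_alarm", "classify_object", "replan_route"
    urgent: bool  # does it need a real-time reaction?


def dispatch(event: Event) -> Tier:
    """Route each event to the cheapest tier that can handle it safely.

    Safety-critical events never leave the device; routine perception
    stays on the small local model; only long-horizon planning pays
    the round-trip cost of the cloud.
    """
    if event.urgent:
        return Tier.REFLEX
    if event.kind in {"classify_object", "pick_order"}:
        return Tier.LOCAL_MODEL
    return Tier.CLOUD
```

The point of the sketch is that the routing decision itself is cheap and deterministic; the expensive intelligence sits behind it, at whichever tier the latency budget allows.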
John Koetsier (10:25.330)
And some of those SLMs are getting amazingly good, right? They’re pre-trained, they’re focused on a certain narrow set of activities, and they’re really, really amazing. That’s incredible.
Some others that I’ve talked with—and we’ll get into the conversation of humanoid robots now—have actually taken the human model, where our nervous system helps us understand the world and react to it. It extends out to our extremities, right? We have some processing very close to where you actually get sensory data, then more processing in, let’s say, your spinal column, and then even more in that big bulky thing you’ve got to carry around on top of your neck, right?
And so they’ve adopted similar models to that. It’s pretty interesting. It’s fascinating how the future is going to work on all this stuff.
Let’s talk about humanoids right now. There’s a huge amount of investment here. There’s a lot of innovation here. There’s also a lot of people who are super skeptical and saying, “Hands are impossible, you’re not going to get them. You can do grippers—we’ve got a thousand varieties of those—but it’s really, really hard. You don’t really need legs if you’re in the warehouse; it’s all concrete and flat and level.”
But we have a lot of investment and a lot of innovation going. How do you see this space evolving?
Mat Gilbert (11:38.667)
I think humanoids are a really interesting example. If we believe the current promise, 2026 will be the year of the humanoids.
I think there’s a really interesting commercial race going on now. You’ve got this very high-stakes race toward the first commercially viable humanoid robot, and there are a few key players making different strategic bets on what the biggest bottleneck to scaling physical AI technology is in a humanoid form.
If you look at companies like Boston Dynamics, they’ve been around for multiple decades at this point. I would say they’ve really mastered dynamic motion and balance—they’re all about agility.
John Koetsier (12:22.013)
But they just kind of recently went from systems that were kind of old-school to actual electric actuators, right?
Mat Gilbert (12:31.853)
For sure. I think their new Atlas robot is a marvel of engineering, for want of a better phrase. It’s a really impressive piece of equipment. But the bet that they’ve made fundamentally is that agility and mastering that complex physical agility is the hardest problem and the biggest bottleneck to scaling.
You compare that to somebody like Figure AI. They’re very, very focused on designing a robot that’s suitable for mass production from day one. They have a partnership with OpenAI for the brains, using a human nervous system as a model. They have a partnership with BMW where they’re deploying into factories for a real day-one factory use case.
That strategy is all about scaling and getting to practical application as quickly as you can.
Then you can look at a company like Tesla, again taking a different approach. They’re making more of an AI-first bet. They’re leveraging a huge amount of data from their self-driving cars and their AI stack, and they’re trying to solve essentially general intelligence first. From there, their bet is that you can deploy that into their Optimus robot, which has already been designed for work in their own factories.
So I think this race for the humanoid—the winner is going to be the company that made the right bet on what that bottleneck is, and it’ll be really interesting to see next year.
John Koetsier (13:56.155)
And there are so many bets out there, right? On Tesla’s side, it’s really hard to progress if you fire your team every six months or something like that and restart. You’ve got all that data—great—but still.
You’ve got Agility Robotics; they’ve got Digit, the first robot to get a paying job in a factory. But I think, as you kind of alluded to, it might be fired if it wasn’t a test project. Digit is doing a paid gig right now as well, but it’s so early days.
Apptronik is super interesting in the space because they’re super practical. They’re not looking to boil the ocean; they’re looking at what works, what they can do right now, what they can get paid for right now. And they come from a long history of building industrial robots as well, so as they come to humanoids, they bring that sensibility with them.
It’s super interesting.
Let’s give it a timeline. In your opinion, in your mind, when do you think we’ll have a robot that’s living in our home with us and maybe vacuuming a little bit—I’m not talking about a Roomba, I know that’s a robot too—and baking some cakes and maybe, I don’t know, even doing a little childcare?
Mat Gilbert (15:11.839)
Yeah, I think in terms of environments, when we think about environments, I often think that the home environment is the biggest challenge. It’s almost the last frontier for autonomous robotics. They’re very unstructured environments.
If you look at my home, I have two small kids. That environment is very dynamic. It changes all the time. There’s always something to trip over or something new.
So getting a fully autonomous humanoid-type robot into that setting is a challenge.
You look at Roomba. If you look at where robot vacuum cleaners were a decade ago, their algorithm was: they run around, they bounce off all your baseboards, and eventually, maybe they’d have covered most of the space they were meant to clean.
Compare that to where they are today. They’re doing SLAM navigation, they’re mapping out your specific home, they understand obstacle avoidance, they’re doing object detection, and they’re a lot smarter than they were.
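The mapping half of that leap can be illustrated with the simplest possible data structure, an occupancy grid. This is a toy sketch only; production robot vacuums do probabilistic SLAM, which also estimates the robot’s own pose from the same sensor data, and that is the genuinely hard part.

```python
# Minimal occupancy-grid sketch: the mapping half of SLAM (the "M").
# Hypothetical illustration; it assumes the robot's pose is already known.

def update_grid(grid, hits):
    """Mark cells where the range sensor reported an obstacle."""
    for row, col in hits:
        grid[row][col] = 1  # occupied
    return grid


def is_blocked(grid, cell):
    """True if the planner should route around this cell."""
    row, col = cell
    return grid[row][col] == 1


# A 4x4 map of an empty room, with one obstacle reported at (1, 2):
grid = [[0] * 4 for _ in range(4)]
update_grid(grid, [(1, 2)])
```

Once a robot keeps a map like this instead of bouncing off baseboards, coverage stops being a matter of luck and becomes a planning problem.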
I think you can apply that forward. There’s a famous test, the “coffee test,” which I think is a really interesting one for robots. We’ll know we’ve solved general humanoid robotics when you can take a humanoid robot and it can walk into any American home and make a cup of coffee.
I think that’s a really interesting test, because every human can pretty much figure out in somebody else’s home how to make that cup of coffee. For a robot to do that in a bunch of different environments is a big challenge.
Where we are in terms of home environments—I think that’s further off. That feels like it’s in that five- to ten-year range, at the earliest, for consumer environments. There are applications for specific setups. There’s a kind of home-chef, professional-chef setup you can buy on the market now. Right now, the price point is around a quarter of a million dollars. So it’s not a consumer device right now, but the capability is there. As we know with technology, we’ll see that price come down over time.
John Koetsier (17:03.013)
Yeah, that’s a really good point. We’ve got the bits and pieces, right? We have the little robot that does clean your home—it’s a Roomba or the equivalent. There are many brands out there right now.
We have the chef robot, which is stationary, and you’ve got to put stuff where it can get it, and it can make you some food, and that’s great. And we have other robots that are doing other stuff, but the whole package isn’t quite there.
I like that test that you mentioned. We do see some interesting things from Figure, which is showing robots that are putting stuff away that they haven’t seen before—groceries and other things like that. So they have some sense of what this is; they’re using some intelligence there. But yeah, there’s a way to go.
So what are some other places where we should look, where physical AI is making some huge strides right now?
Mat Gilbert (17:56.445)
I think, as we called out, manufacturing and industrial settings—logistics settings—are the biggest places for commercial deployments. But you’re also looking at agtech; we’re seeing physical AI applications there.
Take Carbon Robotics with their laser weeder. They’re on their second generation now, and it’s only doing good things from both a technology perspective and an environmental perspective, which is excellent.
We’re seeing some interesting applications in the healthcare space as well. I think this one was a bit of a surprise to me. Healthcare, to me, feels a little bit like we shouldn’t have fully autonomous systems; I quite like a human to be in the loop in the healthcare space.
But what we are seeing are applications in elder care. There are robots that are providing company for people in elder-care settings, facilitating video calls with family, providing medication reminders, those types of tasks. So easing the burden on human carers and amplifying that human touch as well.
In hospital settings as well, we’re now seeing physical-AI deployments where we have robots that are navigating the hospital corridors, delivering medication and lab supplies.
John Koetsier (19:06.979)
And in the hospital corridors, delivering medication and lab supplies and generally kind of allowing the health…
Mat Gilbert (19:18.773)
I’ve got a really bad echo in my ear right now. Was there a volume setting that just changed? … Is that gone now? Excellent, that’s gone. Apologies for that—that was just the right delay to make it a challenge.
John Koetsier (19:30.923)
No worries, I don’t hear it at all. That’s really tough if you hear yourself with a little delay; you can’t speak anymore, you can’t even think in some cases. It’s all good. You were on hospitals.
Mat Gilbert (19:50.701)
Yeah, so with healthcare, I think it’s a really exciting and high-impact frontier. We’re looking at amplifying the human touch of caregivers in the healthcare space.
So yeah, the robot is navigating corridors to deliver supplies, lab samples, and medication. There’s a company called Diligent Robotics that has a robot called Moxi. You’re looking at really automating these time-consuming, non-clinical tasks, which means you’re freeing up the healthcare staff to do what they do best, which is direct patient care.
John Koetsier (20:25.507)
And we’ve all heard the horror stories in old-age homes or similar places where there’s one or two or three carers for 50 or 75 or 100 people, and elderly people are just left in appalling conditions. There are a lot of different reasons for that, but it’s difficult to get people to do this stuff.
My mother struggles with dementia at 91, and there are just needs where you can have a very patient system—it can be more patient than humans, which is super, super helpful.
I want to hit adaptability. We kind of mentioned that a little bit because you went into this test of a humanoid robot or general intelligence. We know we’ve solved it if it can go into any home, make some coffee, and there we go, right?
This kind of adaptability—machines doing what they weren’t programmed for—we see that in generative AI. We see that in LLMs to a degree and stuff like that.
How do you architect for that in a safe way?
Mat Gilbert (21:26.027)
I think you’ve hit my key point there, which is safety.
Before we get there, looking at these generalist robot policies—these foundation models for robotics—you’ve got things like GR00T from NVIDIA and work from Physical Intelligence. They’re really trying to build, and they are building, this general-purpose brain for robots, allowing them to do exactly as you described: learn skills and handle situations that they weren’t explicitly programmed for.
One of the key enablers for foundation models is the current maturity of simulation and digital-twin tools. You can now train a robot for millions and millions of cycles in a very physically accurate world before it ever hits the factory floor or a physical environment. By doing that, you’re building these foundation models rapidly and a lot more safely than alternative approaches that have been used historically.
I think when you’re thinking about safety, physical AI has a critical distinction compared to digital AI. Digital AI fundamentally operates in a world of reversible transactions, for want of a better phrase. If ChatGPT hallucinates and gives you a wrong answer, it’s an informational error—something you can correct, ideally.
John Koetsier (22:53.676)
Yes, if you’re paying attention.
Mat Gilbert (22:54.765)
Absolutely. And these systems are grounding themselves and improving all the time. But any kind of digital error lives in the digital world.
For the physical world, actions often aren’t reversible; they’re irreversible. So that same hallucination that gives you an incorrect fact—if that’s in a robotic arm controller, then it’s not just a wrong sentence, it’s potentially a catastrophic physical movement.
John Koetsier (23:19.147)
“Let me unbreak that arm for you.”
Mat Gilbert (23:21.597)
Absolutely, yes. That’s not something anybody ever wants to be saying in a physical working environment, for sure.
So I think you absolutely have to have safety designed in. It’s the foundational design principle of any physical-AI system. We talked a little earlier about hybrid systems, and I think this is where you can really start to build out multiple layers of safety.
When your AI model is suggesting your robot make some action, how many filtering layers does that go through before the actual physical action? The more filtering layers you can have, the safer you can build your system.
And again, these systems need to learn over time. For autonomous vehicles, Waymo is a great example. They trained on millions of hours of recorded driving data in simulation before they hit the road. That, I think, applies right across any physical-AI application.
So you really do need to think about the consequences of something going wrong—and they are exponentially larger in the physical world.
I think extending beyond just AI inference, you have to think about cybersecurity and cyber-attacks as well. Digital cyber-attacks are obviously devastating and can be bad from a data-breach perspective, but it’s not just a data breach if it’s a physical-AI system. It’s a potentially malicious actor taking control of really heavy machinery.
So again, as you start deploying these physical-AI systems into environments where perhaps cybersecurity hasn’t been a concern, you suddenly have this new edge that you need to secure.
John Koetsier (24:56.384)
Yeah, that’s a tough one, right? I mean, we’ve seen the exploits by people who went in via some smart software in, like, an aquarium. That’s the famous example, right? And we’ve all seen the movie I, Robot as well.
I was going to be a devil’s advocate about digital AI making mistakes and say, “Oh, by the way, your bank just erased your balance.” But yeah, there are safety systems around that.
So I 100% agree with that. It is challenging as we lean into LLMs for robotics that LLMs are probabilistic—they’re not deterministic. That’s why they get it wrong sometimes; that’s why they hallucinate sometimes, right?
You want the power that you can get from an LLM, but you do need some deterministic barriers on what smart systems—physical-AI systems—can do, because we don’t want broken arms or broken noses or just collisions or other things like that.
Mat Gilbert (25:56.159)
Absolutely, yeah. And again, thinking about layers of safety: in some recent environments I’ve been in, you have an autonomous system and if it detects any motion within 10 feet—or 30 feet—of it, it just stops. It just shuts down and it won’t do anything until it’s happy that it’s clear.
That’s not the higher-layer AI model reasoning about that; that’s a very low-level physical command. You just can’t move the system if it’s detecting people around it.
So that’s one layer, and then you just keep building those up. As you get a system that’s as safe as you can probably make it, then you can start thinking about how to improve productivity.
A system that shuts down whenever there’s anybody too close to it is great for safety, but not necessarily great for productivity. So how do we design the system so that we’ve still got the same level of safety, but we’ve also got that level of productivity as well?
I think that’s an interesting engineering challenge, and it’s one that I think the industry is working on all the time.
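The low-level interlock Mat describes sits below any AI reasoning, and its essence fits in a few lines. This is a hypothetical sketch with invented names and an arbitrary radius; real systems implement this kind of check in certified safety hardware or firmware, not application code.

```python
SAFETY_RADIUS_M = 3.0  # hypothetical exclusion zone around the robot


def safe_velocity(requested_mps: float, nearest_person_m: float) -> float:
    """Deterministic guardrail wrapped around a probabilistic planner.

    Whatever velocity the AI policy requests, this layer clamps it to
    zero while anyone is inside the safety radius. The model can be
    wrong; this check cannot be overridden by it.
    """
    if nearest_person_m < SAFETY_RADIUS_M:
        return 0.0  # hard stop, no appeal to the model
    return requested_mps
```

The design choice is the one Mat names: the probabilistic layer proposes, the deterministic layer disposes, and each additional filter like this one makes the overall system safer at some cost to productivity.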
This concept of working alongside robotics and automated systems—this human–robot teaming, human–AI teaming—is almost the next frontier of physical AI. How do we design a system that is fun to operate around, safe to operate around, productive to operate around, but also communicates its intent to the human? So you have this bidirectional communication.
John Koetsier (27:24.755)
Yeah, yeah. Interesting.
Mat Gilbert (27:27.437)
So yeah, as you think about humans and robots as teammates: with a good teammate, they’re predictable, they’re trustworthy, and they communicate intent really effectively.
But how do you design a system that builds trust when trust is very easily broken, as you say, by something unpredictable? If you have unpredictable behavior, how do you trust this system? How do you solve the problem of these very messy, unpredictable environments without needing to constantly reprogram?
That’s where these foundation models are really solving those types of use cases.
John Koetsier (28:04.978)
Yeah, that makes sense. Maybe we’ll close here. 2026 is just around the corner. What do you see as the next steps or the next evolution that we might see come to market next year?
Mat Gilbert (28:20.671)
Yeah, I’m excited to see all of the applications. As we mentioned, I think consumer applications are a little bit further away. But consumer-facing applications, I think, are just around the corner.
If you think about physical AI, I think the first wave of deployments was almost out of view for most of us—they were behind the walls of distribution centers and in manufacturing settings.
But the next wave, I think, involves advances in service robotics: deployments in airports and restaurants, robots that will come and serve you drinks or bring your food to you. Those types of systems—I think we’ll see more of them. They’re already there in some places.
I remember being served drinks by a robotic waiter not so long ago and being overly impressed, and I spent far too long of the meal trying to figure out how it didn’t spill the drinks as it was driving. It was very impressive.
We’re going to see those more B2B applications, and then business-to-consumer applications. I think consumer-facing humanoids are a little bit further down the line, but yeah, I’m excited.
John Koetsier (29:25.882)
And we’re going to see this more and more in our environments, of course, right? I just did an interview with Starship and they are the largest home-delivery bot service. They’ll take groceries from a local store, deliver them to your house. It’s a six-wheeled robot that travels on the roads, crosses roads, travels on sidewalks. It’ll go up your sidewalk and near your steps so you can get it and do meals that way.
Uber Eats just brought out Spot, I believe they call their robot, which is essentially the same thing—keep your food warm, deliver it.
And we’re seeing more and more in terms of drones as well, and that’s actually deploying in real life. That’s been going on for multiple years in some places in Europe, and it’s been a couple of years at least in some places in the United States as well.
So we’re going to see those things globally. It’s going to be an interesting world, because for a lot of us—if we’re not in a huge logistics center—physical AI has meant some smart cup that keeps my coffee warm, or maybe an Apple Watch or something along those lines.
We’re going to be seeing this more and more in our built environments, and that’s going to be a phase shift for us.
Mat Gilbert (30:44.013)
Absolutely, yeah. I think not only in those last-mile delivery applications. You’re going to see it in entertainment venues; that’s going to be a big one.
If you think about smart spaces and connected spaces: what are the new experiences that having a smart, adaptive environment that’s aware of you as an individual, and you as part of a crowd, unlocks?
I think that’s another place where, in the very near future, we’re going to see more physically aware systems, and they’ll only progress over time.
John Koetsier (31:16.573)
Super interesting that you brought that up. I was just in a museum in Las Vegas and there was a place in there where you could draw something and it would immediately animate it and show it huge on the wall—just massive. It was quite impressive, actually.
Anyway, super interesting conversation. Thank you so much for your time, Mat.
Mat Gilbert (31:35.757)
Absolutely. Thanks for having me.