Are full self-driving cars just 3 years away?
We’re talking about autonomous driving … specifically … how soon can we expect self-driving cars? And what changes will they bring to our economy and our society?
For this episode of Tech First Draft, my guest is Blair Lacorte, the president of AEye, a Silicon Valley company that “develops advanced vision hardware, software and algorithms that act as the eyes and visual cortex of autonomous vehicles.”
He says: we’ll have self-driving cars within 3 years.
What we cover:
- Many experts say that full self-driving is many years away. Some say we’ll never get there. Talk about how you think we’ll get there in 3 years.
- You also say we’ll need to redefine what self-driving means … maybe redo the Levels of Automation. How so?
- What’s unique about your technology? You have something you call iDAR … what is it, and how does it work?
- Who are you working with right now on this?
- Elon Musk says we don’t need LIDAR. Agree?
- How important is critical mass of instrumented cars and shared data to accelerate the AI that we need for self-driving cars?
- Robotaxis … will we see fleets of personal cars roaming the roads as taxis? Other models?
- There are those who say that autonomous cars, and their development, are unsafe. How do you respond?
Listen and subscribe wherever podcasts are published:
- Apple Podcasts
- Google Podcasts
- Spotify
- And multiple other places … see them all on Anchor
Or, you can watch the show on YouTube:
And … a full text transcript
John Koetsier: Are full self-driving cars just three years away?
Welcome to Tech First Draft with John Koetsier. Today we’re talking about autonomous driving. Specifically … how soon can we expect self-driving cars? And what changes will they bring to our economy and our society? My guest is the president of AEye, a Silicon Valley company that develops advanced vision hardware, software, and algorithms that act as the eyes and visual cortex of autonomous vehicles.
He says we’ll have self-driving cars within about three years.
Blair … welcome!
Blair LaCorte: Hey, thanks for having me.
John Koetsier: Awesome. Super pumped to get into all this stuff. Give us the thirty-second intro to you and to your company.
Blair LaCorte: Sure, yeah. And I would also qualify that while we’re near Silicon Valley, we’re actually out in Pleasanton, out in the cow country, actually east of San Francisco.
So my background is pretty diverse in the sense that I’ve been in the hardware business. I was head of worldwide strategy for a company called Sun Microsystems, which no longer exists as itself, it’s part of Oracle, and then worked in software with Autodesk, one of the largest software companies.
And then I saw a great transition over my last 20 years … I worked in some database companies and then I took a couple of internet and communication companies public. And so I think one of the things I wanted to talk about today in autonomous cars is we’re seeing the same transition, hardware moves to software, moves to data, moves to network. And I think that it hasn’t changed, it’s just sped up because there’s more technology out there.
And so part of what we will talk about today is what is really an autonomous car and how are those cars going to develop? And why I think it will probably be faster than people think.
John Koetsier: Well, let’s get into it.
A lot of experts say that full self-driving is a long ways away, many years. Some have said we’ll never get there. I think Woz, one of the cofounders of Apple, said it’s going to be a long time and maybe we’ll never get there.
Talk about how you think we’re going to get there in about three years.
Blair LaCorte: Sure. And so I’ll qualify this for your viewers that this is probably worth what you paid for it and you didn’t pay me anything. So I’ll give you my opinion, and while I think it may be slightly radical, I think when I explain why I come to the conclusions I do, maybe there’ll be some things that make sense.
Again, the first issue is how do you define autonomy?
Most of the people in AI came from a military background at some point. We used to talk very often about open loops and closed loops. Open loop means you’re in the field, you’re in theater, you don’t know where you’re going to go, and you need to be able to do anything, anytime, anywhere. And humans are very good at that because they can trade off both space and time.
So you can decide where you want to go and how fast you want to go.
Closed loops are things that are much more predictable. So I’m running one route, I’m running it back and forth, I’m going between three or four forward operating bases. And so you can start to constrain the variables and you can start to [create] efficiency. So the first thing I would say is that the industry when it started, actually started with a very, very complex problem, which is ‘how do I have a complete open loop system?’
I’m going to redefine that to ‘how much autonomy do you need?’ And I would call that autonomy ‘on demand.’
Given the situation you have, we actually have autonomy today. For instance, with Level 2 automatic braking systems, your car decides whether it sees something, and because you’ve given it the capability, it autonomously decides whether to stop or not. That truly is autonomy, it’s just closed loop autonomy, only doing one type of task.
The second thing I would say, the variable that I think that while people think about it, I don’t think they think about it at the forefront, is that autonomy will not happen unless there’s a way to make money off it. As a concept, it sounds really cool, but if you can’t make money off it, no one’s going to deploy the processes and the technology that need to actually be deployed to make it happen. And so autonomy that’s happening today, so for instance level 2 braking systems, people want those cars, they’ll pay more.
Autonomy is happening.
So when people say ‘I don’t think…’ or Woz (who is a great guy, by the way) might say, ‘hey, look, I don’t think that autonomy the way you’ve defined it, full autonomy, robotaxis that can go anywhere in the world, do anything, and that a consumer can afford to buy, that may be further out.’ And I can agree with that, depending on how you define it.
But I think what would be fun for your viewers is to talk a little bit about how the loops could be closed and what types of autonomy will happen in the next three years.
John Koetsier: I think that’s a great question and it’s good to contextualize it. A friend of mine, Robert Scoble, drove tens of thousands of miles this past summer in his Tesla Model 3. And he said Tesla drove 95% autonomously to all the national parks with his kids.
We also recently had the semi-trailer truck that transported butter across the United States coast to coast, I believe in three days, just a couple of days ago, autonomously driven. And that’s probably a pretty carefully chosen route, right?
So there may be some of what you’re saying in terms of ‘hey, it’s A to B, not A to everywhere.’ But maybe let’s talk about… I want to get into what Elon Musk has said about robotaxis and millions of them, by the end of the year or by the end of next year.
But let’s redefine what self-driving means a little bit. You talked about the levels of automation. Can you go through those really quick? There’s level 1 through 5.
Blair LaCorte: Sure. And the levels were developed by SAE, the Society of Automotive Engineers.
And every time a new technology or a new system gets deployed, someone’s got to put a context around it. And the context that was put around autonomy was there’s level 1. There’s level 2, like automated braking systems. There’s level 3, which means the driver’s got to pay attention but the autonomous car will tell the driver when to take over. There’s level 4 that says the driver can be in the car, but the car can have redundancy and do everything that needs to be done.
There’s level 5, where there’s no steering wheel and the driver doesn’t have to be in the car. And I actually think that was a very, very good way three years ago to actually define maybe how technology can be used with humans. What we found, since people have been out there using these cars, is that the levels themselves are very constraining from a business model standpoint. So, for instance, if you want to go from a level 2 to a level 3, the constraints on how much redundancy you have to have and how you can actually get the driver to pay attention, actually make it very unprofitable for someone to deploy a level 3 system.
So what’s happening is that you’ll see it in the car companies, they’re calling their systems level 2 plus, plus, plus, plus, plus. And then they’re saying at some point we’ll get to an autonomous car. That has actually become dysfunctional, not because the technology definition is wrong, but because the business model capability in that definition has been constrained and therefore people are stepping back from it.
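For reference, here is a minimal sketch of the levels Blair is describing, written as a small Python enum. The one-line comments paraphrase the conversation and the general SAE J3016 definitions; they are not official SAE wording.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Rough paraphrase of the SAE J3016 driving-automation levels discussed above."""
    L0_NO_AUTOMATION = 0  # the human does everything
    L1_DRIVER_ASSIST = 1  # one function assisted, e.g. adaptive cruise
    L2_PARTIAL = 2        # e.g. automated braking; the driver is still responsible
    L3_CONDITIONAL = 3    # the car drives, but tells the driver when to take over
    L4_HIGH = 4           # the car handles everything within a defined domain
    L5_FULL = 5           # no steering wheel, no driver required

def driver_must_supervise(level: SAELevel) -> bool:
    """At Level 2 and below the human is always supervising; at Level 3 and up
    the car carries at least part of that burden within its operating domain."""
    return level <= SAELevel.L2_PARTIAL
```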
John Koetsier: Talk about the business model a second, because I mean, I was looking at potentially ordering a Tesla Model Y recently, and I believe self-driving, the part that Tesla calls self-driving, was either a $7,000 or a $9,000 option on the vehicle.
Blair LaCorte: Yes. And look, you’re hitting the point I think you have to start with here, which is that there are two types of business models.
There’s a B2C model where someone sells a durable product that’s called a car. They sell it to a consumer, and the consumer has to see enough value that they want to pay a little bit more. Now, in the past the way a lot of this technology came in was what we’ll call in the military, compliance-driven.
The governments say the car has to have a safety system, everyone has to do it, and therefore everyone raises the price of their car a certain amount to put the safety system in. Now, I’ll tell you the things I think Tesla has done extremely well, in that they’ve driven innovation in autonomy, and there are also things they haven’t done well, but there’s a reason for it. The thing they’ve done well is that they realized they needed data, and so they collected data from every single car that was out there.
That is brilliant, because ultimately you have to see patterns to figure out where and how to develop the technology.
They didn’t do a focus group. They literally did a 500,000 unit test.
The second thing they did, and to your point, is develop Autopilot. Now, people will argue they shouldn’t have called it Autopilot because it has the connotation that it’s really driving the plane, or driving the car. But even if you didn’t call it Autopilot and called it auto-assist, and you looked at what you and I had when we were growing up, speed control on the highway, it was a huge improvement over what we had before, which was just setting the speed so you didn’t get a speeding ticket.
And so what you talk about is the business model for B2C. If you actually took Autopilot out of Tesla, they would have no profitability, because their attach rate is somewhere between 30% and 40% at $6,000 to $7,000. They were able to help a consumer decide to pay $6,000 or $7,000 more on a third of their cars, and when you average that out, that’s good business.
Ultimately, they also built in the ability to do software downloads and upgrades. So those are the things I really like about Tesla.
The other question that always comes up is about Elon Musk saying, ‘well, in my systems, I don’t need to add LIDAR to get to the next level.’ Again, I think it’s a very pragmatic statement. Today, or at least last year, a LIDAR system would have cost $50,000 to add into his car. Elon Musk was never going to charge $50,000 more to make the car go to the next level; he didn’t have a choice anyway, so why not throw the gauntlet down and say, until the cost comes down, I don’t see the cost benefit.
And I think that’s actually a fair statement for someone who’s trying to sell a consumer vehicle.
Now, if you were going to take car utilization, which for what he’s doing is 4% to 6%, and try to take it to 94% with a B2B car, then $50,000 probably wasn’t that much to get the utilization of an asset from 4% to 94%. So those are the things I think are pragmatic about what Tesla says. I do think that they’re constrained because they did not look at full autonomy first, and therefore they didn’t get the full system capability to figure out how much you needed to apply. They’re working up the scale rather than down the scale.
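As a rough back-of-envelope on that utilization argument, here is a hedged sketch. The 4%–6%, 94%, and $50,000 figures come from the conversation above; the hours and per-hour costs are simple arithmetic for illustration, not AEye or Tesla data.

```python
# Back-of-envelope: what moving from consumer-style to fleet-style utilization
# does to the hours you can amortize an expensive sensor over.
HOURS_PER_DAY = 24

def usable_hours_per_year(utilization: float) -> float:
    return utilization * HOURS_PER_DAY * 365

consumer = usable_hours_per_year(0.05)  # ~4-6% utilization -> roughly 440 h/yr
fleet = usable_hours_per_year(0.94)     # ~94% utilization  -> roughly 8,200 h/yr

lidar_cost = 50_000  # the circa-2018/2019 LIDAR figure quoted above
print(f"Consumer car: {consumer:,.0f} h/yr -> ${lidar_cost / consumer:,.0f} per usable hour (year one)")
print(f"B2B fleet car: {fleet:,.0f} h/yr -> ${lidar_cost / fleet:,.0f} per usable hour (year one)")
```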
John Koetsier: Sure, sure. So we’ll get into Tesla and robotaxis, and what business models self-driving cars can unlock, in a little bit.
But I want to talk a little bit about your technology. You’ve got something you call iDAR … talk about what that is and how it works.
Blair LaCorte: Right, so again, I set up a little bit of a premise earlier, and my premise is that if you look at technology development, ultimately everything ends up in data and networks.
Even if you look at sports broadcasting, they make their money off the media today; it’s a content business, an entertainment business. You start with hardware, and then it moves to software, moves to data, moves to network.
I was an initial investor in the company, then I was on the board, and then I actually joined the company because I believe so much in the concept. The concept AEye had is very, very different. I’ll call it a generation-two concept. Most industries start with a new technology that’s hardware based, and it’s all about power and speed and how quickly you can do something.
Now take a look at the semiconductor business: Fairchild started it, but Intel, building system-based chips, actually took over. Take a look at social networking: Myspace started it, but Facebook actually made it into a network-based system, right? And so when you take a look at what LIDAR is doing, LIDAR’s been in the military for over 20 years. Most of the LIDAR innovations that have happened in the commercial market have been about how to make a sensor smaller and faster and collect more data. It’s passive, right?
And so what we decided to do was say, what if we were going to end up with a network that needed to use better data to help a car see better than a human being? What kind of hardware and software do you need, versus developing hardware and software from the ground up? The human visual cortex functions at 27 Hertz, which is much, much faster than an autonomous car at 10 Hertz.
But it also has the capability to foveate spatially in any area.
The closest analogy we found came from the fact that a big portion of the people here had worked in the military, in ISR, missile defense, and targeting systems. That is the closest analogy to replicating, or biomimicking, human vision. And the way it works is that you can’t miss anything: you have to see everything that enters your scene, but all objects aren’t equal.
The girl in front of you is much more important than the sky behind you. And finally, you have to be able to modulate speed. Sometimes you have to look at things and see them faster, and sometimes you can actually be a little bit lazy and not update as fast. That’s why a human can function on a highway at 80 miles an hour.
94% of the issues humans have are not because the human visual cortex isn’t better than a computer; they’re because humans get distracted, by food, or by texting, or by the weather out there.
So our concept with iDAR was: don’t argue about whether radar or cameras or LIDAR is better; find a way, like humans do, to integrate your senses so that you use the right sense at the right time. If you’re just looking long distance and you don’t need to know what something is, but you want to know it’s coming up so you can focus on it, radar is a phenomenal technology to use, and it’s cheap. If you want to look at contrast, Elon Musk in that sense is right: we’ve got 10 years’ worth of algorithms using cameras that are very, very effective at finding things. But unfortunately, 20% of the time they can’t do what a human does, which is see depth at distance.
So if you add a LIDAR system in, all of a sudden, with a camera and a LIDAR system, one plus one doesn’t equal two, it equals four. That’s how humans do it. They use things like radar, which to us would be hearing: I hear a siren, I look, right? They use trigger senses like that, and then they combine 2D and 3D.
So what we’ve tried to do is jump to the end.
And the end is that, at the end of the day, car companies don’t want cheaper individual sensors, they want a system that takes whatever sensors they have and allows them to perceive in an intelligent fashion. And the last thing I’ll say about this, which will probably startle your viewers, is that every system on cars today, from radar to cameras to LIDAR, is a passive detect system. They only bring data one way, and they bring as much data as they possibly can.
The human equivalent in biomimicry is autism. I have all the information …
John Koetsier: How do you prioritize it?
Blair LaCorte: The way they do it today is that there’s a huge computer in the trunk of every autonomous car, which is why they can’t go over 25 miles an hour, because of the time the processing takes. If you’re not going to intelligently prioritize the data when you grab it, then you have to use power and time to figure out what you have.
75% of the time, an autonomous car under the existing systems is taking in camera data and radar and LIDAR, and then throwing out 90% of it. So again, it was a great first generation. We proved that you can collect more data than a human can. We proved that you don’t get distracted. And just as in almost every other industry, the first step is to get hardware to actually do things.
The second step is now software, and the third step is going to be data and networking. And I think that’s why I believe that you’re going to see bigger jumps than most people think because the hardware is actually very advanced today. Still too expensive. But that’s actually coming down. I can tell you in my industry where I combine LIDAR and camera, my costs have come down 10 times in the last year alone.
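To make the “prioritize at capture” idea concrete, here is a minimal, hypothetical sketch of the kind of loop being described: cheap wide-coverage sensing flags objects, and the system spends its limited per-frame sensing and compute budget on the objects that matter most. None of the class names, weights, or functions below come from AEye’s iDAR; they are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                 # "pedestrian", "vehicle", "sky", ...
    distance_m: float
    closing_speed_mps: float  # positive = moving toward us

def priority(d: Detection) -> float:
    """Illustrative scoring: vulnerable, close, fast-approaching objects first.
    Real perception stacks use far richer models; this only shows the idea of
    ranking objects at capture time instead of processing everything equally."""
    kind_weight = {"pedestrian": 3.0, "cyclist": 3.0, "vehicle": 2.0}.get(d.kind, 0.5)
    urgency = max(d.closing_speed_mps, 0.0) / max(d.distance_m, 1.0)
    return kind_weight * (1.0 + urgency)

def plan_revisits(detections: list[Detection], budget: int) -> list[Detection]:
    """Spend the limited per-frame sensing budget (e.g. extra LIDAR revisit shots)
    on the highest-priority objects, loosely analogous to how the eye foveates."""
    return sorted(detections, key=priority, reverse=True)[:budget]

scene = [
    Detection("sky", 1000, 0.0),
    Detection("vehicle", 60, 5.0),
    Detection("pedestrian", 20, 1.5),
]
for d in plan_revisits(scene, budget=2):
    print(d.kind, round(priority(d), 2))
```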
John Koetsier: Wow.
Blair LaCorte: I think Waymo has actually announced, and they’re starting at a much higher cost than we are, that they’ve seen drops of 10x, and I think we’ll see another drop of 10x. So the argument Elon Musk would make, that it’s not ready for a consumer car, will probably go away in the next couple of years.
But I do think you’ll also see B2B models, whether we call it mobility as a service, where a car runs around a campus or a bus line runs in a closed loop. Those things are doable today. To your point, I can drive all the way across the country as long as I close the loop. So as long as we change how we think about the business model of autonomy, I think you’re going to be surprised how quickly it’s going to move.
John Koetsier: Super interesting.
So what I’m gathering from what you’re saying is that it’s not just about getting the best sensor data you can get, it’s also about prioritizing which data to look at, what to pay attention to, how to know what’s important and what’s not, and dealing with that appropriately.
Super interesting to get that perspective.
So let’s talk about that data piece and that network piece. Obviously if you’ve got mobile data collectors out there, millions of them, that’s a good thing. That’s a wonderful thing. You can get data on environments which are constantly changing, and get data on reactions to things that move in those environments, which is also a good thing. But you also get a sense of what’s working and what’s not working.
Do you think that we’re putting people at risk? Do you think that there are dangers inherent in maybe what some of the automotive manufacturers are doing right now? Maybe Tesla, maybe others, in releasing some of these systems perhaps too early?
Blair LaCorte: I’ll come back to the fact that the technology is advancing very, very quickly, and when you’re on the leading edge of that technology, there will be times when things go wrong. For instance, unfortunately, in the Uber situation, the system wasn’t tied together.
When you read the NTSB report, you find that the individual components were fine; it’s just that the way they configured the system, and the safety protocols that went along with it, weren’t fine. I think it’s very difficult not to have a little bit of risk. I would also say that the Q Score, which, I used to be in the entertainment business, measures belief in something and its likability, is higher for computers than for religion.
So we actually default to computers, to the point where, if you look at the psychological studies that have been done, when a computer makes a mistake we believe it learned from it, and when a human makes a mistake, we’re not sure the human learned from it, which is actually the exact opposite.
So what I would say is that we do believe in computers.
But we also have this mental image that says, you know what? If a computer kills a human, that’s much worse than a human killing a human. And I think that’s our safety valve. So I think the pressure on the industry will be that we should be ten times more careful in giving that control to a computer than to a human.
I have three teenagers, and I can tell you there is no functional safety in a driving test. Okay? When they go in there and pass their driving test, they still have three years’ worth of learning to do after they’re out there. And I think that’s okay for humans, but I don’t think we should give computers that chance. I think there should be a higher bar, which is why the business models will have to be more constrained. So I don’t think there’s a right or wrong to it.
I think that you will see the industry constrict and expand depending on how smart they are about moving through the risk levels.
John Koetsier: That’s a really interesting question to consider, because there’s obviously huge publicity around it, and as you said, we think it’s worse when a computer kills a human versus a human killing themselves or somebody else.
There are many accidents out there. More than a million people die annually in car crashes worldwide, and self-driving can take away a lot of that. But there’s also a risk, and I’ll give you an example. I did a story recently on Apple: they redid some of their AI models in iOS 13, released that, and their spellchecker dropped significantly in how good it was, how well it understood what was going on, how it autocorrected.
And you run the risk of something similar happening in self-driving.
Let’s say that some random company, we won’t name it right now, has amazing self-driving. They update it, and there’s a risk in that update, just like in aviation, where you can have a cascading failure and 100,000 people are at risk, not just one, because it’s one system that millions are using and they all hit one error condition. So obviously there needs to be significant testing and checking of these things as you go forward.
But I do agree with most self-driving experts and advocates that there’s carnage on the roads right now and we’re going to get safer over time. There will be casualties, and that’s a horrible thing, but it will be safer overall in the long term, in my opinion.
Blair LaCorte: No, and I think there are two analogies I’d give you.
One is that AAA just came out with a study on Level 2, which we don’t call autonomy, again, the automatic braking type systems, right? What they found is there was 20% belief in those systems when they were first deployed, and now there’s 84% belief in those systems since they’ve been deployed.
And why? Because humans have seen over time that the systems actually are better, that they don’t get distracted.
So while they may have false positives, they may stop sometimes when they don’t have to, they have been tuned to stop as fast as they can. Part of the problem with the initial tests of self-driving cars was that we used two metrics, kind of like on the internet when we said eyeballs are the metric for a company’s success. They were not bad metrics to start with, but they didn’t evolve quickly enough. One was disconnects. If disconnects is your metric, then you keep pushing the edge, because if you get too many disconnects, you can’t get a license in a certain state.
So you keep pushing so that the safety driver doesn’t have to take over, because every disconnect is very costly to you in your regulatory environment. And I think we went too far with that.
The other was how many miles were driven. Which, again, sounds great, like how many eyeballs went to the website, but the reality is that 95% of the issues had been solved with very simple algorithms; for the 5% of edge cases, you’d have to drive millions and millions and millions of miles. So they were very good metrics to start out with.
But again, they have to evolve and change.
The other analogy I would make is, look, humans always go back to the patterns that they trust. You know, I’m going into space next year; I’ve spent 10 years working both in aviation, running an airline, and on the space project. When you take a look at the way we actually validate aviation, there are always redundant systems, and that was one of the big challenges with the [Boeing 737] Max: in hardware it’s very easy to build redundant systems, but as you add in software intelligence, which makes everything better, it also implies that you have more risk, because it’s thinking about things, not just making a binary one-or-zero decision. So what you’ll see in the next couple of years is that while the autonomous systems will be deployed, they may not always have safety drivers, but they will have multiple levels of redundancy.
So in essence it will cost us a little bit more. So will it go slower? Yes, it will go slower, but we’re going to learn along the way.
So again, the question you asked me at the very beginning is why I think autonomy, in fact full autonomy, will happen in the next three years. It’s because I think humans will decide the business model is worth it. Let’s just close the loop to reduce the risk, and let’s not tell ourselves we’ll have a robot that’s as smart as a human and can go anywhere in any kind of weather.
There may be constraints on what we’re going to do, and I think we should look at it that way. We’ve probably spent $100 billion in the past four or five years developing technology for fully autonomous, go-anywhere cars. That stuff’s not going away. That stuff can now be used in more intelligent ways, in more intelligent business models.
And that’s been a pattern you see time after time after time again.
John Koetsier: Talk a little bit about some of those business models. Let’s say that in three years we do have a significant advance, as I believe we will, talk about some of the business models that that unlocks …
Blair LaCorte: Right. So again, if you start from the ground up, you have to actually look at consumers differently.
Look, we have a very expensive, durable product that people only use 4% to 6% of the time. If you take a look at the auto industry from 30 years ago, they sold you a car every three years, the margin was $12,000, they got a 10% kickback from the channel because they bought their own parts and they spent a lot of money on marketing to make you feel good about your choice.
It was a very, you know it’s capitalism. People were very happy. People love their cars.
The challenge has been that, with global supply chains and the competition we’ve had, the average car company makes $3,000 per car and sells you a car every seven years, and with an EV that will be 10 years.
The way most durable products survive over time is to continue to add services on top of them.
And the first foray into services for autos was entertainment systems, and then, remember, the play with Microsoft where we would be getting charged more. OnStar was one of the few successes in building a service into a car. Now that safety has become an issue, I think the way autonomy will get deployed in the consumer space is through safety: there are 500 NIST safety standards, and we think there are probably 50 to 70 that can’t actually get to the next level without some kind of advanced autonomy.
Those will get deployed over the next three years, so that you will pay more when you want that type of safety in that type of car.
So that’s the first wave of autonomy. What that’s going to do, if you think about it from a business model standpoint, is go on every car. So that’s millions of cars, and that drops the price of the components that go into building an autonomous car significantly.
We believe that 20, 30, 40 times cost reductions can happen through a supply chain that uses the base components in what’s called ADAS, advanced driver-assistance systems. That’s the first level.
The second level is there will be people who already own an asset today. Who are those people? The bus companies, the people who have campus shuttles or who have construction vehicles, right? If you can prove that autonomy will make them more money or make them safer and reduce their insurance costs or their downtime, they will deploy. So that’s a second model.
Which is, you know, closed loop mobility on demand.
And then the third model that I do believe will happen is constrained implementations, where they’re not just on one site, a construction site, or on one route all the time, but are constrained within 10 blocks of a high-density city. Those will roll out as a hybrid. But I think the full self-driving, replace-all-the-drivers, Uber-switches-over model requires two things.
Not only does the technology have to be ready and cheap, but the thing that people don’t think about is that all the companies you want to talk about, the Lyfts, the Ubers, the DiDis, or even on the consumer side, the DoorDashes, none of them own assets today. So the jump to actually using autonomy is not only that the technology works and is cheap enough, but that I would have to go into the asset business.
So there will be some crossover.
If you take a look at the press announcements, say in the last six months, you’ll see who has actually started to embrace autonomy: people like UPS, people like FedEx, who own their own trucks or lease them. So you need technology that works and is cheap, and you need the ability to buy an asset, and then you put it into a business model. I’m telling you, all the components exist today. I’m not saying the technology is perfect, but all the components for the next three years actually exist; the technology doesn’t have to make major jumps. And that’s why I’m probably contrarian. If I redefine autonomy, what I’ll tell you is there will be money made on autonomy within three years, and we’ll call it autonomy, but it won’t be what we call autonomy today, which is full autonomy.
John Koetsier: Yes, understand. So you’re sitting in a car, which is very appropriate for this conversation. Talk to me about why you’re sitting in a car and if there’s something you have to show us.
Blair LaCorte: Well, we’re growing too fast and there’s no conference room, and they kicked me out because I’m lower priority than an engineer. That’s, you know, the story of my life. And today what we were going to show you, from the outside, is how unobtrusive the technology is becoming, even for true mobility. We’d have to take the computer out and maybe walk around the car to show you how the technology is converging. If you’d like to do that, I’m happy to make it happen.
John Koetsier: Go for it.
Blair LaCorte: Okay. So…
John Koetsier: I’m making you the solo layout right now.
Blair LaCorte: So again, if you’re looking at a consumer car and you’re deciding that you want an ADAS system, one of the big things is that you could put the ADAS system in the grille of the car, which we’ve done in this car here.
And you cannot see the system, but it actually is very open to weather and it has some constraints in its visibility. The system that we’re actually implementing in this Jaguar is behind the rear view mirror and behind the windshield. So most people may not think about that.
But for a consumer car, the move from being in the grille to being behind the windshield, which has very different physics challenges around how much signal you lose going through the windshield, that stuff has actually been advancing very, very quickly. So that’s one of the things that’s going to help ADAS systems. The other is that the systems are becoming smaller. Now, if you take a look at the top of the car, we also put a full driving mobility system up there. The thing that’s interesting to me is that as you look at the system itself, what you’ll see is two squares and a circle.
The circle in the middle is actually both the camera and the laser system going down one aperture; that was prototyped by a bunch of guys on my team for fighter jets. The missile targeting systems we have send radar and camera and LIDAR down one aperture, because each of them does something different, and that had never really been done in the commercial world.
So again, when you talk about the R&D, that’s been done over the past couple of years, that is a huge advance.
It’s unfortunate to talk about R&D during times of war, but if you take a look at Desert Storm, you’ve got GPS that got down to sub-five-meter accuracy and was then deployed to consumers. If you look at Iraq, it was both satellite backhaul and RFID. So you’re getting this type of technology advancement moving quickly from military R&D into the consumer space, without having to have a war fund it or drive it.
The other thing that’s interesting is that while you see these systems here today, these systems are less than a year old, and they will have shrunk by two-thirds by the time CES happens in January.
We’ve also moved on from most systems today, where, when you think about the computer, it’s all about detection: all you’re trying to do is get a ping, like a sonar system or a radar system, to figure out whether there’s something out there. The systems are now moving from searching and detecting to acquiring targets and deciding what they are. And instead of using that technology to figure out how to nullify a target, we’re trying to figure out how to save a target.
So we’re not only searching, detecting, we’re acquiring and making decisions all in the sensor.
And we’ll actually be showing this, for the first time I think outside of very, very expensive military applications, with a basketball demo, where we’re going to let people shoot a basketball and, within a millisecond, we’ll be able to tell you whether the basketball is going to go in or not.
What we’re really showing is that once you acquire a target, you can actually predict intent, and therefore you can do what a human does: be careful, someone’s going to step off a curb; be careful, the trunk is open, you’re at a supermarket, someone’s probably loading the trunk. And that will be the big jump that helps people feel much, much more comfortable that their cars can be smarter.
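The basketball demo is, at heart, a trajectory-extrapolation problem: once a sensor has a position and velocity for the ball, simple projectile physics predicts where it will be at hoop height. Below is a hedged sketch of that idea only; the drag-free model, the numbers, and the tolerance are invented for illustration and say nothing about how AEye’s actual demo is implemented.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def will_score(pos, vel, hoop=(4.6, 3.05), rim_radius=0.23):
    """Given the ball's (x, z) position in metres and (vx, vz) velocity in m/s,
    extrapolate a drag-free parabolic flight and check whether the ball crosses
    hoop height within the rim radius while descending."""
    x0, z0 = pos
    vx, vz = vel
    hx, hz = hoop
    # Solve z0 + vz*t - 0.5*G*t^2 = hz for the descending crossing time.
    a, b, c = -0.5 * G, vz, z0 - hz
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # the ball never reaches hoop height
    t = (-b - math.sqrt(disc)) / (2 * a)  # larger root = descending pass
    if t <= 0:
        return False
    x_at_hoop = x0 + vx * t
    return abs(x_at_hoop - hx) <= rim_radius

# Example: a shot released 2 m up, 4.6 m from the hoop, at ~7.5 m/s and 50 degrees.
v, theta = 7.5, math.radians(50)
print(will_score(pos=(0.0, 2.0), vel=(v * math.cos(theta), v * math.sin(theta))))
```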
John Koetsier: Oh, that’s wonderful. Thank you so much for that quick demo. I’m assuming that what you’re showing us there on the car, those components, are going to be miniaturized by another factor of 10 in a year or so, and another factor of 10 after that, and at some point, maybe it’s three years, maybe it’s four, maybe it’s less, they’ll be basically invisible.
Obviously on a Tesla right now they’re basically invisible, not with sensors to the level that you’ve got, obviously, but on other cars as well, some Cadillacs with self-driving features, you basically can’t tell that there’s a package on the top of the car or something like that.
Blair LaCorte: ADAS systems, the safety systems, the Level 2 systems, are today already getting small enough that you can hide them, the cameras and the radar and even the LIDAR systems.
I would end where I started, which is that every tech cycle has shown you that hardware gets cheaper and commoditizes, and that software drives the hardware so that you don’t have to build a new hardware thing to get new features into it. And ultimately, which is where we started the program, within three years you’re going to see most of the focus on data and on the network capability to use that data, on how to actually make the system that runs a car work.
We’re very, very far along on getting the hardware and software to the point where we’re better than a human under certain circumstances. And I think that ultimately my company, and other companies in my space, will be in the data business: which company provides the car the best data, because the car is the client.
John Koetsier: Super interesting.
Well, Blair, I want to thank you so much for taking time out of your busy schedule and giving us a preview of what you’re releasing at CES and some of the technology that you’re working on, as well as some of the insights that you have into the industry, for everybody else who’s been along on the ride.
Thank you for joining us on Tech First Draft, and whatever platform you’re on, please like, subscribe, share, comment. If you’re listening to the podcast later on and you like this, please rate it and review it. Thanks so much, and until next time, this is John Koetsier with Tech First Draft.
Blair LaCorte: Thanks John.