Making drones as smart as birds via Intel’s neuromorphic Loihi chip


Intel’s Loihi chip is a neuromorphic chip that tries to emulate the human brain. As far as we’ve come with conventional computing, we are still way behind tiny organisms like insects and birds at understanding the world and adapting to it.

Now Intel is applying neuromorphic computing to autonomous drones: drones that can fly at high speed in challenging, obstacle-filled environments. To do, essentially, what birds and bats and even tiny-brained insects can already do with ease. How? Partially, thanks to the Loihi chip, which implements probabilistic computing to deal with uncertainty and ambiguity in the natural world.

In this TechFirst with John Koetsier, we chat with Mike Davies, who leads Intel’s neuromorphic computing lab, about how it all works.

Here’s the Forbes story for this episode of TechFirst …

Scroll down to subscribe to the podcast, watch the video, and read the transcript.

(And … if you’re wondering whether you’ve heard about neuromorphic computing from me before … you have! Check out how Intel built a chip with a sense of smell.)

Subscribe to TechFirst: Making drones as smart as birds


Watch: How Intel’s Loihi chip is 100Xing drone capability

(Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.)

Read: Making drones smarter via Intel’s neuromorphic computing

(This transcript has been lightly edited for length and clarity.)

John Koetsier: What changes when we have smart autonomous drones? Welcome to TechFirst with John Koetsier.

Intel is working on smart drones that can fly themselves. And not just anywhere … we’re talking tight spaces, dangerous environments. To do that, Intel is putting its Loihi chip into drones in the lab. And Loihi is a neuromorphic chip; it tries to emulate the human brain. Essentially, it’s implementing probabilistic computing to deal with uncertainty and ambiguity in the natural world.

To dive into what changes when we have smart autonomous drones that fly themselves all over the place, we’re chatting with Mike Davies who leads the Neuromorphic Computing Lab at Intel. Welcome, Mike! 

Mike Davies: Hi, John. Happy to be here. 

John Koetsier: Hey, it’s a real pleasure to have you. Let’s start off with, I guess, the big question: why do we need smarter drones? 

Mike Davies, who leads Intel’s neuromorphic computing lab

Mike Davies: Well, I think, for all kinds of reasons. You know, if you want to take a very broad look at it, compare what we can do today with what’s possible in nature.

Animals like cockatiels flying through the world at speeds up to 20 miles an hour in foreign environments, quickly learning the map, and the landmarks, and the obstacles they face; maneuvering with incredible agility and dexterity; and also doing all kinds of other really intelligent tasks, like learning how to solve problems, find food, communicate with other organisms.

And these are all types of behaviors and capabilities that we are far from reaching with our technology today.

Of course, there are all kinds of things we might want to do beyond just fly like cockatiels, but it’s sort of that vision that is leading us to try to enable really fast adaptation, and maneuverability, and intelligence, in those kinds of form factors.

John Koetsier: Well, I’ll tell you something, my wife is not impressed with my abilities as a drone pilot, that’s for sure. I’ve crashed my drone a few times. So something that was smarter and faster would be amazing, incredible.

So you’re putting your Loihi chip into drones. How’s that going to help? 

Mike Davies: Well, think about all the future use cases people have been talking about for drones. Things like package delivery. Or racing, for one, which is more of a sports and entertainment thing.

But also, say, monitoring the structural integrity of buildings and bridges, so structural health monitoring. Applications like that, where we’re deploying drones into the real world with all the unpredictability that arises from that, and wanting these drones to really be able to respond, and anticipate, and maintain safe operation in those kinds of real-world environments.

John Koetsier: Now, you’re putting the Loihi chip into those drones, but you’re also putting what you call a neuromorphic camera in there. Can you talk a little bit about what you mean by a neuromorphic camera, and what the advantages are there?

Mike Davies: Well, a neuromorphic camera is really where this kind of modern definition of neuromorphic design and hardware came from.

So this was in the eighties … Caltech, Carver Mead, really observing that the way natural organisms take in visual information is very different from how our camera technology works. That conventional camera technology — now going on 150 or more years old, you know, the technology we take for granted — takes repeated snapshots of the world: frames.

And these frames are processed synchronously, so they’re processed at very specific intervals of time. There’s not much, if any, state that is preserved from the previous frames that have been seen. So this is very, very different from how the natural retinas of organisms extract information.

Here you have change events called ‘spikes’ that are generated whenever there’s a change in the field of view. So if an object moves, you immediately at that instant in time get some notification of that movement. You don’t wait until the next clock cycle, you know, the next frame to come 30 milliseconds later.

And so this leads to very low latency.

So, very fast response, because if a drone is about to run into the branch of a tree, well, it’s going to see that immediately as it comes into view, as opposed to the next 33 millisecond interval.

But it also leads to really low power operation, because if there isn’t much changing, well then the sensor is not producing much activity.

It’s only producing activity in proportion to the actual change that’s happening in the real world. So those are the benefits that come from this new type of vision sensor that’s just coming out of the lab, and there are a couple of companies now trying to commercialize this technology. 
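
(To make Davies’ contrast with frame-based cameras concrete, here’s a minimal Python sketch of what consuming an event stream could look like. The event format and the `watch_for_motion` helper are hypothetical simplifications for illustration, not any vendor’s actual API.)

```python
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class Event:
    """One change event ('spike') from an event camera.

    Hypothetical simplified format: DVS-style sensors emit similar
    (x, y, timestamp, polarity) tuples, one per pixel-level change.
    """
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 = brightness increased, -1 = decreased

def watch_for_motion(events: Iterable[Event], region: Tuple[int, int, int, int]) -> None:
    """React to each event the instant it arrives, instead of waiting
    ~33 ms for the next full frame to be captured and processed."""
    x0, y0, x1, y1 = region
    for ev in events:  # events arrive asynchronously, already time-ordered
        if x0 <= ev.x < x1 and y0 <= ev.y < y1:
            # Latency is bounded by sensor and transport delay (microseconds),
            # not by a fixed frame interval.
            print(f"motion at ({ev.x}, {ev.y}) t={ev.t_us}us pol={ev.polarity:+d}")

# Example: two synthetic events; only the first falls inside the watched region.
stream = [Event(10, 12, t_us=105, polarity=+1), Event(40, 40, t_us=230, polarity=-1)]
watch_for_motion(stream, region=(0, 0, 32, 32))
```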

John Koetsier: That’s really interesting. Because, I mean, you really see how it’s neuromorphic. If we think about how our vision centers work, we’re really attuned to motion. We can look out and almost not even see something that isn’t moving, but as soon as there’s a flicker of motion, we sense it and we turn.

What I wonder is, have you measured what kind of improvement you’re getting there? How much faster is it? How much quicker are you at sensing what’s going on? And how much energy are you saving?

Mike Davies: Sure. So, others have shown that these new types of ‘event cameras,’ as they’re called, can provide up to three orders of magnitude improvement in latency, down at the microsecond scale of sensing change. So that’s just in latency.

You can also get a benefit of about 10-100 times in the power consumption of these sensors. So these are big numbers.

Now the challenge, though, with using these kinds of new sensor technologies is on the compute side. You have this fantastic sensor that can receive information at very low latency, asynchronously, at low power … but what has typically been done so far is that those events, or spikes, are immediately transferred into the conventional computing domain.

And now they’re back in a world of sequential instructions, matrix-vector multiplies that are very non-sparse, you know, very dense vectorized operations. And you lose so many of the advantages of the sparse, event-based, asynchronous nature of the data that’s coming in.

So, the promise is very clear. What’s necessary in our view — what is pretty clear, I would say — is that we need a neuromorphic computing side to preserve that event-based nature and all the benefits that are coming from the sensor, delivering them to the end application. 
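
(As a rough illustration of that sparsity point, here’s a toy Python sketch, mine rather than Intel’s code, comparing a frame-binned dense matrix-vector multiply, whose cost is fixed by image size, with an event-driven update whose cost scales with the number of events.)

```python
import numpy as np

# Toy comparison (not Loihi code): dense frame-based processing vs.
# event-driven processing where work scales with actual scene activity.
H = W = 64
rng = np.random.default_rng(0)
weights = rng.standard_normal((10, H * W))  # one linear layer, 10 outputs

# A sparse burst of 50 events: (x, y, polarity) for 50 active pixels.
events = [(int(rng.integers(W)), int(rng.integers(H)), int(rng.choice([-1, 1])))
          for _ in range(50)]

# Frame-based: bin events into a dense image, then multiply the whole thing.
# Cost is O(10 * H * W) no matter how little of the scene changed.
frame = np.zeros(H * W)
for x, y, pol in events:
    frame[y * W + x] += pol
dense_out = weights @ frame

# Event-driven: update the outputs one event at a time.
# Cost is O(10 * n_events), proportional to the actual change.
event_out = np.zeros(10)
for x, y, pol in events:
    event_out += pol * weights[:, y * W + x]

assert np.allclose(dense_out, event_out)  # same answer, far less work when sparse
```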

John Koetsier: And this is a case where you absolutely need edge AI, right? You absolutely need the AI or the intelligence to live very locally. You can’t call up to the cloud and see, hey, what should I do? I see a branch in front of me. 

Mike Davies: Yeah, no, absolutely. This needs response times in milliseconds, and there’s not much time to communicate, or even to do much on chip beyond just the immediate local computation.

John Koetsier: Let’s talk a little bit about where these drones will function. We’re talking about creating drones that are much more sophisticated than anything that’s out there right now. There are some drones that can be relatively autonomous; even without GPS they can sense where they are, they can build sort of a — I’ll call it a mental map of the world that’s around them. They can fly from A to B, they can do a few other things like that.

They’re not common; that is cutting-edge stuff. But you’re building something that is potentially orders of magnitude beyond even that. Where do you see that functioning? Where will we use them? How will they help? 

Mike Davies: Well, the initial target for this is what we’re pursuing in collaboration with Davide Scaramuzza at ETH Zurich, which is, initially, drone racing, right? It’s kind of the use case you described: recognizing gates, which are very rigorously pre-programmed, and understanding what sequence the drone needs to fly between these gates, to try to one day outperform humans.

So that’s the goal for that research.

Of course, the final vision here is not to beat humans at drone racing, right? It’s really to prove out the basic capabilities and, you know, solve the challenges of the technology, using that kind of high-end, race-car use case as the showcase for it.

Ultimately, then, we’d want to put this into the real world for those applications I was describing earlier: things like delivering packages, right? And monitoring buildings, and all these other cases where you want to put drones out autonomously, without a constant communication link back to the cloud, and have them do their thing. And to offload laborious, manually intensive tasks from humans, tasks where it may be very difficult for humans to get up into position to monitor, you know, the outside of large skyscrapers and these sorts of things that might be very expensive to go and monitor. Agricultural monitoring is another example: finding ripe fruit and discovering infestations in fields and these sorts of things.

Which, of course, humans can do, but it takes a lot of time, it’s very laborious, and you can get a better-quality result by having drones out doing that.

John Koetsier: Yeah. I’m interested, just totally off the cuff, you know: does it matter what kind of drone? I mean, we see a lot of, obviously, sort of quadcopter drones with rotors, right? But we’re seeing the emergence of some drones that use their battery more efficiently because they have sort of stubby wings, and so they have some lift, right? Those sorts of things. Does it matter to you what kind of drone you’re putting this technology in? 

Mike Davies: Well, from our perspective, we’re focused mostly on the basic technology — the processing. So that’s not my area of expertise, specifically. Of course, the smaller the form factor, the greater the burden on the battery and the compute, right? You have a smaller budget. So that’s going to be sort of the best match for the advantages that we’re offering with these new neuromorphic techniques.

But certainly drones of all form factors, you know, have their different niches in what they’re able to do. If you’re delivering packages, a large drone is necessary. It will have a larger budget, but it also has to travel further and has to have all that energy available to lift. So I think that across the spectrum, we’ll be finding good applications. 

John Koetsier: Yeah. Yeah, that makes sense. What challenges do you foresee? What challenges are you working through right now? You’ve talked about energy, of course. You’ve talked about speed and needing to be locally resident, not in the cloud … those sorts of things you seem to be overcoming right now. What other challenges do you see in bringing this technology to a point where it can come to market at broad scale? 

Mike Davies: Yeah, well, it’s really important to recognize this is very foundational basic research, in the sense that we’re rethinking computing from the transistors up.

And while we have good examples, like the drone demo that we unveiled just last week, we’re lacking a programming model. We’re lacking a whole portfolio of techniques and best-known methods that can just be deployed and composed to produce large systems that solve large-scale, complete problems. So we’re making good progress, and we have an increasing set of examples that are pretty compelling, showing great energy gains, latency gains, adaptability, these sorts of things.

But there’s a long way to go in terms of defining new abstractions, new methods for programming these systems, and then building, of course, a whole user community that actually knows how to go and deploy them, build systems, launch startups, and put them into the marketplace. It will take some time. 

John Koetsier: That is a really interesting challenge, right: an entirely new architecture. How do you program it? How do you address it? How do you talk to it, tell it what to do, tell it what information you want? I’m reminded of when we talked previously about neuromorphic computing. I was talking to a biological computing company, I believe in Australia, and they made biological chips. And they had a similar problem — probably a bigger problem — how to communicate with it, how to get it to do stuff, and how to recognize what it was computing as well.

A serious challenge. And of course, what you want is for somebody who’s an ordinary programmer in Python or C or whatever to be able to come along, use whatever knowledge they have, and be able to use that Loihi chip, correct? 

Mike Davies: Yeah, absolutely. I mean, it’s great that we have, you know, 100 groups, 120 groups in our research community now using Loihi.

But really, this is a tiny, tiny number when you compare it to the tens of millions of people in the world who know how to program conventional architectures. So there’s a long way to go until we can really scale the technology and put it out into the real world.

And yeah, we want this to be accessible to the general developer community. 

John Koetsier: We need some low-code infrastructure for neuromorphic computing. That would be very, very interesting. I wanted to bring up that this podcast is about tech that’s changing the world, innovators who are shaping the future. That’s kind of the focus that I have for TechFirst. And I was originally going to ask you about smart autonomous drones and how they’re going to change the world — and that’s a part of it, and I do want to ask you that — but it’s a bigger thing, right? Because you’re working on neuromorphic computing, which is a new paradigm, a new way of doing computing, and offers new opportunities and some new challenges.

So maybe bring that into it as well, and I’ll ask you: how is what you’re doing changing the world? And then maybe bring it personal as well: why are you doing it, and why do you care about it?

Mike Davies: Well, neuromorphic computing, as I mentioned earlier, is rethinking computing.

And it’s really following this biological model, recognizing that despite the decades of progress we’ve had, we’re still far from the capabilities of natural organisms in how they compute, and move in the world, and consume data that’s arising in real time. If we can achieve that kind of capability — what we see our own brains and the brains of natural organisms doing effortlessly, really with microwatts of power — we could solve things that we just don’t even know how to solve today.

I think that what we can do, as far as intelligence deployed into edge devices and scaled up into the cloud to brain scale and beyond, truly will be game-changing for the world, enabling all kinds of capabilities that today we either cannot deploy because of power and performance constraints, or that we simply don’t yet have the answer for how to code these things conventionally.

So I think there’ll be a large change in the world as a result of this. It’s a long, multi-year effort. I’m personally fired up because of just the challenge that this represents. And this has been a guiding motivator for computing since the very earliest days. 

John Koetsier: Yes.

Mike Davies: Alan Turing, John von Neumann. I mean, they had the brain in mind as their one example of an intelligent, complex computing device when they were defining the fundamentals of computing as we know it today. So it’s kind of carrying on a long chain of very challenging, elusive innovation. That is exciting, because we’re making progress, and we can see the impact of even the modest incremental gains we make in changing the world in the near term. But in the long term, it will be truly, you know, amazing what comes.

John Koetsier: What that brings to mind is, I suppose you could argue — and you would understand this much better than I would — that as far as we’ve come with conventional computing, we’re kind of still at Babbage’s Difference Engine, right? I mean, we’re counting ones and zeros, they’re sequential, all those things, right?

And you’re building a different architecture. Very interesting. Mike, I want to thank you for taking the time. I want to thank you for coming here and sharing your insight. 

Mike Davies: My pleasure. Yeah, thanks, John. 

John Koetsier: Excellent. For everybody else, thank you for joining us on TechFirst. My name is John Koetsier. I really appreciate you being along for the show. You’ll be able to get a full transcript of this in about a week at JohnKoetsier.com. The story at Forbes will come out shortly after that, and the full video is always available on my YouTube channel. Thanks for joining.

Until next time … this is John Koetsier with TechFirst.

Subscribe for free

Made it all the way down here? Wow. You’re dedicated 🙂

The TechFirst with John Koetsier podcast is about tech that is changing the world, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe on your podcast platform of choice: