AI and neuromorphic computing: How Intel built a chip with a sense of smell


Does artificial intelligence have a sense of smell? With Intel’s neuromorphic computing … maybe.

Lots of AI is built to solve relatively simple problems, like sorting Legos by color or shape, or identifying patterns in huge datasets. Intel has been building AI chips with a neuromorphic architecture. The goal: solving complex real-world problems the way humans naturally do, with very limited training data.

In this episode of The AI Show with John Koetsier, we chat with Mike Davies, the director of Intel’s Neuromorphic Computing Lab, about what he built into Intel’s Loihi chip to identify smells.

Listen: AI and neuromorphic computing

Subscribe on your favorite podcasting platform:


(Sorry, you’ll have to turn up the volume to hear Mike.)

 

Full transcript: AI and neuromorphic computing

John Koetsier: Does artificial intelligence have a sense of smell? Welcome to The AI Show with John Koetsier.

Lots of the AI that we see today is built to solve simple problems, like sorting Legos into different colors or shapes. Intel is building AI chips with a neuromorphic architecture to solve more complex, real-world problems. Real-world problems that can be solved with less training than most AI systems require today. To talk to us about that we’ve got Mike Davies, the Director of the Neuromorphic Computing Lab at Intel Corporation.

Thanks so much for joining us! Talk to us a little bit about what you built. You built something very, very interesting to give AI a sense of smell.

Mike Davies

Mike Davies: Yeah, sure. That sense of smell, this capability to learn and recognize odors, is really one of the more recent applications that we’ve developed on our Loihi chip. The chip itself is a general neural computer. You could think of it as being inspired by the way the brain works at the lowest level that we have any real understanding of: the neurons, the connections, and the temporal processing that our brains are performing continuously, in real time.

John Koetsier: Yes.

Mike Davies: So the chip is Loihi, and this is some research we’ve done in collaboration  with neuroscientists over several years now, and it’s just recently published.

John Koetsier: So talk to me about why you tackled this particular problem.

Mike Davies: Well, it’s a really interesting one from a couple of different perspectives. In general, there’s so much we don’t understand about the brain, right? It’s still a mystery after decades of neuroscience and many, many intelligent people thinking deeply about it and studying it. But one area that we do understand relatively well is olfaction: the circuits in our brains related to smelling. In general, the further we go from the core cognitive parts of the brain, the cortex, out toward the periphery of the nervous system and the sensory processing, the better our understanding of what’s going on in this mysterious black box of the brain. So it’s just an area that was well understood, in the sense that there were low-level neuroscience models we could abstract to the point of being able to map them into the feature set that our Loihi chip supports.

And from a machine learning perspective, it’s also a difficult problem that deep learning doesn’t necessarily solve that well, especially when you consider the efficiency of learning: how many samples are required to achieve a good level of classification.

John Koetsier: And that’s typically an issue, right? I mean, needing big data, tons and tons of data, to be able to make determinations on future instances that it sees. But you found a way to make that much quicker, is that correct?

Mike Davies: Yeah, yeah, exactly. With the network we have, we can effectively learn from single examples. With just one clean presentation of an odor, a roughly 70-dimensional vectorized representation of it, we can store that in a high-dimensional representation in the chip, which then allows it to recognize a variety of noisy, corrupted, occluded odors like you would face in the real world, where you’re bombarded with all kinds of different smells. It will be able to detect that particular learned pattern even at very faint, corrupted levels.
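To make that concrete, here is a rough Python sketch of one-shot storage and noisy recall. It is not Intel’s algorithm or code: the cosine-similarity matching, the MATCH_THRESHOLD value, and the learn/recognize helpers are illustrative assumptions only, while the 70-dimensional input comes from Davies’ description.

```python
import numpy as np

# Hypothetical sketch of one-shot odor learning and noisy recall.
# NOT the Loihi algorithm; it only illustrates the input/output behavior:
# store a single clean sample, then match corrupted versions of it.

N_SENSORS = 70          # dimensionality of the odor vector (from the interview)
MATCH_THRESHOLD = 0.6   # assumed similarity threshold for a "recognition"

stored_odors = {}       # label -> stored prototype vector

def learn(label, clean_sample):
    """One-shot learning: store a single clean presentation of an odor."""
    stored_odors[label] = clean_sample / np.linalg.norm(clean_sample)

def recognize(noisy_sample):
    """Return the best-matching stored odor, or None if nothing is close enough."""
    v = noisy_sample / np.linalg.norm(noisy_sample)
    best_label, best_score = None, 0.0
    for label, proto in stored_odors.items():
        score = float(np.dot(v, proto))   # cosine similarity
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= MATCH_THRESHOLD else None

# Learn from one clean sample, then recognize a noisy, partially occluded one.
rng = np.random.default_rng(0)
apple = rng.random(N_SENSORS)
learn("apple", apple)

noisy = apple + 0.3 * rng.random(N_SENSORS)   # background odors / noise
noisy[:20] = 0.0                              # part of the signal occluded
print(recognize(noisy))                       # -> "apple" (usually)
```

The actual chip does this with spiking neurons and on-chip plasticity rather than explicit vector math, but the behavior Davies describes, learning once and then tolerating noise and occlusion, is what the sketch tries to capture.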

John Koetsier: That is really, really interesting. Is that somewhat analogous to how a human might learn based on seeing one image of let’s say a tiger and being able to recognize a tiger again, or smelling an apple and recognizing that again?

Mike Davies: Exactly, exactly. That’s really one of the main things we’re trying to understand and map into silicon in our program: the brain’s ability to learn from single examples, or very few examples. That’s kind of the opposite of the deep learning tools we have today, built on backpropagation, which can solve some incredible problems but require lots and lots of data and learn in very slow, incremental steps. And that’s just different from how we intuitively think of learning. If you look at a toddler or an infant and present a single example, say that tiger they’ve seen for the first time, they’ll learn it forever based on a few examples.

John Koetsier: Yes.

Mike Davies: And what’s even more amazing is that they’ll learn a cartoon of a tiger, they’ll recognize something that looks nothing like the original input you may have presented through this automatic process of abstraction that our brain is performing. 

John Koetsier: And is that analogous to what you just said, where you talked about occluded samples, or samples where it’s not clear what the smell is; it’s there, but there’s something else with it. Is that equivalent?

Mike Davies: Exactly right. What our chip and this algorithm are doing is searching the data that’s presented to them for snippets, little subsets of the pattern they recognize, and suppressing the noise automatically through this very efficient, parallel search process. So it’s understanding the really salient, important parts of that odor representation, and intuitively you can think of it as ignoring the part that is just clutter and noise.

John Koetsier: Talk to me about error rates, because one of the challenges that we have as humans is we see something, we see a specific and we make a generalization based on that specific. So we see one example and boom, hey, we know what reality is. Are you seeing similar things with your system?

Mike Davies: So for this particular example that we’ve demonstrated and published, that’s not necessarily a part of it. This is really an ingredient on the path toward that larger goal. One of the really hard problems of AI is to understand this kind of generalization and abstraction process, this hierarchical understanding of the constituents that together form the pattern we recognize as the whole. So that’s ongoing work, and we expect to make progress and build on this example of olfaction. But you could think of what we have now, this odor perception, as the first layer. It’s recognizing just the raw data. Now, what does that mean? How does it connect to other concepts? Is this a good odor or a bad odor? If it’s an apple, what type of apple is it? That next level of subtlety and generalization is something that will be future work.

John Koetsier: Super interesting, right? I mean, you can think of in a human sort of context, okay, I recognize the smell of ‘apple’ now I attach it to the concept of this green thing, but also maybe this red thing, other things like that, maybe I attach it to the concept of being hungry or something to eat, or other things like that. Super interesting what you’re building there. Can you talk a little bit about what AI technologies or techniques you used in doing this work?

Mike Davies: Well, from a conventional computing perspective, it’s super exotic, because it doesn’t look like a conventional processor, and it doesn’t even look like a conventional deep learning accelerator. In the conventional AI world, everyone speaks of multiply-accumulate operations, MACs: how many MACs per second or per watt can you compute? In this technology that we’re using, we don’t have a single MAC. We’re not even using the same basic computational primitive to solve these problems. So we’re taking more of a first-principles perspective, rethinking computing based on what we find in the brain, ignoring or forgetting everything we know about conventional ways of designing computing chips, and instead trying to reverse-engineer and understand the principles of what the brain is doing and map that into silicon. So that’s the basic technology.
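To illustrate the contrast Davies is pointing at, here is a small Python sketch of the two primitives: the multiply-accumulate neuron that deep learning accelerators are built around, and a simple event-driven spiking neuron. Loihi’s neurons are broadly of the integrate-and-fire family, but the lif_step function, its leak and threshold values, and the toy spike trains below are illustrative assumptions, not Loihi’s actual parameters or circuits.

```python
# Conventional MAC-based neuron vs. a simple event-driven
# leaky integrate-and-fire (LIF) neuron. Parameters are made up
# for illustration; they are not Loihi's.

def dense_neuron(inputs, weights, bias):
    """Conventional artificial neuron: one multiply-accumulate per input."""
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w           # the MAC primitive deep learning chips optimize
    return max(0.0, total)       # ReLU activation

def lif_step(potential, spikes_in, weights, leak=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    Work happens only when input spikes (events) arrive, and the output
    is itself a spike event rather than a number."""
    potential *= leak                        # membrane potential decays over time
    for i in spikes_in:                      # only inputs that spiked contribute
        potential += weights[i]
    if potential >= threshold:
        return 0.0, True                     # fire a spike and reset
    return potential, False

# Example: three timesteps of sparse spike input to one LIF neuron.
weights = [0.4, 0.7, 0.2]
v = 0.0
for spikes in ([0], [1], [1, 2]):            # indices of inputs that spiked each step
    v, fired = lif_step(v, spikes, weights)
    print(v, fired)
```

The practical consequence is that computation happens only when spikes arrive, which is part of why event-driven architectures can be very power-efficient on sparse, real-time data.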

John Koetsier: That is super interesting. That almost hearkens back to how researchers tried to do AI a couple of decades ago: figure out how the brain works and map that onto silicon.

Mike Davies: Yeah. This approach dates back quite a long time. The original perceptron, which Rosenblatt built, is really the inspiration for all of the deep neural networks and the artificial neural network model that’s come to fruition in the past decade. That dates back to the 1950s, I believe, when he had an entire machine room, a whole system the size of a cabinet, implementing one neuron. So that idea has been around a long time. More recently, the approach we’re taking dates back to the eighties, with Carver Mead and work that was done at Caltech, taking a more modern look at what’s understood from neuroscience. But still, that’s now three decades ago, and we’re still working on getting this to a point of commercialization where it can provide real-world value. These ideas have been around a long time. The brain, in fact, was an inspiration for John von Neumann and Alan Turing way back at the genesis of the computer architectures we’re using today. So, yeah, this has always been a vision of computing, certainly.

John Koetsier: Interesting. So talk a little bit about some of the challenges or problems you had as you were building this. Did you run into any dead ends? What problems did you have as you were building it out?

Mike Davies: Most of the problems we had were, I would say, the more mundane engineering types of challenges. We had the benefit of bringing a really unique design technology into this program when we started it: an asynchronous design methodology, which is different from how chips are conventionally designed. But the brain is an asynchronous machine. It doesn’t have the global clock that’s present in all the conventional chips that are produced. It happens that my group came from a startup-company background, building Ethernet switches with this asynchronous technology, so we had that available. But because it’s so different, there are a lot of challenges in building chips with a methodology that doesn’t conform to the standard method. It gets very technical in terms of the particular problems, but it’s everything from design entry, to verifying that the functionality is correct and as expected, to conforming to all the standard quality checks that Intel requires for chips that we fab. Those were mainly the problems we had.

John Koetsier: Interesting. So you’ve released the research, you’re working on commercializing it. What’s the current status, when will this be out there, and what else will it work for?

Mike Davies: Well, the chip is, in a sense, out there. It’s a research chip, and we’ve built a number of systems around it now, along with a whole software stack, and we’ve made this available to a whole network of research collaborators in academia and government, and even a number of corporate groups now. So we have about a hundred groups around the world using the chip and helping to advance it to a commercialization point. And the challenge there is really connecting the basic hardware capabilities to real-world problems: developing the software, the methodologies, the algorithms that solve them.

That’s really the challenge we’re at, and almost all of our focus now is on that problem, working with all of these collaborators. I think there’s good progress. There are things like this olfactory example, where we can already see some real-world uses, and a number of other promising avenues like that. It’s hard to put concrete predictions on it, but I think we’re dealing with a matter of a few years, five years maybe at most, before we’ll see this in the real world.

John Koetsier: Sure. What kind of sensitivity are we talking about here? I mean, are we talking dog-level sensitivity? Are we talking beyond that? 

Mike Davies: It’s hard to equate it to human capability, because we’ve evaluated this on kind of an abstract dataset that isn’t calibrated in that way. But one general property, and we see this in many examples in the neuromorphic domain, is that as you give the network, the problem, more resources, more neurons in this case, you can achieve greater levels of sensitivity. So it’s really a spectrum of how many resources, how many chips, how much memory you throw at the problem to achieve superhuman or less-than-human sensitivity, depending on what the application requires.

John Koetsier: And you can imagine a lot of applications there, right? I mean, from food-safety testing: is something rotten or is it still fresh, to security applications: sensors to sniff out perhaps explosives or other things like that? What are some of the applications that you’ve thought of?

Mike Davies: Yeah, well, you’ve named two exactly right there.  Measuring or detecting hazardous chemicals in manufacturing settings, potentially even disease diagnosis. So there’s some really intriguing research about being able to smell kind of the chemical byproducts of various diseases… 

John Koetsier: Wow.

Mike Davies: Including cancer, just from someone’s breath, basically. So yeah, at Intel we’re unlikely to take a product like that to market, but we would certainly partner with others and enable those kinds of applications. There’s the challenge of the sensor side as well: in many of the applications we look at, you know, at the edge, the data needs to be presented in a manner that’s suitable for this kind of neural processing, and that’s sometimes just as hard a problem as the computation side.

John Koetsier: Sure. It’s really interesting to me, because we have a lot of sensors for visual things, right? If you look at the smart home or smart cities, that sort of thing, you have a lot of sensors for visual data. You’ve got a lot of sensors for auditory data; I’m speaking into one right now, so are you, and everybody who’s listening has one right now. You have sensors for other things like heat and particulate matter; I’ve got a sensor in my house that senses how much particulate matter is floating around in the air. But smell is one that we really haven’t exploited so far, and it adds to the entire picture. You can really see how that could be a part of a whole smart city, smart office, smart factory, smart home type of scenario. Correct?

Mike Davies: Absolutely. Yeah, it’s an underappreciated sense. In fact, evolutionarily speaking, it’s the oldest sense. The very earliest bio-organisms basically smelled: they detected chemicals, and that’s what has survived all these billions of years. So maybe its time has come, and once we have the sensing and the compute technology, it’ll be recognized as a first-class citizen along with vision and the other senses.

John Koetsier: Wonderful. Now put your futurist hat on for just a second and walk us out three to five years, something like that. Where do you see this technology? What do you see as the capabilities? And where do you see the implementations?

Mike Davies: There are many possible applications across different segments of computing. A way to think about it is that this is really a general-purpose computing architecture, just a very different one from our conventional style. So you can find applications at the edge, all the way down to the lowest-level sensory processing like we’ve been talking about: odor detection at very low power levels, battery-operated devices that could be distributed for more intelligent processing at the edge. Robotics is one area where we see a lot of promise. Now, robotics is another one where the compute is not the only problem; there are, of course, all the manufacturing and mechanical aspects, which have to be cheaper and more robust. But neuromorphic control is very, very well suited to that problem. That is basically why brains evolved: to control bodies and limbs and articulate things. So that’s an area where you’re processing real-world input from whatever sensor modality you have, and you need to make real-time decisions about how to move or respond. That’s what brains are good at, and that’s what this architecture is really good at. We could see this going into, for example, a manufacturing setting; that’s where robotics has taken off so far. We could see neuromorphic chips processing data more adaptably and faster, in real time, which would allow more productive assembly lines and defect detection, screening and monitoring all parts in real time as opposed to just sampled parts. That would boost the quality and productivity of our factories.

John Koetsier: Very, very interesting. Well, I just want to thank you for being on with us. This has been Mike Davies from Intel, and we really appreciate you being with us.

Mike Davies: Sure, my pleasure.

John Koetsier: Thank you so much for joining us on The AI Show. Whatever platform you’re on, please like, subscribe, share, comment, rate it, review it; that’d be a great help. Until next time, this is John Koetsier with The AI Show.