Can you make smarter AI systems by combining biological neurons with silicon chips?
Almost all AI uses silicon chips. After all, that’s the “artificial” in artificial intelligence. Well, what about using actual brain cells … using what is already known to be capable of intelligence to improve artificial intelligence? Most interesting: when the two are put together, the system starts to wire itself up and learn “on its own.”
In this episode of The AI Show with John Koetsier, we chat with Hon Weng Chong, CEO and co-founder of Cortical Labs, and Andy Kitchen, the company’s CTO. The big question: is biological computing the future of AI?
What we talk about:
- live neurons from mouse embryos
- neural networks with actual neurons
- neuroscience
- Intel (“our sworn enemies”), the Loihi chip, and neuromorphic computing
- connecting biological neurons and CMOS silicon chips
- teaching biological AI to learn ping pong
- scaling to millions of neurons on a silicon chip
- mice driving cars
- “dish brain” eSports
- Hebbian ‘fire together, wire together’ processes
- programming in Python for biological AI
- machine language for bare metal versus biological language for biological chips
- engineering life support systems for biological neurons on chips
- von Neumann architecture versus analog biological systems
- And much more!
Listen: Biological AI
Subscribe on your favorite podcasting platform:
Watch: Biological AI
And … subscribe to my YouTube channel to get notifications when I go live in the future.
Full transcript: Biological AI
John Koetsier: Can you make smarter AI systems by combining biological neurons with silicon chips?
Welcome to The AI Show with John Koetsier. Almost all AI uses silicon chips, right? That’s the “artificial” in artificial intelligence. But what about brain cells? Real life, actual brain cells, neurons? Kind of like using what’s already known to be capable of intelligence.
I’m bringing in Hon Weng Chong, CEO and co-founder of Cortical Labs with Andy Kitchen, the CTO, to talk about … is biological computing the future of AI? Welcome guys!
Andy Kitchen: Hi John.
Hon Weng Chong: Hey John.
John Koetsier: Glad to have you here. Talk to me … what did you build?
Hon Weng Chong: Yeah. So what we’ve actually built is a hybrid chip built around a CMOS (complementary metal-oxide-semiconductor) sensor, so it’s a silicon chip with a very fine mesh of electrodes. They’re about 17 microns in pitch and there are about 22,000 of them. And what we’ve done is we’ve taken live neurons, which we’ve either extracted from mouse embryos or differentiated from stem cells, and grown neural networks on the actual chip surface. These neurons start forming synapses, and they then start to sort of hybridize with the actual silicon surface. And because these are electrodes, we can see the electrical activity and also apply a stimulus, a bit of a voltage, and in a sense we now have a read and write interface into a biological substrate.
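To make that read-and-write idea concrete, here is a minimal Python sketch of what driving such an electrode array could look like. Cortical Labs has not published this interface, so the ElectrodeArray class, its methods, and the thresholds below are hypothetical, purely illustrative stand-ins.

```python
import numpy as np

# Hypothetical interface to a microelectrode array (MEA). The real hardware
# API is not public; every name and number here is an illustrative assumption.
class ElectrodeArray:
    def __init__(self, n_electrodes=22_000, pitch_um=17.0):
        self.n_electrodes = n_electrodes  # electrodes in the grid
        self.pitch_um = pitch_um          # spacing between electrode centers

    def read_activity(self):
        """Read one voltage sample per electrode (the 'read' half)."""
        # Placeholder: real hardware would stream amplified voltages here.
        return np.random.normal(0.0, 5e-6, self.n_electrodes)

    def stimulate(self, electrode_ids, amplitude_uv=100.0):
        """Apply a small voltage pulse to chosen electrodes (the 'write' half)."""
        # Placeholder: real hardware would drive stimulation circuitry here.
        print(f"stimulating {len(electrode_ids)} electrodes at {amplitude_uv} uV")

mea = ElectrodeArray()
voltages = mea.read_activity()
# Treat electrodes whose voltage crosses a threshold as sites of firing neurons,
# then write a stimulus back to a few of them.
active = np.where(np.abs(voltages) > 1e-5)[0]
mea.stimulate(active[:8])
```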
John Koetsier: So this seems super science-fictiony right? I mean, actual brain cells, and I know you’re getting them from really, really, you’re not stealing them from somebody’s brain.
Hon Weng Chong: No.
Andy Kitchen: That’s what we’re telling investors anyway.
John Koetsier: What’s that, sorry?
Andy Kitchen: That’s what we’re telling investors. We’re not stealing any brains.
John Koetsier: Excellent, very good. You are not zombies. Wonderful. But why are you doing this?
Hon Weng Chong: Yeah. So for us, we did a bit of research quite a few years back, looking at the AI space, and of note was the call to action by folks like Demis Hassabis from DeepMind, and Geoff Hinton in Toronto as well, about re-engaging with neuroscience. They were calling for AI researchers to look at what has been done in neuroscience, learn from some of the work that’s come out of it, and incorporate it back into AI.
We took that a little bit too literally and went all the way into the neuroscience space, because of the work that was coming out from some of our colleagues in Japan, at an institute called RIKEN, where they were able to get these neurons to perform a basic computational task called blind source separation. We were blown away by that kind of research and we thought, you know what? This is probably going to be something really big if we can show that it can do more than just blind source separation, that it can do a lot of other tasks for us. So that’s part of the reason why we decided we wanted to build this, because we said, ‘Look, the only machine, or the only thing, that we know of that actually has true intelligence is the brain.’ And the brain is made up of these mini organoids, which are made up of neural networks, and then you have the neurons; that’s the hierarchy. And somewhere along that hierarchy we start getting these amazing things like intelligence, consciousness and so forth. So for us, we said, ‘Let’s start with the basic building blocks, the neurons, and let’s build our way up and maybe we’ll get there along the way.’
John Koetsier: So it’s super interesting. Let’s dive into that just a little deeper. How do you attach an actual living neuron to a silicon chip? What are the connection points and how do you do that? And how do you keep it alive?
Andy Kitchen: Yeah. So the system we use, as Hon was saying, is these microelectrode arrays, which are really grids of microscopic electrodes. And what you do is, to put it in as simple a sense as possible, a bit like smearing peanut butter on a piece of toast, right? You take these neurons and neural progenitor cells and you smear them on top of this electrode grid, and there are certain binding chemicals as well which help them stick better. And these neurons are so physically close to the electrodes that when they fire you can pick it up. You need very sensitive electrodes, but you can set it up that way.
John Koetsier: It’s interesting. I mean, it’s a little more sophisticated than Frankenstein, but then you say, “just smear it over like peanut butter” and it doesn’t sound so sophisticated anymore.
Andy Kitchen: You need a PhD to do that though. This is PhD-level peanut butter smearing.
John Koetsier: PhD-level peanut butter, that’s wonderful. Okay. Talk to me a little bit about what you’ve actually built. You taught your neurons how to play ping pong, is that correct?
Andy Kitchen: That’s a process that we’re working on, yeah. So basically, imagine this: you’ve seen the film The Matrix, we’ve all seen it, it was the late 1990s. We’ve sort of built the “Matrix 0.1 Alpha.” You can imagine that we have what we call a “closed-loop stimulation system.” That means that when neurons fire they cause some change in a simulated environment, and that simulated environment then creates a stimulus for these live neurons, and that’s how you kind of connect up the pong matrix.
So then we need to shape their behavior to actually do something, and that’s where a lot of our development and secret sauce is. But in essence, just as any other learning happens through a series of stimulus-response cycles, like learning to ride a bike, we create a very specialized stimulus-response cycle in order to induce a specific behavior that we care about.
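As a rough illustration of that closed loop and the stimulus-response shaping Andy describes, here is a hedged Python sketch: neural activity is decoded into a paddle move in a toy Pong-like environment, and the resulting game state is encoded back into a stimulation pattern. The read_spikes and stimulate callables and the encoding rules are assumptions made for illustration, not Cortical Labs’ actual software.

```python
import random

class PongEnvironment:
    """Toy stand-in for the simulated game the neurons are wired into."""
    def __init__(self):
        self.ball_y, self.paddle_y = 0.5, 0.5

    def step(self, action):
        # Move the paddle up or down, drift the ball, and report how far
        # the paddle ended up from the ball (the "error" to feed back).
        self.paddle_y = min(1.0, max(0.0, self.paddle_y + 0.05 * action))
        self.ball_y = min(1.0, max(0.0, self.ball_y + random.uniform(-0.05, 0.05)))
        return abs(self.ball_y - self.paddle_y)

def decode_action(spike_counts):
    """Toy decoder: compare firing in two electrode groups to pick a move."""
    half = len(spike_counts) // 2
    return 1 if sum(spike_counts[:half]) > sum(spike_counts[half:]) else -1

def encode_stimulus(error):
    """Toy encoder: the worse the play, the stronger the feedback stimulus."""
    return {"amplitude_uv": 50.0 + 500.0 * error, "n_pulses": 3}

def closed_loop(read_spikes, stimulate, n_cycles=100):
    """One stimulus-response cycle per iteration: read, act, feed back."""
    env = PongEnvironment()
    for _ in range(n_cycles):
        spike_counts = read_spikes()                   # neurons -> activity
        error = env.step(decode_action(spike_counts))  # activity -> game change
        stimulate(encode_stimulus(error))              # game state -> stimulus

# Example run with stub hardware functions in place of a real culture:
closed_loop(lambda: [random.randint(0, 5) for _ in range(16)],
            lambda stim: None, n_cycles=10)
```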
John Koetsier: Very, very interesting. So I just talked to Intel’s director of its Neuromorphic Computing Lab, right? And neuromorphic computing…
Andy Kitchen: Our sworn enemy hahaha.
John Koetsier: Excellent. So neuromorphic computing, of course you know, you’re kind of replicating neurons in a digital form. Can you talk a little bit about that approach versus this approach?
Hon Weng Chong: Yeah, I think there’s a spectrum, right? If we look at artificial intelligence as we build it today, using artificial neural networks, that’s running on preexisting silicon devices, mostly GPUs, so the Nvidia and AMD stuff. There’s not very much difference there. Then you start going down the hierarchy and you end up with neuromorphic computing, which is, I guess, silicon that’s trying to mimic more biological aspects, things like spike trains and so forth.
So those kind of have properties that we can see in neurons, and then we go all the way down to, I guess, the next level: we’re actually just going straight to the neurons and using them. And the thing about it is, not to diss Intel or any of the people working on neuromorphic computing, I think it’s amazing work, but there’s a lot more that we don’t know about how neurons work. It’s amazing how these blocks of carbon and protein form together and are able to produce computation. There are a lot of properties that we don’t seem to be able to replicate yet, and we’re still learning a lot of new things about how these neurons work.
So I think it’s like taking a snapshot of what New York City looks like from 15,000 feet: when you actually zoom straight down into, say, Times Square, it’s completely different, because you’ll see a lot of things you didn’t know existed. So for us, we think it’s great work that they’re doing, and some of what we’ll be learning will also feed back into the neuromorphic computing space. But for us, we thought, you know what? Let’s understand the limitations of what we know, go straight to the source, and say, ‘Let’s just use the same biological substrates and work our way up from there.’
John Koetsier: So the latest version of Intel’s Loihi chip, which uses this neuromorphic computing architecture, or can be used to build a structure that is neuromorphic, implements about a hundred thousand neurons per chip, and the largest system they’ve built links chips together for about a hundred million neurons in total, which I believe is roughly three orders of magnitude fewer than what we actually have in our brains.
How many neurons are you actually kind of putting into chips right now?
Andy Kitchen: Yeah the systems we’re currently… sorry Hon… the systems we’re currently building, depending on the density, are tens of thousands to hundreds of thousands of neurons, and that’s brain chip one. So a lot of what we have in our roadmap is scaling that up to millions of neurons.
Although I will add that the neurons you would have in a neuromorphic system aren’t exactly equivalent. Simulating a neuron at a high degree of fidelity is expensive; we know, for example, that some of the biggest supercomputers in the world have been built to simulate millimeter-cube volumes of brain tissue. So simulating everything a neuron does is very difficult. It’s not a direct comparison, but we would say hundreds of thousands or millions of biological neurons are certainly more powerful, have more latent power, than an equivalent silicon system.
John Koetsier: Very, very interesting. Are there… go ahead.
Hon Weng Chong: Sorry, to add to that point as well, it’s amazing to see what kind of properties emerge from even just several hundred to a thousand neurons. I mean, what is it, C. elegans, the worm? That only has a few hundred neurons, and it’s able to exhibit interesting behavior.
And then as you move up the hierarchy, you end up with things like flies and insects, dragonflies.
I mean, swatting a fly is pretty hard; they’re pretty hard to kill, they get around, they do things really well. So, with just a handful of neurons, you’re able to see very intricate, advanced behavior emerge. And then as you move up the hierarchy, you get to things like small mammals and mice. I mean, I just saw a video a couple of months back where I think the folks from Virginia Tech were showing these mice driving little cars. I was like, wow!
So you know, it’s really a hierarchy, and I think it’s somewhat of an exponential increase: the more you put in, the more computation and the more intelligence you get out of it.
John Koetsier: So if somebody does mice NASCAR, I’m all into that. Or mice Formula One, I’m all into that. Sign me up, I will sign up for the pay-per-view subscription service.
Andy Kitchen: Well John, we’ve got to do dish brain e-sports first. So you’ll see…
John Koetsier: Looking forward to it. Project yourself out a little bit, and say somebody is actually working on writing a program or an application, an AI application, using your chip some years from today. What kinds of technologies will they use? How will they work with it?
Andy Kitchen: Yeah, so we have a pretty elaborate and complex roadmap for that, but essentially the systems we build are based on creating this structured stimulus sequence which is interactive, and that’s what we see as the basis of biological learning: learning to ride a bike, learning even to walk around.
There’s a huge amount of constant feedback involved in the Hebbian ‘fire together, wire together’ process; you can essentially only fire together and wire together if you’re somehow embodied. So the premier way would be to describe your task somehow, probably through some sort of very high-level language, and then we would turn that into a stimulus sequence which would shape biological behavior to fit your specification. And the level of automation there is one of the things we will use computer science, and a lot of what you would call regular artificial intelligence, to achieve as well.
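To picture that “describe your task in a high-level language, then turn it into a stimulus sequence” idea, here is a deliberately simple, hypothetical Python sketch. Nothing in it reflects Cortical Labs’ real tooling; the spec format and the compile_task function are invented purely for illustration.

```python
def compile_task(spec):
    """Toy 'compiler': expand a declarative task spec into a stimulus plan,
    i.e. a list of trials a closed-loop system could deliver to the culture."""
    trials = []
    for behavior, feedback in spec["behaviors"].items():
        for _ in range(spec.get("repetitions", 10)):
            trials.append({
                "present": behavior,   # pattern shown to the culture
                "feedback": feedback,  # stimulus delivered after the response
            })
    return trials

# Hypothetical high-level description of "learn to track the ball in Pong".
plan = compile_task({
    "behaviors": {
        "ball_tracked": "predictable_feedback",
        "ball_missed": "noisy_feedback",
    },
    "repetitions": 50,
})
print(len(plan), "trials in the stimulus plan")
```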
John Koetsier: So that’s super interesting, because as you program systems right now, you use a high-level language which often gets compiled down to machine code, machine-level language, which is more efficient on the bare metal, so to speak. So you’d almost have a biological code, in some sense, that interprets the high-level code for your biological neurons?
Hon Weng Chong: Well, I think it’s hard to get your head around because it’s a paradigm shift, right? What we think is going to be really amazing, and we’ve seen this with some of our chips as well, is the fact that these neurons rewire; they actually reprogram themselves in order to solve the particular task. And it’s the same thing for us as humans, right? Or even a dog: you teach your dog how to play fetch, you’re changing its environment, you’re changing the stimulus, and it’s adapting to it. You’re not reprogramming it; you’re not going in there and rewiring it.
Andy Kitchen: That’s right, but you’re not programming each little step, right?
Hon Weng Chong: Correct, yeah.
Andy Kitchen: It’s not like programming today. It’s kind of like the difference between programming in Python or C++ and doing some sort of machine learning task: you specify what you want to achieve, but you don’t specify all the steps of how to do it in fine detail. And that’s where the self-organizing part of using biological neurons comes in; to a high degree you want them to wire themselves to solve the problem.
John Koetsier: Very, very interesting. And that has profound implications for what it means to program a system down the road, if there’s this level of self-learning. I mean, one of the things Intel is working on is trying to develop AI with way fewer, way smaller training datasets, right? And what you’re saying is, ‘Hey, you’re going to actually program the system and it’s going to learn on its own how to do the job that you want it to do.’
Hon Weng Chong: Exactly. I think that’s the biggest paradigm shift, in the sense that maybe programmers will be redundant in the future because of this. But more to the point, it’s probably going to be something that is required for robotics to really excel. Robots run on computer programs as well, but unlike computer programs that live in a very deterministic world, where every rule is set and you can see what the future looks like, the world that we live in is highly variable. We don’t know whether a car is going to skid down the road and hit us.
But it’s got to plan and predict for all these things.
And so, having a system that learns by itself and reprograms itself in this environment is going to be really important for robots to actually operate in the real world. So I think that’s how we’ve come around to this, and this is the reason why we try to build environments in games like Pong and so forth, so that we can show that these things learn. There’s a sort of universal learning algorithm backing it, but it’s the environment that changes.
Andy Kitchen: I think one analogy which is really good is looking at it as a sort of spectrum. On one hand you’ve got CPUs as we know them; that’s the von Neumann architecture, so that’s RAM, registers, an instruction set.
And then on the other side you have the work we’re doing, which is using what you’d call completely analog biological systems. And I think neuromorphic is somewhere in between: it uses the same basic technology as von Neumann architectures today, but tries to make it a little bit more connectionist. So you’ve got von Neumann on one end, connectionist on the other, and we see ourselves as pushing way over to the connectionist side.
John Koetsier: Super interesting. How far away is this from something that you could ship as a development chip, and perhaps how far away from something that would be shipping and publicly available?
Hon Weng Chong: Yeah. So there are quite a lot of technical challenges and hurdles that we’re still working on at the moment. One of the key components is having a sort of artificial life support system, because we’ve got to keep these things alive for a year or so. So we’re working on a perfusion circuit that will circulate clean, healthy media through the system while also transporting out the waste materials. In a sense, it’s an artificial life support system, like the Darth Vader kind of thing, but for the neurons. And one of the reasons we want to do that is so we can extricate these neurons from the laboratory environment and they can start being embedded into lots of different things, like data centers, cars, robots, and so on.

But for us, the roadmap is really to showcase these things working, and then open up our laboratory remotely to researchers around the world so they can try their hand at building environments for our systems. So we kind of take the same model the quantum computing folks use: the machines are really hard to replicate, initially at least, so using cloud computing and plugging into something like AWS or Azure made a lot of sense for researchers to get their hands dirty without having to touch any wetware biological substrates.
John Koetsier: Super interesting. I mean, at some point it may actually make literal sense to say my computer died. You may have to have some kind of life support system in place, and you might have neurons that age out, so you have to replace individual neurons and retrain them and their pathways. It’s kind of mind-boggling what this could lead to. But project yourself out, and I don’t know if three to five years is the right time frame, but project yourself out to a time when you’re actually shipping hardware and there’s good translation software, so somebody can code in just about any language and it translates into instructions that work. What kind of systems, what kind of products, what kind of applications do you envision being possible?
Hon Weng Chong: Yeah, I think it’s a really great question, and I think this is something that is limited only by the imagination, really. Andy and I, and maybe a lot of people, your viewers as well, are driven by science fiction. So we see a lot of applications, from robotics to data centers. The applications for these things would be things that require planning, that require operation in the real world. Anything that requires fluid intelligence, which is the ability to move from one problem set to another problem set fluidly, would be an excellent application for this. But having said that, it’s very hard for us to know where this could go, because we work on the base technology. Just as when the first transistor was built, nobody really imagined it was going to form the backbone of the internet, or that we’d be able to do this call around the world, right? I guess the first thing they were thinking was, well, maybe we’ll make this thing switch really quickly so we can break some code and win the war. But there are a lot of other applications; you’ve really just got to get the technology into the hands of very talented, creative people for them to apply it.
John Koetsier: Super, super interesting. I can totally see that and I agree with that. Also, as you see robots getting integrated more into manufacturing, more into the smart home, more into the smart city as well, there are a lot of opportunities to be in complex, diverse environments where you don’t really want to train a robot on every little thing that might come around. You want some self-learning capability. Could be super, super interesting.
Hon Weng Chong: Oh yeah.
John Koetsier: So what’s your next challenge? What’s the status right now and what’s your next major challenge?
Andy Kitchen: Yeah, so the status right now is that we have this world-class neuroscience lab that’s now pretty much fully operational. We have our Matrix 0.1 Alpha software. We have done an experimental data series we call “Genesis,” which is about a hundred hours of fine-grained data collection of neurons learning as they’re hooked up to the system, and we have some really promising results from that. So we’re getting play which is, depending on how you measure it, statistically significantly above a simple random baseline. So we’re seeing some kind of behavior shaping, but we really want to get it to the point where a lay person looking at it goes, ‘Wow, it’s really playing,’ as opposed to, ‘We can detect a behavior change.’
So I think our next major challenge this quarter is an expanded, rigorous data collection push, plus on top of that really increasing the performance and tuning the system we currently have. And then into the future it’s scaling up; scaling up is clearly very important to us. So I’d say our two biggest challenges would be really bedding down what we have and producing something unambiguous, where you can just watch it playing on, say, a video stream on Twitch and say, ‘Hey, whoa, it’s playing.’ And then really scaling up to three-dimensional structures with actual three-dimensional electrodes, so we’re not just limited to this 2D plane. I think that would be the next big thing we really want to push towards.
John Koetsier: Super interesting. And you have challenges unlike almost anybody else in the field of AI or hardware, because you’ve not just got the technological components, you’ve got a biosphere to create, maintain, and support. Thank you for joining us. I really appreciate hearing from you about what you’re doing. Super, super interesting work.
Hon Weng Chong: Thank you, John.
Andy Kitchen: Thank you, John. Thanks for having us.
John Koetsier: It’s been a real pleasure. Thank you as well to everyone who has been watching with us, and thank you for joining us on The AI Show. Whatever platform you’re on, please like, subscribe, share, comment. If you’re listening to the podcast later on, please rate it and review it; that’d be great. Thank you so much. And until next time, this is John Koetsier with The AI Show.