Our brains are 1 million times more efficient than ChatGPT: chatting with Gordon Wilson of Rain AI

Photo by Alina Grubnyak on Unsplash

The wetware in a casket of bone that we each carry on our shoulders is 1 million times more efficient than the AI models run by services like ChatGPT, Stable Diffusion, or DALL-E.

In this TechFirst with John Koetsier we chat for a second time with Gordon Wilson, CEO of Rain AI, which is building a neuromorphic artificial brain that simulates the structure of our biological brains, aiming at 10,000 to 100,000 times greater energy efficiency than current AI architectures.

We also discuss “mortal computation” and a radical co-design of the hardware and software for AI systems, which could lead to much more efficient (and more effective) smart tools, machines, and companions.

Scroll down for the full video and to subscribe to the podcast, or check out my story on Forbes.

Watch: 1 million X more efficient than ChatGPT

(Subscribe to my YouTube channel)

Subscribe to the TechFirst podcast:

Transcript: artificial brains, ChatGPT, and efficiency

(This transcript has been lightly edited for length and clarity.)

 John Koetsier: Hello everybody and welcome to TechFirst. My name is John Koetsier. I have a super special and short episode for you today. We’re chatting with Gordon Wilson, he’s the CEO of Rain AI, which… they’re building a brain. It’s neuromorphic computing, they’re building a brain. 

Now, there is a challenge here. I’m actually talking to him while he’s on his way out just before the Christmas break, to go on vacation. He’s in an airport lounge in SFO, San Francisco, and the audio is not great. So I’m gonna put text on screen. If you’re listening on the podcast, it is possible to make it out, but it might just kill you. So, go to YouTube. Subscribe on YouTube. 

This is not a ploy to get you to subscribe on YouTube, but if you do, great. Listen to it there, watch it there, and you’ll have the words on screen. It won’t be perfect because it is a machine transcription and guess what? Machines aren’t perfect yet. But it is amazing, we’re talking about the cost of AI, the cost of running things like ChatGPT, and the fact that our brains are still one million times more efficient. Enjoy.

John Koetsier: We’re seeing miracles daily. Generative AI is kind of in a golden age. We see images that startle us from Stable Diffusion, text that looks almost human-written from ChatGPT, video from other tools, 3D point clouds from Point-E, just announced yesterday. But what’s the computational cost of all this magic?

Rain Neuromorphics makes an artificial brain. It’s an analog computer. It’s 10,000 times more efficient than some machine learning algorithms on Nvidia GPUs. Is analog the future of AI? Welcome, Gordon. 

Gordon Wilson: Thank you so much for having me here, John. Always a pleasure chatting with you. 

John Koetsier: Always a pleasure. Thank you for taking time. Where in the world are you? You’re in an airport lounge, is that correct? 

Gordon Wilson: I am. I’m at SFO. I’m on my way down to Los Angeles for tonight and then to Costa Rica where I’ll be spending a week. But it seems I can’t fully disconnect ever. I’m bringing my work with me, but excited to have a little bit of tropical warmth over the winter.

John Koetsier: Excellent. Okay, so we got some background noise. We’ll take care of that later in post-production, hopefully. No worries whatsoever. Let’s start here. Performance costs of AI. We’ve heard from OpenAI, for instance, that eventually they’ll have to start charging for ChatGPT. They said the bills are astronomical. What are the costs here? 

Gordon Wilson: To run ChatGPT, we’re talking on the order of millions of dollars a day, and it’s similar for other models. I mean, you mentioned that we are in this golden age of generative AI. Just over these last few months, you know, between DALL-E 2, and ChatGPT, and Stable Diffusion, these are models that are incredible and they’re capturing our imagination, but they are staggeringly expensive.

You know, the costs here are primarily data center compute. You have racks and racks of GPUs and CPUs that are used to train neural networks, to run these types of models, and to create these types of dreamy landscapes or visuals or stories.

But of course, this is extraordinary, as you mentioned, and the costs are massive. And fundamentally, to say the costs are extraordinarily high, well, I think perhaps we should have a metric of comparison, right? They’re expensive compared to what, right? This is brilliant. This is magical. Like, it might be worthwhile to spend $3 million a day to run ChatGPT, but … what is the metric of comparison? Right?

So for us, at Rain, the metric of comparison for the cost of artificial intelligence is the cost of intelligence, right? The cost of biology. And our brain doesn’t require being plugged into a data center’s worth of compute to draw something. We don’t require megawatts of energy to write an essay on our own.

Digital AI, deep learning, the things that Stable Diffusion, ChatGPT, and DALL-E 2 were all built upon, is still on the order of a million times more expensive to run. There’s about a 1 million X gap in cost between that extraordinary AI that we see today and what we know the brain is capable of. So…

John Koetsier: Interesting. So just to make sure that I got that, you think you’re a million times more efficient at running those AI models than what we’ve got right now? Is that correct? 

Gordon Wilson: Our brain is a million times more efficient. I should clarify. That is the metric of comparison. 

John Koetsier: Oh, I thought the brain you were talking about was the Rain AI brain, but you’re talking about wetware. 

Gordon Wilson: Wetware … because that’s the comparison. That’s the north star, right, that guides us at Rain. You know, we’re trying to take clues from what the brain has achieved and then build that into hardware. So, what we have demonstrated at Rain is moving in that direction. 

You know, I think we just recently published an article in Nature Electronics, which was a demonstration of this new flavor of very brain-like algorithms running on a new type of hardware as well.

That hardware is memristors. And in this paper, we were projecting between 10,000 and 100,000 X greater energy efficiency than training the equivalent models with backpropagation on graphics processing units, on GPUs. 

So, there’s a lot of numbers I threw out there. You know, the 10,000 to 100,000 X is from our Nature Electronics paper, but the 1 million X remains that gap between artificial intelligence today and biological intelligence. 

John Koetsier: One begins to understand why in The Matrix the AI is plugged into human brains.  

Gordon Wilson: Yeah. They are extraordinary machines and extraordinarily efficient, especially at what they can do. You know, I always like to talk about scale and efficiency, right? The brain has achieved both. And typically, when we’re looking at compute platforms, we have to choose. You know, do we either want the scale of compute that can support the creativity that we see in Stable Diffusion, or do we want something efficient enough that we can deploy it onto our cell phone without communicating to the cloud?

And right now, that’s the dilemma that we have with AI hardware and AI platforms: we can either choose this massive, robust scale, but it requires data centers’ worth of compute. Or, we can deploy little models, very, very compact models, and usually only inference, to the edge. And that’s a trade-off we don’t wanna have to keep making.

John Koetsier: Right. Let’s take a step backward. Talk about what Rain AI is building and where you are right now, how it’s different, and what you’re doing that is gonna be so much more efficient. 

Gordon Wilson: Absolutely. So, I can start with: we are building artificial brains, and we call ourselves the “artificial brain company.” It’s kind of our new tagline we’re going with. And what is an artificial brain? Well, we compare it to a biological brain. 

A brain is a platform that supports intelligence. And a brain, a biological brain, is hardware and software and algorithms all blended together in a very deeply intertwined way. An artificial brain, like what we’re building at Rain, is also hardware plus algorithms plus software, co-designed, intertwined, in a way that is really inseparable. You know, the computers that we’ve been using for the last 60 years are von Neumann machines, and they were built off of the fundamental separation of memory and processing, but also, ultimately, the separation of hardware and software.

John Koetsier: Mm-hmm. 

Gordon Wilson: You can build a program, write a program, and that program, the memory of that program, can survive the hardware dying. And the brain is not like that. Biology is not like that… 

John Koetsier: Unfortunately. 

Gordon Wilson: Unfortunately not. Certainly not yet. And that’s because there’s no separation, right, between hardware and software in intelligence as we see it. And so with an artificial brain, you also have to make that trade-off.

You have to combine hardware and software. Co-design them together. A phrase that I really love from Dr. Katie Schuman at Oak Ridge is “radical co-design.” She’s a leading neuromorphic researcher. 

We have to radically co-design hardware and software and algorithms together to be able to achieve these types of multiple-order-of-magnitude gains, which are the kind we demonstrated in Nature Electronics.

John Koetsier: Now, historically, the challenge with designing hardware and software together and building them together has been that you built something that was purpose-built. You built something that could do one thing or maybe two things, but wasn’t general purpose. 

Now, our brains, which you’re modeling this after, are exactly what you’re talking about, but they’re very general purpose. I mean, we can do art, we can do higher mathematics, we can waste time on a mobile game. We can do a lot of different things. What is your artificial brain going to be capable of?

Gordon Wilson: So, on a long time horizon, our roadmap is to ultimately build a general-purpose artificial brain. In the near term, we’ll be building for more specific applications, because we can’t solve for every use case all at once.

But the brain, again, gives us proof, because there is this portion of the brain that is, in evolutionary time, very new: the neocortex. And it has the same structure repeated across it, these tall cortical columns of about 11 layers. And the neocortex, somehow, even though it’s the same structure, supports vastly different types of intelligence.

It supports vision, it supports hearing, it supports natural language and higher-order reasoning. So that is evidence enough for us that we know there is an architecture that is general purpose. It already exists in biology. But for us, initially, our brains will not be solving every problem; what all of the brains that we are building will do, critically, is enable efficient learning.

So, why efficient learning? Right? Well, first of all, that’s true of all biological brains as well. They can all learn, they can all adapt, and they can all do so with such low power that they fit inside of an animal body. So, this goes back to the trade-off I mentioned that we face today with our options for hardware.

People either have the option of vast scale in a data center with racks of GPUs, or they have the option of efficiency and of deploying very small models. But that deployment is limited to inference. And inference is not learning. It’s not training. The learning portion of it is so expensive that it’s stuck in data centers.

So, the problem that we’re solving is this question of efficient learning.

How can we make training so cheap and so efficient that you can push that all the way to the edge? Because if you can do that, then I think that’s what really encapsulates an artificial brain. It’s a device. It’s a piece of hardware and software that can exist, untethered, perhaps in a cell phone, or AirPods, or a robot, or a drone. And it importantly has the ability to learn on the fly. To adapt to a changing environment or a changing self.

Remember we chatted about that last time. But that is a critical requirement of all artificial brains that are on our roadmap, that they all have this ability to learn. 

John Koetsier: I’m just sitting here right now and I’m kind of laughing inside because you’re in a public lounge, in an airport, and you’re talking about artificial brains, and I’m wondering who’s listening, who’s hearing that and wondering, “Oh, we’ve got Dr. Frankenstein here flying out of SFO?” Hopefully not too many people. 

Okay. So, you’re building something very cool and super ambitious. When can a company like OpenAI come to you and say, “Hey, give us 10,000. We wanna plug ’em in on the back end.” 

Gordon Wilson: So, that’s still a few years out. Unfortunately, or fortunately rather, it just takes time to build something that’s this radically different and this much better. And initially we intend to support intelligence in new places where no one else is providing a solution, as opposed to going to market to compete directly with Nvidia. You know, our solution is really offering something brand new: the ability to push training, again, to these edge locations.

So, there are a lot of interesting use cases that we’re gonna be tackling in the near term, say, in industrial manufacturing or in robotics, where you have a machine that wants to learn on the fly, maybe adapting to a changing environment, to changing conditions, or to the degradation of that machine. And that’s where we’re gonna be plugging that in first. But to get to 10,000-unit orders, 100,000-unit orders for OpenAI, we’re still a few years away from that. 

John Koetsier: Super interesting. I see so much potential for that. We keep talking about healthcare with aging populations, whether that’s Japan, whether that’s the United States, whether that’s Europe or anything like that, but you need smart help, and we don’t have the people for it. I mean, even a dog will be smart and adapt to its companion, its owner, right? And having machines like that that can help people as well is great. 

Super interesting. I know you’ve gotta fly. You’re heading off for Christmas. Thank you so much for taking this time. Is there anything else that you wanted to hit us with?

Gordon Wilson: There are a few more quick updates I wanted to share with you all. So, you know, we have been on this approach to building neuromorphic hardware and on this path to build artificial brains for a long time. This is our sixth year at Rain, and for a long time, I think we’ve been quite contrarian. But in the past few months there have been a few moments in the broader conversation that are worth mentioning.

So, Geoff Hinton, who you’re probably familiar with, he’s one of the godfathers of deep learning. He works at Google right now. He gave the closing keynote at NeurIPS, which is one of the largest machine learning conferences in the world. And in that talk, he spoke about the need for a new substrate of hardware, for something that’s cheaper, that’s more efficient. And he mentioned analog and neuromorphic, and he used a term that I’d never heard before, which I think he coined: “mortal computation.” And that was the idea that we have to give up immortality, right? We have to give up the idea that, you know, we can save the software, we can save the memory of the system, after the hardware dies.

So he was actually talking about this blending of hardware and software that we build at Rain. And what was really cool is that he released this paper on a new algorithm called the “Forward-Forward Algorithm,” which is meant to be compatible with this kind of new hardware. And in the paper, which we looked up, I think, that same day, there was one reference to hardware that it could be compatible with, and that was our work from 2020 with Yoshua Bengio.

John Koetsier: Wow. 

Gordon Wilson: So, we were very pleased. We’re beginning, you know, to explore this collaboration with Geoff. But to see him validate our approach, and validate this need for a new substrate for compute, and to move in the direction of neuromorphic and analog, was very, very exciting to see. 

John Koetsier: Mortal computing. It just brings up so many possibilities. Machines with personalities, machines that become more than they are when they ship, machines that adapt to you and become part, I mean, wow. You start thinking about life and artificial life and we can get very, very deep into that. I know you gotta fly. Thank you so much for taking the time. 

Gordon Wilson: Appreciate it, John. Thank you so much. 

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: