Communication for all: Upgrading Stephen Hawking’s communication system with AI and GPT-2

Lama Nachman is an Intel scientist who built Stephen Hawking’s communication system. Now she’s helping another scientist and roboticist, Peter Scott-Morgan, who has motor neuron disease (the family of conditions that includes ALS, or Lou Gehrig’s disease), to live and communicate with a more advanced version.

In this TechFirst with John Koetsier, we chat with Lama about what she’s building. Her system, called the Assistive Context-Aware Toolkit (ACAT), uses gaze control and AI to control a computer that allows him to talk, write, control his environment, and retain some measure of independence. Most of the technology is open source, and the next version, which senses brain waves, uses only a few hundred dollars’ worth of equipment.

Scott-Morgan’s vision is to use AI and technology to essentially cyborg himself (eventually, perhaps, with the help of a robotic exoskeleton). Nachman is using AI, including GPT-2, word prediction, and more, to help him communicate.

Sometimes the result isn’t just him or just the system, but a combination of both.

Get the full audio, video, and transcript of our conversation below … 

Subscribe to TechFirst: using AI and GPT-2 to enable communications for ALS and MND patients


Watch: using AI and GPT-2 to enable communications for ALS and MND patients

Subscribe to my YouTube channel so you’ll get notified when I go live with future guests, or see the videos later.

Read: using AI and GPT-2 to enable communications for ALS and MND patients

John Koetsier: Will AI help you cyborg yourself when you can’t be you anymore?

Welcome to TechFirst with John Koetsier. We often think of AI and cyborgs as something scary, right? Something sort of like the Terminator or something like that. What if it’s something that can actually help us be more ourselves? Even as we age, or maybe as we deal with degenerative diseases. Stephen Hawking used early systems like this to help him stay productive and to be able to communicate late in his life.

To learn more, we’re chatting with one of the people who helped build the tech that helped Dr. Hawking live and work, Intel’s leading AI researcher, Lama Nachman. Lama, welcome!

Lama Nachman: Thank you. I’m so glad to be here. 

John Koetsier: Excellent. And I just noticed that I said you’re Intel’s leading AI researcher, and you may very well be, but you were humble and you said “one of Intel’s leading researchers,” or at least your PR people did, so … 

Lama Nachman: Yeah, I’m definitely not Intel’s leading AI researcher, for sure. 

John Koetsier: Let’s start with Stephen Hawking. What did you build for him? 

Lama Nachman: So basically, Stephen really relied on his machine to pretty much do everything, right? So communicate, you know, to talk to people, but also surf the web, give lectures, all of these functions, right?

So he had no ability to, like, clearly use his hands or any of that, so the only muscle that he really had quite a bit of control over was his cheek. 

John Koetsier: Yes. 

Lama Nachman: So, the only thing that he could do is essentially move his cheek. 

John Koetsier: Wow.

Dr. Stephen Hawking

Lama Nachman: So the system that we built for him essentially has three parts. Two of them are what we built, and one is what he had before that.

One part was actually reading that cheek movement, and that was a proximity sensor that actually sat on his glasses — and you could see that actually in the picture that you have — and every time he moved his cheek, essentially the distance to the proximity sensor changed, and that is the equivalent of frankly just pushing a button with his cheek.

John Koetsier: Wow, wow.

Lama Nachman: Right? So that was the first part, which is extracting that push button signal.
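The “push button” extraction Nachman describes (turning a continuous proximity reading into discrete presses) can be sketched as a simple threshold-plus-debounce loop. This is a toy illustration, not Intel’s implementation; the threshold and debounce values are made-up assumptions:

```python
# Hypothetical sketch: turning a raw proximity-sensor stream into
# discrete "button press" events, the way the cheek sensor might.
# Threshold and debounce values here are illustrative, not Intel's.

def detect_presses(samples, threshold=0.5, min_gap=3):
    """Return indices where the signal rises above `threshold`,
    ignoring re-triggers closer than `min_gap` samples (debounce)."""
    presses = []
    last = -min_gap
    above = False
    for i, v in enumerate(samples):
        if v > threshold and not above and i - last >= min_gap:
            presses.append(i)
            last = i
        above = v > threshold
    return presses

# A cheek twitch shows up as a brief rise in the sensor reading:
signal = [0.1, 0.2, 0.9, 0.8, 0.1, 0.1, 0.1, 0.95, 0.2]
print(detect_presses(signal))  # → [2, 7]
```

Everything downstream, from menus to the word predictor, would then consume these press events as its only input.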

And the second part was essentially building a whole software platform on top of Windows so that you can control all of Windows from a push button. 

John Koetsier: From that one input? 

Lama Nachman: From that one input. So that really meant essentially you think about it as an interface that over time moves across different options. Of course these options are contextual based on what he’s trying to do at any point in time. And whenever the thing of interest is highlighted, he would actually push the button, and that meant everything that he had to do had to go through that type of an interaction. 

John Koetsier: How many things would he have to look at in order to, you know … is that in the dozens? Is that in the hundreds, before you find the one that he wanted to sort of click on? 

Lama Nachman: Yeah. So that’s actually where the UI really needs to be kind of more intelligent, right? So, essentially there are multiple things. I mean, you kind of divide, you don’t just do one basic level, right? 

John Koetsier: Yes.

Lama Nachman: You divide the interaction into multiple levels. You use a lot of contexts. So for example, what we ended up doing is incorporating a lot of options within menus that he can choose from. That would come out based on what specifically he’s doing at the moment. 

John Koetsier: Yeah. 

Lama Nachman: So that was kind of a big part of it. Like, how do you contextually change that? So you turn anything into a selection of a few options. And when you do that, you really dramatically change the performance.
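To see why splitting the interaction into contextual levels dramatically changes performance, consider the arithmetic of single-switch scanning. The numbers below are my own back-of-the-envelope illustration, not Intel’s figures:

```python
# Illustrative arithmetic (my numbers, not Intel's): with single-switch
# scanning, the interface highlights options one at a time, so selecting
# 1 of n options takes (n + 1) / 2 scan steps on average.

def avg_scans_flat(n):
    """Average highlights before the target is reached in a flat list."""
    return (n + 1) / 2

def avg_scans_levels(branching, levels):
    """Average scan steps for a menu tree: `levels` menus of `branching`
    options each, reaching branching**levels items in total."""
    return levels * avg_scans_flat(branching)

# Picking 1 item out of 100:
print(avg_scans_flat(100))      # → 50.5 steps in one flat list
print(avg_scans_levels(10, 2))  # → 11.0 steps as two 10-option menus
```

The contextual menus Nachman describes go further still, shrinking the effective option set at each step to the few choices that make sense in the moment.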

And as a result, one of the things that I think people don’t sometimes realize, is that Windows has this assumption that people can move a mouse. 

John Koetsier: Yes.

Lama Nachman, an Intel Fellow and AI scientist

Lama Nachman: And if you can’t move a mouse, like even if you start to really think through how many mouse clicks it takes to accomplish any function, it’s pretty amazing. I mean, even if you say, okay, I can automate this and it can bring up something to select the file. I mean, just opening a file used to take him about four minutes. 

John Koetsier: Wow! I mean…

Lama Nachman: Just opening one file. 

John Koetsier: If I think about it, I mean, there’s so much … I need to see the screen, obviously. I need to know where my mouse cursor is. I need to move it to a certain place. I might need to double-click into something. I might need to navigate a structure and then double-click into something else. And obviously when you can move only one muscle in your body… 

Lama Nachman: Exactly.

And the problem is he can’t, so essentially the mouse interaction was, imagine like a radar, you’re scanning ’cause you’re trying to select a point on a 2D selection, right? 

John Koetsier: Yes.

Lama Nachman: So you’re scanning all the way in this dimension, and then you start to scan in that dimension to actually click that one point. 

John Koetsier: Right.

Lama Nachman: So whenever he had to go to the mouse, that meant minutes of interaction. 

John Koetsier: Wow.

Lama Nachman: So what we really tried to do is never have him use a mouse.

Now, an interface that’s built on a mouse, trying to avoid the mouse — that was not trivial, for sure. But it meant a lot of automation under the hood.

So rather than think of any function as a series of mouse clicks, we just redesigned these functions as what they’re meant to be. If you want to open a file, you don’t need to click through 20 mouse selections; you just start typing the name of the file that you want, right? 

John Koetsier: Right.

Lama Nachman: And then we just did that consistently across all of the functions that one can think of, and so we won every time we had him not go to the mouse. 

John Koetsier: So, that was what you built for Stephen Hawking.

And we’ll probably come back to that and see what you would change, what you would do differently today. But you’re now working with a roboticist, Dr. Peter Scott-Morgan — and this is a picture of him right here — and he has something a little similar: motor neuron disease, which most people may know as ALS or Lou Gehrig’s disease.

What are you building for Dr. Scott-Morgan? 

Lama Nachman: So, just to explain a little bit kind of some context here. So when we built the system for Stephen, that system was really meant to be essentially a modular system that can be configured in very different ways, right? So the idea was that there was really this gap of having a system that people can build, configure, innovate on top of, etc. right?

So, you know, you build a specific configuration maybe for Stephen, but the idea is how do you take it to the rest of the world?

So what we did is we actually took it to open source and we, now, if you have somebody who cannot use their face, but can actually maybe move a finger or do something else, we can essentially map any type of movement through a very simple plugin, which will take you a day to do. And then all of a sudden, now you have the power of this whole system that we’ve built on top of that Windows platform.

So that was kind of the vision behind ACAT [Assistive Context-Aware Toolkit] when we built it, because you know, Stephen from day one also had the same thought. It’s like, how do we enable people to actually just use this more broadly? So then when Peter came along, really initially, when we started to think about what is really interesting within that space, having learned a lot from my experience with Stephen, Peter was kind of like on the other end of the spectrum, if you will, right? 

John Koetsier: Yes.

Lama Nachman: So with Stephen, Stephen was really kind of like, risk-averse, had this interface, didn’t want to change the UI. So we had to really improve his speeds dramatically with essentially, almost like my hands tied behind my back, right? I couldn’t change the UI, but I could change a lot of things under the hood and get them to do that.

With Stephen, he needed control, right? I mean, I always joke that — and it’s not a joke actually, it’s true — that Stephen wanted to predict his word predictor.

Like he was that type of a person, it’s like, I need to know every single, I need to control every single letter, right? 

John Koetsier: Yes.

Lama Nachman: Peter is the opposite end of that spectrum. He was essentially, his approach was, look, when I’m communicating with people, what I’m really trying to do is communicate with them quickly, right?

John Koetsier: Okay.

Lama Nachman: So, how do you reduce that silence gap? How do you, because right now, if you think about it, somebody says something and then that person finishes the sentence, or you can predict the rest of the sentence, and you start actually trying to enter your thought. There’s a huge silence gap before the system will now speak what you just dictated, right?

John Koetsier: Right. 

Lama Nachman: So what Peter was really interested in is, you know, when you were actually trying to maybe write a book, you want that, right? You want that control of every word.

But when you’re actually trying to have a conversation, it’s really about social connection. 

John Koetsier: Yes.

Lama Nachman: So can we reduce that time.

So basically Peter was like, let’s embrace everything that AI can do, right, to try to actually make the system much more efficient. So when we came in, we really were focused on this research thread, which is how do you make a system listen to the conversation and highlight things quickly so that you’re interacting with the system at a much higher level.

You’re not interacting with the system at the letter level, but at the sentence level or the word level, right?

John Koetsier: Mm-hmm, mm-hmm.

Lama Nachman:  But interestingly enough, when, so when I came into this, I wasn’t really thinking ACAT because Peter was going to use gaze control and, you know, there are all of these systems that use gaze today that I just assumed, okay, we’ll use one of them.

Like, we don’t have gaze in ACAT because we were trying to fill a gap, right?

John Koetsier: Yes.

Lama Nachman: But then what we realized is, ’cause there are all of these different researchers and all of these different companies doing different pieces of that huge vision that Peter had, right?

I mean, Peter wanted a system that retains personality, that has his own voice, right, so trained with his own voice; that had an avatar of him that got projected when he spoke, right; that actually had like an exoskeleton.

I mean, he had this whole vision. 

John Koetsier: It’s the full meal deal.

Lama Nachman: Right. Of all of these things, and he talks about Peter 2.0 as this, you know, really cyborg, right? So the problem is, well how do you do it? How do you control all of these things? So, the issue is that there wasn’t an open platform that allowed people to innovate, even though there are platforms that allow gaze control, you could use that to just communicate. 

John Koetsier: Yes. 

Lama Nachman: Right? But it’s not an open system that you can plug different pieces to it. So then we came back all the way around to ACAT to try to actually now bring that in for the sake of modularity and configurability, and being able to bring a lot of innovation from different places, ’cause it’s an open platform.

So then, what that meant is now all of a sudden we took a detour from all of the AI stuff that we were working on to actually bring gaze control to ACAT, right, to make him essentially, just when he gets out of the hospital, he has a system that he can use to communicate.

John Koetsier: Wow.

Lama Nachman: And that, so essentially that was kind of the first big deliverable, which is how do we get him something that is tailored for his use, that uses gaze control, that has an interesting interface that allowed him to be efficient and so on. And we spent quite a bit of time actually doing that and that system he is now using. That’s what he uses to communicate today, post his surgery, and clearly we continued the thread of the research on the response generation piece.

And so the idea of the response generation piece is that the system listens to the conversation, it is, you know, going to highlight options for you and then you can nudge it in different ways, right?

It’s not an automated chat bot that’s going to speak for you, because at the end of the day, you’re trying to speak something and then that would essentially make it faster. And that’s the research part of what we’re doing right now. 

John Koetsier: And can it go in different modes? So for instance, if Peter wants to write a book, like Stephen Hawking did, can he go in a different mode versus conversational?

Lama Nachman: Yeah, absolutely. And in fact, actually what’s interesting, even with Stephen when he, so, I mean, you can do certain things to optimize the system, right? In this case, it would be a totally different interface ’cause you’re not going to be interfacing at the letter level.

And you want to be able to go back and forth because what if the response generation system doesn’t give you what you want, right?

You want to also be able to go down to that next level down, but even with Stephen, what we’ve done actually is that we’ve trained different language models for the word predictor, based on the context of use.

So when he was talking to people, it was very different than when he’s trying to write a book. And specifically when he’s trying to write the book, the text of the book really needs to get much higher influence on the language model, right?

John Koetsier: Mm-hmm.

Lama Nachman: Because that’s, I mean, that’s how you want to optimize for the next word prediction. So a lot of what we’ve done with Stephen in terms of on the AI side was really just trying to optimize the word prediction, because you know, he wasn’t willing to go, ‘oh, just do a response generation’ but…

John Koetsier: Talk about the differences in technology. Obviously we have more technology right now. There’s a lot more emphasis on AI now than when Stephen Hawking was alive, even in the later years of his life. 

Lama Nachman: Yeah.

John Koetsier: How more advanced is the software and how much faster can you, for instance, write a book or have a conversation?

Lama Nachman: Yeah. So, I would say there is definitely, I mean if you think about it, when we started working with Stephen that was in 2011, right?

So, definitely the advancement in deep learning during this time, from 2011 to now, has been huge, and specifically in language. So, one of the things that today, you know, we can definitely build on is for example, GPT-2, right?

John Koetsier: Yes.

Lama Nachman: So we could actually easily fine-tune, take these models that are trained on millions and millions of pages, right, and then fine-tune that model for specific tasks that we want.

So, frankly, I mean, if you want to think about a response generation system that can listen to our conversation and give you an output, right, that is much more possible … even though it’s still a very hard problem. And I’ll talk about why this is a very hard problem.

But it’s still, I mean, you have a starting point, right, which we really didn’t have in 2011. So definitely from a technology standpoint, things have improved dramatically. But even word prediction, like, you know, when we started to use, I mean, his word predictor that he used to use before the system that we built for him, before ACAT, was even like something from even 10 years earlier, right?

And it was really a very simple, you know, bigram model that he used. So when we came into this, we actually worked with SwiftKey on this as well.

John Koetsier: Nice.

Lama Nachman: What we were able to show is that the performance of just the word predictor enabled him to go from having to enter more than like 25% of the letters to actually write a word, to less than 8%. 

John Koetsier: Wow, and this is Stephen Hawking.

Lama Nachman: Right. In terms of the amount of input, just by improving the word predictor, right? 
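One way such keystroke savings can be measured is by counting how many letters a user must enter before the target word shows up in the predictor’s top suggestions. The sketch below uses a tiny frequency-ranked vocabulary purely for illustration; SwiftKey’s actual models are far more sophisticated:

```python
# Toy sketch of measuring keystroke savings from a word predictor.
# VOCAB_BY_FREQ is a made-up, frequency-ranked vocabulary; a real
# predictor would use a trained language model.

VOCAB_BY_FREQ = ["the", "theory", "time", "universe", "black", "hole"]

def letters_needed(word, k=3):
    """Letters the user must type before `word` appears in the top-k
    predictions; selecting the prediction then completes the word."""
    for typed in range(len(word) + 1):
        prefix = word[:typed]
        predictions = [w for w in VOCAB_BY_FREQ if w.startswith(prefix)][:k]
        if word in predictions:
            return typed
    return len(word)

def keystroke_fraction(text, k=3):
    """Fraction of letters actually typed across a whole text."""
    words = text.split()
    typed = sum(letters_needed(w, k) for w in words)
    return typed / sum(len(w) for w in words)

# "universe" needs one letter ("u") before it enters the top 3:
print(round(keystroke_fraction("the theory time universe"), 3))  # → 0.048
```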

John Koetsier: Nice.

Lama Nachman: To get to kind of just the latest word prediction, and that was, you know, 2013. I mean, the technology has improved dramatically since then as well. And this is why I was joking: when we did that, and this is where, when you think about these systems, you really absolutely have to be thinking about the human aspect of this, and what does the interface look like, and the UI, and what are the assumptions of the interaction?

Because initially when we did this and we got a much better word predictor, Stephen just totally rejected it. He’s like this doesn’t work. And I’m like, what do you mean? So then eventually I took a text from his book, right, and I basically literally ran it through his old one and his new one, and showed him that he needed to enter way less input to get there.

And then when I showed him, he was like, ‘Hmm, okay.’ But here’s the problem: I can predict my old word predictor, so I know where to look when the word comes up. If it doesn’t show there, I’m not scanning to see where it is. So even though you brought it up way faster, I didn’t even see it because I’m not expecting to look at it. 

John Koetsier: Yes. 

Lama Nachman: So in some sense, it’s like, I think that always, like for me when I think about these systems, and you know a lot of my research in general is about human AI systems, right? I mean, how do you not necessarily automate the humans out of the loop, but how do you bring them into the loop of these diverse AI human systems?

And you always have to think about that human angle. You have to think about how do you measure performance of these systems in terms of how do they make it easier for people to perform their tasks, rather than theoretically, right? What metrics of success are we using? 

John Koetsier: It’s really interesting that you’ve brought up GPT-2. I mean, because you can create texts from that with some training data that look pretty human. I mean, I know some people have posted posts or tweets or whatever, and they maybe did 10 versions and they picked one or something like that, but it looks really good.

I mean, it might not win a Pulitzer, but it looks really good. But is it you, right? That’s the core question.

Lama Nachman: That’s a fantastic question. And in fact, I mean, if you think about it, there are multiple pieces of this puzzle that we’re trying to tackle, where we see a lot of the, you know, where the research and innovation needs to happen. One is, first of all, kind of a basic thing: it wasn’t really meant for dialogue, right? It wasn’t trained on dialogue. 

John Koetsier: Right. 

Lama Nachman: So the first hurdle that we ran into is we need to fine-tune it on dialogue data, right, because that’s what we’re trying to accomplish. The second one is that there is no, like, if you think about these systems they are meant to just generate the whole thing for you, right?

So this notion of interacting with the system to nudge it, to give you options, to actually make it change the options based on a thought you would have, is really hard to do. So one of the things that we’ve been working on is, in the fine-tuning, how do you train it with another piece of input: what would be the input of the user? 

John Koetsier: Yes, yes. 

Lama Nachman: Right. So, and what would that input be? Is it a theme? Is it an intent? Is it a keyword? Right? How do you even bring that up? Because if you think about it from a high level interaction perspective, you could think, okay, well, it’s listening to the conversation, now that it’s trained on more dialogue data, it’s listening to the conversation, it’ll give you some recommendations.

But most of the time, those recommendations are not going to be what you want, right? So rather than say, well, I either take a recommendation that it has or start writing letters, there is something in between there that says, can I nudge it? Can I give it a keyword that maybe will give me better recommendations, right?

John Koetsier: Mm-hmm.

Lama Nachman: So that means now you have to retrain this with that notion in mind, not just taking the dialogue up to this point, but a theme, right?

John Koetsier: Yes, yes. 

Lama Nachman: So that’s the second big piece that we’ve been trying to do, which is how do you then incorporate these keywords and themes into the training? How do you use that in your loss model? Things like that. And then the part…

John Koetsier: That’s fascinating … sorry, go ahead. Finish the third part. 

Lama Nachman: The third part is the actual personalization, right?

John Koetsier: Yes. 

Lama Nachman: Specifically. So there with personalization, you know, there are kind of multiple ways that we’re looking at this. One is, how do we fine-tune it with some of his data, right? 

John Koetsier: Mm-hmm, mm-hmm.

Lama Nachman: So that’s in the training path, but also, I mean, one of the hard things is like, when you have a model that’s this big, trying to actually change its output based on limited data is not an easy thing either, right? 

John Koetsier: Right.

Lama Nachman: So you have to really think about, you know, at inference time, do you use his corpora to help weight the decoder, right, so it gives you what you want? And essentially, how do you start to incorporate that data in different ways into that whole system? And that’s kind of the third big piece of that puzzle that we’ve been dealing with.
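One way to picture the decoder-weighting idea is reranking candidate outputs by how well their words match the user’s personal corpus. This is a hedged sketch of the concept only; the corpus, scores, and blend factor are illustrative assumptions, and the real system would operate on model probabilities rather than toy scores:

```python
# Hedged sketch of the decoder-weighting idea: rerank candidate outputs
# using the user's personal corpus. The corpus, scores, and the alpha
# blend are all illustrative assumptions, not the actual system.

from collections import Counter

PERSONAL_CORPUS = "robots and exoskeletons thrill me the future thrills me"
personal_freq = Counter(PERSONAL_CORPUS.split())

def personalized_score(candidate, base_score, alpha=0.1):
    """Blend a base model score with average per-word corpus frequency."""
    words = candidate.lower().split()
    overlap = sum(personal_freq[w] for w in words) / max(len(words), 1)
    return base_score + alpha * overlap

# (candidate sentence, generic model score) pairs:
candidates = [("that is nice", 0.9), ("robots thrill me", 0.8)]
best = max(candidates, key=lambda c: personalized_score(c[0], c[1]))
print(best[0])  # → robots thrill me
```

The point of the sketch is only that limited personal data can still shift which of several fluent candidates the system surfaces first.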

And then the fourth part is, how do you bring even more diversity in the output, right?

So these things are meant to give you, I mean, they do well, if you actually test these things out, they do well on the things that are very generic that most people would say, right? And you see those high, you know, selected outputs that are the most likely ones. So they do well on generic stuff, but that’s not what people are trying to communicate. 

John Koetsier: Right. 

Lama Nachman: Other than the typical first line in an email, typically, you know, it’s all about the specificity of the context of the topic of what you’re trying to do. And that’s where they don’t do really well. So now how do you nudge it so that it’s actually bringing much more diversity and operating not in the obvious places, but also in the options.

Make sure that the options that are highlighted are very diverse, right?
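A minimal way to illustrate that diversity requirement is greedy selection that avoids near-duplicates: keep the top-ranked candidate, then repeatedly add the remaining candidate least similar to anything already chosen. This sketch (my own illustration, not the team’s method) uses word overlap as the similarity measure:

```python
# A minimal diversity-aware selection (my own illustration): instead of
# showing the k most likely responses, which tend to be near-duplicates,
# greedily pick candidates that overlap least with what's already shown.

def word_overlap(a, b):
    """Jaccard similarity between the word sets of two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def pick_diverse(candidates, k=2):
    """Keep the top-ranked candidate, then add the least-similar rest."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        remaining = [c for c in candidates if c not in chosen]
        best = min(remaining,
                   key=lambda c: max(word_overlap(c, s) for s in chosen))
        chosen.append(best)
    return chosen

options = ["sounds good to me",
           "sounds good thanks",
           "no I disagree with that"]
print(pick_diverse(options))
```

Here the two near-identical agreements collapse into one slot, leaving room for a disagreement option the user might actually want.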

John Koetsier:  That’s a huge challenge. I mean, if you look at our digital assistants, our personal assistants like Siri or Alexa or Google Assistant, they’re only just starting to develop kind of a short term memory so that their future statements have some reflection based on the context of the discussion that you’re having.

We’re only just starting that, and those have amounts of training data that are insanely huge, right?

And you’re trying to do that on a person’s training data and make it not just something that, okay, you know, GPT-2, hey, it sounds kind of human, it sounds pretty good. But is it me? Is it saying it the way I would say it, and is it my personality that’s coming through? Really, really challenging thing. It brings up big questions, right, because you know, one of the things that I saw that Dr. Scott-Morgan said on his website is “changing what it means to be human,” right?

And as you said, he was a roboticist; he looked at, what can I do? What’s Peter 2.0, right — for obviously some of the communication things, but also the physical capability things. What are some of your thoughts on that? And what does that mean, changing what it means to be human, as you’re helping someone communicate?

Language is so core to who we are, right? 

Lama Nachman: Yeah, absolutely. That’s a fantastic question actually, John. So, I was actually lucky that I managed to meet Peter before his surgery, so I could actually have deeper conversation with him, not at the rate of communicating through a gaze tracker.

And we had a lot of conversations about this specific topic as we were kind of thinking about, well, how would we go about this? What does AI, you know, what role would, how far do we want to push the AI and what role does it play? And one of the interesting things, I mean, I came into the conversation with some bias clearly, because of eight years of working with Stephen, on one end of the spectrum.

And in my conversations with him, you know, what dawned on me is that he really thinks of communication as — I mean, I don’t know if you’ve ever seen him, you know, kind of communicate and… 

John Koetsier: Yes.

Lama Nachman: …you know, before his surgery. He is someone who’s very, very witty, right, who really, a lot of time is like trying to really be very quick and get something out, right?

And you know, a lot of times there’s sarcasm and so of course you can’t do sarcasm with a delay. Like that just doesn’t work in any way, shape or form. So the things that he’s keenly aware of as like the challenges in his mind were those, right.

So he was actually proposing like, ‘Why would we want to think about me controlling an AI system? We should really [be] thinking about me and the AI system evolving together as another thing.’ 

John Koetsier: Wow. 

Lama Nachman: ‘And I am willing, I mean, if something comes up that, you know, isn’t really what I would say, but it’s actually interesting, great. That becomes part of me and the AI system.’ Of course, like he’s definitely on the other end of the spectrum of many of the people with ALS that I’ve ever talked to.

But, there are other things then as a result where the notion of his personality comes in.

So one of the things that I’ve been struggling with is for someone who wants to really be quick and funny and witty and use sarcasm, how do I actually make that clear to the AI system that now I’m in this mode, right? I want the sarcasm.

So one of the things that we’ve been actually toying with is this idea of, and that’s where really it’s the intersection of HCI and AI really that’s interesting here, right. It’s like, how do you enable someone to essentially make almost his intent obvious to the system that now you’re actually dialing up or dialing down these options, right? And how do you even pre-think about some of these things, because there you would really want to just nudge it a lot based on his specific input and mostly utilize a lot of that, right? But also, part of that personality retention piece is how do you bring in emotion, for example, into the voice.

So CereProc, who actually generated his voice — so he banked a lot of data for his voice, and they actually made his voice and it sounds pretty realistic. I mean, it sounds really like him.

But you know, he also, at that time, banked a lot of different ways of saying even the same thing. Like “really,” I think he had maybe 10 recordings for the word really, right, it’s like ‘really, really, really?’

So then the question is, how do you incorporate that in the moment? And even if it’s actually you, how do you enable the system to quickly be set up? Because you’re not going to go, okay, I want this “really” now, and that’s going to take — so that’s some of the ideas that we’ve been thinking about: how do you quickly nudge the system so it’s actually trying to bring something up because it’s meaning to do something specific in communication.

So I think he really cares more about these concepts than necessarily, okay, this is exactly what I would have said. He’s much more open to, okay, you know, the content will grow over time. So that’s one piece.

The second piece that I’ve been spending a lot of time thinking about is, if you think about it, because of expediency, when the AI system brings a few options the difference between having to write something and select something is huge. So he will be tempted to select things, even if they’re not ideal.

John Koetsier: Yes. 

Lama Nachman: The problem with this is over time, you get more and more boxed in, because the system is learning, reinforcing its behavior based on that non-ideal input, and it would just get worse over time.

So one of the ideas that I talked to him about, and what we are going to incorporate in his system is, while you do that selection because you care about expediency, indicate to the system quickly, like with just the gaze, right, that this wasn’t an ideal option. I’m just choosing it for the purpose of expediency. And then the notion of you over time learning with the AI and the AI learning with you, is that now when you have downtime, if you’re willing to spend some of the time, the system can bring up these, you know, it would record all of these episodes, right?

You can come back and say, ‘Well, that’s what I really would have wanted to say,’ right? And then it would learn from these specific interactions. 

John Koetsier: It’s, the complexities here are incredible because we’re dealing with the complexity of human personality and sentience itself, and what that actually means.

Lama Nachman: Exactly.

John Koetsier: It’s funny as you were talking about some of those things, you know, the sarcastic “really,” or the genuine “really” or something like that. I was thinking of the movie Interstellar where he’s adjusting the humor level on the robot, right, take that whole humor level down a little bit.

And that’s almost what you’re kind of doing if you’re creating Peter 2.0, you know, here’s my sarcasm level, here’s my humor level, that sort of thing. But those things change as well through a day, and now I’m in business mode, now I’m in play mode, now … so it’s super complex.

That’s where it really becomes interesting to think about what you talked about and what Dr. Scott-Morgan talked about, changing what it means to be human and sort of becoming with the AI. You almost wonder about something like one of Elon Musk’s companies, the [Neuralink] company, where you get a direct neural interface.

I assume that’s something that you’re looking at and thinking about as well.

Lama Nachman: So, I talked about the open source system that we have, right, ACAT, which we put in open source. Really we’ve been thinking more about utilizing BCI for communication in noninvasive ways. A lot of our focus there has been — so I talked about how you can map any kind of muscle movement to a trigger, right?

But there are a lot of people who can’t use any muscle. So my focus there was, okay, in that case the only thing you can tap into is EEG, a brain computer interface, right? That’s what we were trying to build.

So what we’re building right now, and we’re getting really close to releasing it into open source, essentially utilizes a very simple set of electrodes that you can just have in a cap, a very cheap system. This is not your twenty-thousand-dollar, high-fidelity, gazillion-electrode setup, but really something on the order of a few hundred dollars, like what OpenBCI has.

And then you could actually use that as a way to communicate through that same system, built on top of ACAT. So really my focus has been less on the intrusive “let me decode human intent at a much finer level,” and more on: how do I reach all of these humans on earth who have no way of communicating whatsoever, right? And you know…
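The “map any muscle movement or EEG signal to a trigger” idea boils down to turning a noisy sample stream into a binary event. Below is a minimal sketch, assuming a simple amplitude-threshold detector over a short sliding window; this is an illustration only, not the ACAT pipeline, and real EEG/EMG systems such as those built on OpenBCI hardware use far more robust filtering and classification.

```python
# Minimal trigger detector: fires when the mean absolute amplitude of the
# most recent window of samples exceeds a calibrated threshold.
# Illustrative only; real biosignal pipelines filter, re-reference,
# and classify far more carefully than this.

from collections import deque

class TriggerDetector:
    def __init__(self, threshold, window=8):
        self.threshold = threshold
        self.window = deque(maxlen=window)

    def feed(self, sample):
        """Feed one sample; return True when a trigger fires."""
        self.window.append(abs(sample))
        if len(self.window) < self.window.maxlen:
            return False  # not enough data to judge yet
        return sum(self.window) / len(self.window) > self.threshold

det = TriggerDetector(threshold=0.5, window=4)
quiet = [0.10, 0.05, 0.10, 0.08]   # baseline noise: should not fire
burst = [0.90, 1.10, 0.80, 1.00]   # a deliberate twitch / evoked response
fired_quiet = any(det.feed(s) for s in quiet)
fired_burst = any(det.feed(s) for s in burst)
```

Once any signal source can be reduced to a binary trigger like this, the same higher-level interface (letter selection, word prediction, environment control) works unchanged on top of it, which is the point Nachman makes about mapping “any kind of muscle movement” to ACAT.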

John Koetsier: That makes a ton of sense, obviously, because the population of people who are willing to allow somebody to drill through their skull and interface with wires into their brain is very small.

And obviously the other point that you’re making is cost: that’s a huge cost, because there are medical procedures involved and, you know, it’s very experimental. So that makes a ton of sense. 

Lama Nachman: Yeah. Yeah, that’s what I’ve been really thinking a lot about, because, you know, the whole concept of ACAT was really trying to address gaps that we have seen, people who are just left out. And if you think about it, today, with all of the innovation that happens, if you actually just enable someone to interface with their PC, they can control pretty much everything, right? Because of everything that we’ve built bridging the physical and the digital dimensions.

John Koetsier: My windows, my doors, my alarm, my music, my TV.

Lama Nachman: Exactly. All you need is that one control. So it’s like, what I’ve learned from working with all of these people with disabilities is that ultimately it’s really about independence.

How do you bring independence back into their life? And every single time they need to rely on a person to do a function, that takes a lot out of that independence.

So that for me, it’s like, okay, well, how about those people who don’t have any other option, that’s really what we’re trying to enable with a lot of the work that we’re doing. 

John Koetsier: Lama, why is this a passion for you? Why are you working in this area? What prompted you to start here and why are you continuing in this?

Lama Nachman: So I think, I mean, for me there’s always this theme of how do I address inequity? This is something that I feel very, very strongly about, and have since I was a child.

But this notion of inequity, and where technology can play to actually help with that inequity rather than amplify it in society, has been kind of a constant theme of all of my work. So in my research, you know, assistive computing for people with disabilities is one piece of that puzzle, but we work a lot on things like education or manufacturing.

So how do you essentially bring AI to amplify human potential in all of these different areas, right? 

John Koetsier: Mm-hmm.

Lama Nachman: So if I think about education, you know, there is definitely a lot of inequity in terms of resources that are available to schools, different ratios and resources and all of that. So can we actually start to bridge some of these gaps with these type of technologies, right? Can technology help understand and improve student engagement? Because we know that that will correlate with learning outcomes, right?

John Koetsier: Yes.

Lama Nachman: You know, the same thing in manufacturing: can we actually have these systems watch over people and help them out, remind them, do these things? But that means they need to perceive the world. They need to be able to make intelligent inferences and they need to be able to communicate with people.

John Koetsier: Mm-hmm.

Lama Nachman: So I would say the theme that cuts across all of the different research that I’m doing is really how do you amplify human potential and reduce inequity in the society? That’s what I’m really focused on, in some sense.

And technology is, you know, I like to think of technology as the equalizer, right, rather than the have and the have-nots. So that’s really kind of what drives a lot of this work for me. 

John Koetsier: Wonderful, wonderful. Well, I want to thank you for this. It has been a wonderful conversation. I want to thank you for the work that you’re doing, it’s super interesting. How is Peter now, and what are the next phases in his life?

Lama Nachman: So, you know, he’s doing well. I mean, I talk to him every week, we have a standing weekly meeting, and he’s been really excited about driving all of these different themes. There has been a lot of progress on the avatar and on his voice system.

There’s also another thread of work about, you know, controlling essentially his chair and his mobility. That’s another thing that he wanted to have full control over. And again, it cuts across this whole notion of independence, right? How do you become more and more independent in these functions?

So he’s been really excited about — I mean, he’s essentially also a technologist at heart, right? So he loves to dig into these details, and he’s an active member of any of these conversations around the technology. The conversations that I have had with him are really detailed: okay, these are the different approaches that we’re thinking about, what are your thoughts? And he always has a lot of insightful things to bring in.

So I think his engagement in that process of discovery and implementation is really — I mean, he needs to do all of his work, clearly, but he’s very engaged in that process as well. And you know, he’s constantly trying to push the edge of technology, right? Understanding that he needs the practical now, but how do we continue to push that edge? That’s been, frankly, remarkable for me to see. [Silence]

Lama Nachman: I think you’re on mute, John. I just lost you … ah, there we go. 

John Koetsier: You know what? That’s the first time I’ve done that on my own show, wow! I need some assistive technology hahaha. Lama, I wanted to thank you for this conversation. It has been wonderful. It has been enjoyable. It has been super interesting. It’s been inspirational. Thank you! 

Lama Nachman: Thank you. Very nice speaking to you. 

John Koetsier: Excellent. For everybody else, thank you for joining us on TechFirst. I’m not on mute right now, it’s a wonderful thing. My name is John Koetsier. I appreciate you being along for the ride.

You’ll be able to get a full transcript of this in about a week at … You’ll see the full video and the story on Forbes later on after that, and on my YouTube channel. Thank you for joining. Hey, maybe share with a friend.

Until next time … this is John Koetsier with TechFirst.