Open source AI: essential for a free future?


Who owns the AI that searches for the knowledge you need?
Who owns the AI that manages your home’s energy use?
Who owns the AI that lets you speak and get the music you want?

Artificial intelligence is clearly more and more central to our lives. We interact with it daily, in fact, moment to moment … from the predictive text in our messaging … to our web searches … to most of our digital activity and commerce … and in our homes. In this episode of TechFirst, we’re chatting with someone who is building open source AI: AI that you can get the code for, use, add to and enhance, or just deploy for yourself.

Most of us won’t do that, of course, but it speaks to a future in which we might be able to get smart assistance from AIs that only work for our interests … and not some corporations’.

Our guest in this TechFirst is Michael Lewis, CEO of Mycroft AI.

(Subscribe to my YouTube channel)

Subscribe to the TechFirst podcast: talking about open source AI with Mycroft CEO Michael Lewis

 

Transcript: is open source AI essential for the future of freedom?

(This transcript has been lightly edited for length and clarity.)

John Koetsier: Who owns the AI that searches for the knowledge that you need, or manages your home’s energy use, or lets you speak and get the music you want on demand? AI, or artificial intelligence, is clearly more and more central to our lives. We interact with it daily, in fact, moment to moment, from the predictive text in our messaging, to our web searches, to most of our digital activity and commerce… and our homes, as well. In this episode of TechFirst, we’re chatting with someone who is building open source AI. AI that you could, if you want, get the code for, you could use, you could add to, enhance, or you could just deploy for yourself. Most won’t do that, of course, but it speaks to a future in which we might be able to get smart assistants from AIs that only work for us, for our interests, and not for some corporations. Our guest is Michael Lewis, CEO of Mycroft AI. Welcome, Michael. 

Michael Lewis: Thank you, John. I’m glad to be here. 

John Koetsier: Wonderful. I’m glad you’re here as well. Let’s start here. Explain the name Mycroft for those who haven’t read any Sherlock Holmes. 

Michael Lewis: Sure. So Mycroft actually comes from a Robert Heinlein book called The Moon is a Harsh Mistress. And in that book, there’s a sentient AI that comes online and it goes by a lot of different names in the book, but Mycroft is the main name that it goes by…

John Koetsier: Okay!

Michael Lewis: They also call it “Mike.” 

John Koetsier: That’s really interesting… [crosstalk]

Michael Lewis: Mycroft is [inaudible]… to the Sherlock Holmes. 

John Koetsier: Absolutely. That’s where I took it from, right? I thought it was the Mycroft who was Sherlock Holmes’ brother, who was supposed to be smarter than Sherlock but was much more retiring in his public life, let’s put it that way. I had forgotten about Heinlein’s Mycroft. So, excellent. Who owns most of the AI that we encounter in our day-to-day lives?

Michael Lewis: Well, it’s pretty clear that you don’t and we don’t. 

John Koetsier: [Laughing] No, I don’t. 

Michael Lewis: So, yeah, I mean, most of the AI that we use in our everyday lives right now is actually owned by the Big Three or Big Four tech companies, you know, you’ve got Google and Amazon… Apple has a very tight-knit, closed ecosystem. And yeah, and it’s not us. 

John Koetsier: Definitely not us. Maybe Facebook in there as well, a couple others. 

Michael Lewis: Oh yeah. 

John Koetsier: And pretty much any private company of size or public company of size. Why does that matter? 

Michael Lewis: Well, it matters for a bunch of reasons. One, you’re not in control of your data. So you don’t know whether or not what they’re collecting about you is correct or not.

John Koetsier: Mm-hmm 

Michael Lewis: Right? They get a selective view of your interactions and then they’re using that for whatever purposes they have, whether it’s for actually providing services to you, or for marketing to you, or for reselling to other people for marketing purposes. You don’t necessarily know that that’s an accurate view of who you are, right? But, you know, more insidious reasons that this is important is that data in aggregate is just that… it’s just, it’s an average. They get an average view of how things are, and what we get fed back is the result of these averages and these trends. And so, the way I like to think about it is… I was actually thinking about explaining it to my kids. And if you were to think about, like, rolling a six-sided die, you’re going to get a 1, 2, 3, 4, 5 or 6, right? What’s the average result? Three and a half. How many times are you ever going to roll a three and a half? Never. Right? 

And so, we’re all individuals here and we have our own particular preferences and desires and things that we care about. But we get treated as these big averages, right? And so I think that that’s a real problem. It doesn’t allow for as much individualization, you know, as much representation, whether you’re in a minority group or a less well-represented group, whether you’re maybe even in a group that’s being discriminated against in various parts of the world, right? And so…
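Michael’s die example can be verified in a few lines of Python. This is purely an illustrative aside, not part of the interview:

```python
from fractions import Fraction

# A fair six-sided die: faces 1..6, each with probability 1/6
faces = range(1, 7)

# Expected value = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 21/6 = 7/2
expected = sum(Fraction(face, 6) for face in faces)
print(expected)           # 7/2, i.e. 3.5

# No actual roll ever equals the average
print(expected in faces)  # False
```

That’s the point of the analogy: a system optimized for the average user can produce results that match no individual user exactly.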

John Koetsier: And there’s other reasons as well, right? I mean, like you mentioned, the data they get may not be accurate. It may be more scary if the data is accurate. [Laughing] I mean… 

Michael Lewis: Well, that’s… [inaudible/crosstalk]

John Koetsier: … accurate data. I haven’t turned on, let’s say, Alexa, on the sound bar, the Sonos sound bar in my bedroom, just because, well, I don’t really need Amazon in my bedroom, right? 

Michael Lewis: Exactly. Yeah, for sure. And, you know, the movie Gattaca, for example, is another example of this, right? You could be discriminated against just based on your genetics or… yeah. So, there’s definitely no need to have Alexa in your bedroom. But… or at least Amazon’s Alexa. But that’s not to say that there isn’t a place for voice control in your home and ubiquitously in your home, right? Voice control is going to be everywhere in your home. And because of the nature of it, it is always listening. And so… 

John Koetsier: I think that’s a very good point. I think that’s a great point. I mean, just because I don’t want Amazon in my bedroom doesn’t mean I can’t benefit from smart assistants, AI assistants in my home, wherever I am. Doesn’t mean I can’t benefit from AI assistants in my financial life, which I may choose to keep more secret as well. Other things like that. I could definitely benefit from smart assistants and I could see that in the next decades as well. So there’s been a lot of attempts at open sourcing AI. OpenAI, the organization behind GPT-3, started out as open and seems pretty closed right now. Why is open source AI hard? 

Michael Lewis: Well, open source anything is hard, for a couple of reasons. One, it’s still a relatively new phenomenon, despite the fact that the Linux kernel has been around for decades now, and it’s widely adopted; it’s in all of the OSs, you know, it’s the underpinning of all of the Apple OSs, the Microsoft OSs, obviously Android. It’s everywhere, right? Basically, everything is built on Linux, which makes it a very successful open source project. You know, it’s still hard for people to wrap their heads around, well, okay, if it’s free and it’s open, how do I make money on it, right? And so there’s a real paradigm shift that we have to think about, well, what is it that we’re doing that is valuable? Is it writing the code that’s valuable or is it something else? And I think that’s one of the key issues with any kind of open source project, whether it’s AI or anything else. 

John Koetsier: Yeah. I mean, the interesting part, and I started going there a couple minutes ago, is if I had an AI that I knew was safe, that I knew that perhaps I controlled or that I had access to everything that that AI did, everything it knew, even its code, even if I can’t personally read its code and understand its code and add to its code… that would make it more likely that I would trust that AI, at minimum, to keep my secrets, to maintain my privacy, to do whatever, those sorts of things. Talk about what Mycroft is doing. 

Michael Lewis: Sure. Well, at the very basic level, if you looked at the front page of our website, for example, what we have is a privacy-respecting smart speaker. So, like a Google Home or an Amazon Alexa, it sits in your kitchen, or in your bedroom, or wherever you want to have it… and you can do all the basic things that you can do with those other devices. You can play music, you can set timers, you can ask it about the weather or random internet factoids and that kind of thing. And so, it’s basically a… I think of it as sort of a web browser kind of interface that you interact with using your voice. And that’s sort of the first level of what we’re doing. It’s an open source, completely privacy-respecting solution to that. But beyond that, what we’re really building is a platform because we want to enable homes of the future to be smart. You know, like I grew up watching Star Trek. I want to be able to say, you know, “Computer, dim the lights in the kitchen,” or whatever, right? And know that that’s my computer, that’s not being sent up to a cloud somewhere and then bounced off a satellite and then back down and whatever… there’s no reason for that to leave my house, right? We have the capability to do all of this work on the edge right now. So, that’s sort of the next step. And then beyond that, well, we’ve got some more pie-in-the-sky kinds of ideas. 

John Koetsier: Well, that’s interesting to talk about, of course, I mean, because that becomes very interesting. As we said off the top, most of our experience with AI, in fact, you could argue, perhaps all of it, is AI that somebody else owns, that somebody else has created, that somebody else is monetizing in some fashion, in some way, whose interests are… they may align with ours in some cases, but they may not align with ours in other cases. What I do wonder, is if… at some point in the future, we will… and I struggle with the language here because when you talk AI, right now we’re not talking sentient systems, obviously, we’re not talking self-aware systems. So you can say, “Will I own my own AI?” That may change over time, you know, the legal status, the moral status of AI may change over time, the long term, obviously. But will I ever own my own AI that I can use that would give me financial advice, stock advice, other things like that that I could say, “Hey, you know what? You know everything I know, you see everything I see, and you’re like a smart advisor or a memory on demand, other things…” Do you see that future? 

Michael Lewis: That is one way that things can go. But, as you alluded to, I think that there are some problematic aspects to that. If you own something that’s sentient, well, we know that that’s problematic.

John Koetsier: [Laughing] Yes, that’s very problematic. There have been wars fought over that. 

Michael Lewis: Yeah. 

John Koetsier: We’re not finished with the consequences yet. 

Michael Lewis: Yeah, exactly. Yeah. So, I actually personally have a problem with this, and so, when I came on board on Mycroft a couple years ago and really started thinking about this, what I gravitated towards was the idea of AI not being artificial intelligence or a sentience in a way, but more of an augmenting intelligence, right? So in the same way that smartphones have become this external memory for ourselves… I don’t know how many phone numbers you can remember right now, but I can remember like a couple, right? 

John Koetsier: Yeah. 

Michael Lewis: But we have hundreds of them stored in our smartphones, right? And we can call them up on a moment’s notice. And so, it’s become sort of an external memory of ours and computers are augmenting our abilities in other ways. And the way I see AI… the AI that I want to create, I guess I should say, is one that augments my own abilities and makes me more capable. 

John Koetsier: Mm-hmm.

Michael Lewis: I get hundreds of emails every single day. I don’t want to necessarily have somebody else go through all of those emails and figure out which ones I want to read. I just want to get faster at going through them myself and being able to get rid of them. Navigating the complexities of signing up for a new internet service or whatever, right? 

John Koetsier: Yes. 

Michael Lewis: It shouldn’t be that hard, but, you know, they make all these sign-up procedures labyrinthine, and being able to navigate that stuff more easily is something that a computer could really help us with, right? And so I like to think of it as augmenting ourselves to make ourselves more capable of navigating the increasingly complex digital world that we live in.

John Koetsier: I like that a lot. And I like that because there are things that are just way too challenging, like, coordinate the schedules of all my friends and get us a reservation at a Greek restaurant in the city, right? You know… 

Michael Lewis: Right. 

John Koetsier: Like, you ought to be able to give that task to an agent, have it do it and manage it, and it can interact with the other people’s agents so that they don’t have to say, “Well, I’m free Tuesday, but only til four o’clock” and blah, blah, blah, blah, right? 

Michael Lewis: Right. Yeah, absolutely. 

John Koetsier: That makes a ton of sense. And that doesn’t require sentience. That doesn’t require artificial general intelligence, let’s put it that way. Those are just smart systems that you could own, like you own software, if we still own software… do you still own software? I don’t know, we usually rent it right now or we just pay for it with our attention by watching ads. So you could theoretically own those. I think the problem of owning a sentience, that’s probably much farther out on the horizon. I don’t know, some say never. Some say 50 years. Some say we already have it. 

Michael Lewis: Yeah.

John Koetsier: But it is definitely a problem of the future.

Michael Lewis: Indeed. Yeah, and like I said, I think that’s one that, personally, I don’t feel like I need to get involved with. You could have something that passes the Turing Test, right, and you can’t tell whether it’s a human or an AI. But that still doesn’t tell us whether it’s sentient. I mean, it’s a definitional test, right? The idea behind that test is that if you can’t tell the difference, then, well, it is sentient, right? I’m not sure that I necessarily buy into that definition. But, if you do buy into that definition, then the problem of creating something that can interact with you through natural language is almost the same problem as creating an artificial intelligence. And so…

John Koetsier: I agree.

Michael Lewis: …you know, so we need to skirt around those issues. 

John Koetsier: I agree. I think that test is far from sufficient, but I also do agree with you that if we do eventually create something that has some level of consciousness, some internal dialogue, some, you know, human recognizable intelligence level, some beingness — as hard as it might be for us to imagine a machine owning that or possessing that or being that — we will have to rethink how we treat our machines, [laughing] how we deal with them, what place they have in our society and in our lives. In any case, you currently have the voice assistant, you can buy that, put that in your house. What does that interact with? What does that engage with? I can’t imagine the ecosystem is as large as Alexa has built, but what does that work with right now? 

Michael Lewis: Well, what we have right out of the box is the 10 most commonly used functions on your typical smart speaker. So, like I said, we’ve got internet radio, thousands of stations. We’ve got… you can play it as a jukebox, you can play your own MP3 collection if you want. You’ve got all the timers and alarms and things of that nature, you know, all the basic stuff. You can query through DuckDuckGo, Wikipedia… do simple calculations through Wolfram|Alpha, you know, how many teaspoons are in a gallon or whatever, and all this through a natural language interface.
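A query like “how many teaspoons are in a gallon” bottoms out in simple unit arithmetic once the natural language layer has done its job. A minimal sketch using US customary volume units; the conversion table and function here are my own illustration, not Mycroft’s or Wolfram|Alpha’s code:

```python
# US customary volume units, each expressed in teaspoons
TSP_PER = {
    "teaspoon": 1,
    "tablespoon": 3,
    "cup": 48,
    "pint": 96,
    "quart": 192,
    "gallon": 768,
}

def convert(amount, src, dst):
    """Convert a volume between any two of the units above."""
    return amount * TSP_PER[src] / TSP_PER[dst]

print(convert(1, "gallon", "teaspoon"))  # 768.0
print(convert(1, "cup", "tablespoon"))   # 16.0
```

The hard part, of course, is not the arithmetic but reliably turning a spoken sentence into a structured query like `convert(1, "gallon", "teaspoon")`.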

John Koetsier: How many milliliters are there in a mile? Yes. I’d like… [laughter]

Michael Lewis: Exactly. Yes. And so, that’s the basics. In addition, we also interface with Home Assistant and they have a system that’s been widely deployed, that interfaces with thousands of IoT devices throughout your home. So, we’ll be able to provide a voice interface to all of those smart devices throughout your home. 

John Koetsier: Excellent. Excellent. Super interesting stuff. I’m sure it’s early days. I’m sure there’s much more to create and to build. One of the challenges of making AI smarter is the availability, accessibility of data. How are you solving that as a smaller company building something that’s open source?

Michael Lewis: Yeah. Well, we’ve… from the beginning, community has been one of the pillars of our company. We have a fairly robust community of contributors, both at the source code level as well as at the data level. And so we recognize that data is important for creating these AI-driven systems. But we also know that it’s not that much data. You don’t need everyone’s voice to be sampled in order to get an accurate representation of how to do speech-to-text translation or text-to-speech and that sort of thing. So, purely based on the voluntary contributions of our community, and various communities around the web, because there are a lot of open source communities that are contributing data to various data sets, we’ve been able to build the tools that we need. And so that’s how we envision going forward. And, like I said, the tools that we will build in the future will be very personal. So, if I want to speak in a colloquial way with my voice assistant, I will teach it how I speak, right? So, if I say something that it doesn’t understand, it can say, “Well, I’m not sure what you meant,” and I say, “Oh, I meant, set an alarm.” Then it could say, “Oh, well, now I know that’s how you work.” But that doesn’t necessarily get broadcast to the thousands of other users who have a similar profile to me or whatever, because that’s a very personal thing, right?

John Koetsier: Mm-hmm. 

Michael Lewis: So, we are planning on building a system where your AI learns with you and it becomes a very personalized experience.
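The learn-with-you loop Michael describes — misunderstand, ask, remember, keep it on-device — could be sketched roughly as below. The class and method names are hypothetical illustrations, not Mycroft’s actual API:

```python
class PersonalAssistant:
    """Toy sketch of an assistant that learns a user's phrasing locally."""

    def __init__(self):
        # Per-user mapping from learned phrases to known intents.
        # Stays on the device; nothing is broadcast to other users.
        self.learned = {}

    def handle(self, utterance):
        """Try to resolve an utterance to a known intent."""
        intent = self.learned.get(utterance.lower())
        if intent is None:
            return "I'm not sure what you meant."
        return f"Running intent: {intent}"

    def teach(self, utterance, intent):
        """User clarifies: 'Oh, I meant: set an alarm.' Remember it."""
        self.learned[utterance.lower()] = intent


assistant = PersonalAssistant()
print(assistant.handle("wake me at dawn"))  # I'm not sure what you meant.
assistant.teach("wake me at dawn", "set an alarm")
print(assistant.handle("Wake me at dawn"))  # Running intent: set an alarm
```

Because the learned mapping lives only in the local object, one user’s colloquialisms never leak into another user’s model, which is the personalization property being described.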

John Koetsier: Excellent. Thank you for your time. 

Michael Lewis: Oh, of course.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.

Consider supporting TechFirst by becoming a $SMRT stakeholder, connecting to my YouTube channel, and subscribing on your podcast platform of choice: