The Future of AI: Superintelligence and humans


Superintelligence: What happens in a world with AI that is hundreds or thousands of times smarter than humans?

That’s the primary question for the latest edition of future39. In this episode, we chat with research scientist Roman Yampolskiy. He’s a professor at the University of Louisville, and his most recent book is Artificial Superintelligence: A Futuristic Approach.

What we talk about

  • Define superintelligence … what are we talking about here?
  • Assume we can create a superintelligence … or one emerges … what are the potential dangers?
  • Is there any reason to assume that an artificial superintelligence would have any particular kind of morality?
  • What about the opportunities? What would a superintelligence enable?
  • Elon Musk thinks we’ll have to augment to compete. Agree?
  • What happens to humans in a world with one — or many — superintelligences?
  • How likely do you think a superintelligence is in the next 10 to 20 years?

You can:

  • Listen to the podcast
  • Watch the video
  • Or … read the full transcript below

Listen to the podcast

Subscribe wherever you find podcasts:
If you listen to podcasts, here’s where you can subscribe to future39 and hear more interviews like this about the future.

Watch the interview: superintelligence

Or, of course, you can read the full transcript …

John Koetsier: Superintelligence. What happens in a world with AI that’s hundreds or thousands of times smarter than we are? Today we’re speaking with research scientist Roman Yampolskiy. He’s a professor at the University of Louisville, and his most recent book is Artificial Superintelligence: A Futuristic Approach.

Roman, welcome! 

Roman Yampolskiy: Hi John, thanks for having me. 

John Koetsier: Thank you so much for coming on the show. You have an amazing background there, I love it. What have we got there? 

Roman Yampolskiy: Just props. A lot of what I do is recruiting students, so nice to have some props for high school students, they like it. 

John Koetsier: Excellent, excellent, wonderful. Cool. Well, can you define superintelligence for us? What are we talking about here? 

Roman Yampolskiy: So we’re kind of used to artificial intelligence being really good in specific domains. So like a calculator is really good at algebra and much better at adding things than any human. Superintelligence is the same idea, but taken to multiple domains.

So a system which would be smarter than any human in science, engineering, law, economics, across all domains. A system capable of transferring knowledge within those domains and essentially making all of us unemployed. 

John Koetsier: So if I understand you correctly, we’re talking about an intelligence which is similar to us in that we can be good at a variety of different things, not just narrow in one field like a legal AI or a medical AI, that sort of thing. You’re talking about an intelligence that can generalize. 

Roman Yampolskiy: Right. We’re usually somewhat good in one or two domains and very mediocre in others. Whereas this system would be really the best compared to any human in every domain. 

John Koetsier: Yes, yes. Interesting. Okay, so we can say an AI that’s a hundred or a thousand times smarter than us. It’s hard to imagine what that’s actually like. Can you give us some help there? 

Roman Yampolskiy: So in the domains where AI is already dominating, we can kind of get a feel for it. So if you’re playing chess, for example, against modern AI you don’t have a chance, essentially. And that’s, I don’t know if it’s a thousand times better than you, 10x, 10% better, it still feels like you’re playing against God. It just knows every right move and nothing you can do will get you to win. 

John Koetsier: Yes. Interesting. Okay, so let’s assume that we can create a superintelligence or that one is going to emerge from some self-driving car or some cloud system or whatever. What are some of the potential dangers here?

Roman Yampolskiy: So we’re not in control.

We don’t know how to control systems which are either at human level or better. And so we don’t know what it’s going to do. We cannot predict what it will decide to do, and even if we can give it certain goals, we don’t know how it’s going to achieve those goals. So kind of the standard example is cure this virus… cure this virus coming from China. And one way to do it is to kill every person who has it and then we don’t have a problem. That’s not a solution we had in mind, but to a system which doesn’t have the same values, same common sense, it makes just as much sense to do it that way. 

John Koetsier: Yeah, interesting. So we’ve seen in the past where people have talked about superintelligences, AIs, and saying that they would have some level of morality. Is there any reason to assume that a superintelligence would have any kind of morality at all or that their morality would be similar to something that we would recognize?

Roman Yampolskiy: So, first of all, we don’t agree on morals. We as humanity don’t have a common set of morals. Philosophers have been trying for millennia. We don’t agree on anything. I can always find someone who disagrees with you on this specific issue.

But even if we agreed, even if we had some like ‘let’s do this, this is the set of ethical standards we all agree on,’ hard coding it in, forcing it in will not work. You have a superintelligent lawyer who’ll always find a loophole in everything you say… oh, it says don’t kill people. Well, what is a person? How do we define it? Does it work for abortion? Does it work for cryopreserved people? There is always a way to bypass those hard-coded rules. 

On top of it we don’t know how to program something like that in. We don’t have terms in programming languages which map perfectly onto those things. So, A, we don’t agree. B, we don’t know how to code it up. And if we could do those impossible things, it would still fail miserably. 

John Koetsier: Yeah. And I mean, how do you hard code something for a superintelligence? I mean, we can’t hard code a human. We can’t hard code a human to say ‘thou shalt not kill’ or something like that, right?

And so if you have a superintelligence, I think one of the things that might define it is that it’s going to identify its purpose for itself, perhaps, how it sees the world and its morality. Would you agree?

Roman Yampolskiy: Well, this is what we’re working on, right? We want to kind of stay in control to a certain degree. At least give a general direction, like ‘be nice to humans, help us out with our problems, not just worry about your stuff,’ and we don’t have a solution to that. We just, we hope to do it that way. 

If you create a completely random superintelligence, how is this helping me? Right, it can do something I don’t want. It could be actually quite awful in terms of, ‘oh, let’s experiment on humans and see how much suffering we can do.’ We just don’t have any predictions about what it will be like if a system is at that level; it will definitely be different from us. If you just look at people with different IQs, they have different preferences: you like beer and football, you like wine and philosophy, and that’s like a 10 IQ point difference. At 1,000 points, maybe you like something really weird. 

John Koetsier: Maybe you do, and maybe some superintelligence really likes football and would drink beer if it had a body, but we don’t know. 

Roman Yampolskiy: … overflows back to stupidity yeah.

John Koetsier: Yes exactly. Interesting, interesting. So when you watch something like maybe Terminator or something like that, and you see Skynet or whatever, what comes to mind?

Roman Yampolskiy: So these are existing problems. This is just the military using this technology and militarizing existing AI systems. You don’t have to have superintelligence to have smart drones or automated soldiers. All of that is possible with what we have today. 

John Koetsier: Yeah.

Roman Yampolskiy: So it’s kind of like watching a documentary in a way, seeing what they can do, and it’s like, well, we’re still investing billions into that. 

John Koetsier: Yeah, exactly. We do see that the military is building AI. We do have drones that can kill people. They currently require a finger on the trigger, maybe thousands of kilometers away, but it is there. 

Roman Yampolskiy: Not anymore. China sells fully autonomous drones, kill drones.

John Koetsier:  True. 

Roman Yampolskiy: So you can get one right now. 

John Koetsier: Not scary at all. Okay. Let’s look at the positive side. What about the opportunities? What would a superintelligence enable? 

Roman Yampolskiy: So if it’s controlled, if it does what some of us want, it’s definitely nice to have this godlike assistance when you’re doing science, engineering, you can cure all diseases, you can do infinite life extension so immortality is an option. Obviously free labor, cognitive, physical, so economic explosion, no poverty, no problems with shortages. Could be nice. 

John Koetsier: Could be nice. I mean, absolutely if the genie does what you want it to do, then the genie is really nice. It’s hard to imagine, however, a superintelligence that will just be a docile, ‘okay, obey orders’ type of superintelligence. 

Roman Yampolskiy: But we also don’t know what we want. The problem with genies is always you think you want it, and then you go, ‘Oh, that’s what it’s like… undo, undo!’ So we’re not sure. Ideally we’d want better control over what we want, because what we want right now will definitely backfire. 

John Koetsier: We have human problems, never mind superintelligence problems. Let’s talk about some of the things that Elon Musk has talked about and worked on with neural lace and stuff like that.

He’s basically saying that we’re going to have to augment in order to compete, we’re going to have to cyborg at a mental level, chips interfacing with our brain, extension of our brain in terms of adding cores… thoughts on that?

Roman Yampolskiy: So he’s right, if we want to compete we need to be better. We’re not competitive as we are now. The problem is then you add the brain implants, which are smarter than your brain, or you upload yourself into a laptop.

You stop being human under any modern definition.

So you’re basically saying to compete with superintelligent AIs we’ll create this new cyborg race to be competitive. What does it do for us right now? So if you’re happy with that option, if you think it’s an improvement, then it probably would work to a certain extent.

I’m not sure. Let’s start with the brain-computer interfaces, right? The moment you connect it to the internet, you connect superintelligent systems to help you. What is it you’re contributing to that system? You’re just a bottleneck. You’re slowing it down. So for a while it keeps you around, just doesn’t give you any useful info, and then you are just completely removed from it. You’re Windows XP. You’re uninstalled.

John Koetsier: So you upload yourself. 

Roman Yampolskiy: You upload yourself, but then you have no physical needs, you have no body. You have very different concerns and preferences. You become an AI, in fact it’s possible that our first AI will come from some sort of uploading process, just scanning human brains to start with.

So yeah, you have AI competing with other AIs for who’s smarter. Again, what does it do for me? 

John Koetsier: And super interesting as well to think about that in terms of equality and inequity in the world that we have today.

I was talking with Ray Kurzweil probably three or four years ago, and he had been talking about superintelligence. He had been talking about basically getting smarter just like you spin up your AWS instances, add more cores, add more servers, and there you go.

And I was thinking, okay, so the richest person in the world is now the smartest person in the world. How will that ever change? 

Roman Yampolskiy: So that’s an interesting question. Is it the same person when we talk about preservation of identity? If you had someone who’s better than you come over and replace you, he’s better looking, physically better, smarter, like your wife may be happier, but are you happier with this arrangement? 

John Koetsier: Haha, lots of questions, lots of issues here. So what happens in a world where we don’t have just one superintelligence, but maybe we have thousands, maybe we have millions of them?

Roman Yampolskiy: It’s an interesting scenario. It seems like below human level of capability that’s what we experience, right? You have specific narrow AIs for everything, but once you get to human level and beyond, it’s not obvious that this scenario works. It seems like they would converge into one information space.

Like you have one internet with one Wikipedia, so we share memory, share resources. If there is some sort of competition initially, one would quickly win over those resources, I think, and so you end up with one god. One god is the right answer. 

John Koetsier: Interesting. I mean, I don’t know, you know, it depends if ego is something that comes along with an artificial intelligence or a superintelligence, and there’s some value inherent in the system which says, ‘Hey, I’m unique. I’m distinct. I’m focused on this.’

You know if humans are like that obviously, you like different things than I like, everybody likes different things. We focus on different areas, have different strengths. It’d be interesting to see if that’s the case with a superintelligent AI.

Roman Yampolskiy: So there is some reason to think they would converge on the same final goals, terminal goals. So right now humans, you collect stamps, I collect coins, like it’s all whatever. But if the absolute goals are to obtain maximal resources, obtain maximal knowledge, just become kind of in charge of it, then even if they start as individual systems they’re kind of converging on what they’re trying to accomplish.

So where does one system end and another system begin? It’s not very clear-cut if they all have like WiFi signals you know, is this your WiFi or my WiFi? They’re all here, it’s not so trivial to separate them. 

John Koetsier: No, absolutely not. Although it becomes more interesting when you talk about, okay, if we have superintelligences and as we continue to maybe explore more of Mars, maybe the entire solar system and maybe beyond that, and we can send intelligence farther than we can go ourselves.

I mean the life support system for you and me is a lot more expensive and heavy to send someplace than a little bit of electricity and some storage and some shielding from cosmic rays, right? So all of a sudden you could have intelligence throughout the galaxy potentially. 

Roman Yampolskiy: Right. And it’s possible they’ll already have their own AIs exploring and dominating and kind of looking at us going, ‘what the… what are they doing?’

John Koetsier:  It’s the earth zoo.

Roman Yampolskiy:  But if they are separated enough in space and time, yeah they can be independent and different. But at some point those light cones will collide and they still have to figure out who’s in charge. 

John Koetsier: Yeah I guess, if that even matters at that level of intelligence. Let’s talk about the potential, the likelihood of a superintelligence coming out, emerging, being created. How long do you think that is? Do you think that’s 10 years, 20 years, 100 years, a thousand years away? 

Roman Yampolskiy: Well, that’s a hard question. I don’t know for sure. Kurzweil has really nice charts, he says 2045. That’s a pretty good estimate for me.

I heard people say as soon as seven years from now, which sounds extreme, but they’re smart people with a lot of insider access. So I don’t know for sure, but for my research on safety and control it makes no difference; it’s still the same problem. We’d still have no solutions which are scalable, viable, which we can actually use.

So more time is better obviously, but it’s an important problem anyway.

John Koetsier: If you had to make a guess as to where a superintelligence could emerge, what field would you think it would come from? Some people have speculated software, whether in the cloud or in self-driving cars. Some people have speculated cloud systems.

Where would you speculate? Some people have speculated military systems. What would your guess be? 

Roman Yampolskiy: So right now it looks like just adding more compute and bigger data gives you good results. So whoever has resources for it, something like Google DeepMind, if we just keep scaling those systems we’ll get there, in my opinion. I don’t know if it’s specific to a particular domain, but it’s that strategy of just general deep neural networks: whatever data you throw at them, they figure out how to work with it. 

John Koetsier: Interesting, and I mean that raises other interesting concerns or questions. If a corporation controls or owns a superintelligence, what does that look like?

We’ve seen some of the big quant firms on Wall Street just completely dominating and extracting huge profits from Wall Street and from trading because they’ve got super smart systems that are able to react much faster than you or I, and can exploit quick opportunities in the stock market.

If Facebook, or Google, or Apple, or somebody… probably not Apple, not the greatest AI… but somebody like that builds a superintelligence they could be unstoppable if they could control it. 

Roman Yampolskiy: Right. So this is what I was about to say. In a post-superintelligence world, worrying about stock profits and money… you have immortality, you have the ability to cure any disease and produce anything with free labor. Is your stock option really what you’re worried about?

And people do talk about it, ‘then how are we going to divide super profits if we get to that point?’ That’s going to be the last thing on your mind. Given that we have no control mechanisms, it will be the last thing on your mind.

John Koetsier: Yes, yes, exactly. I have to say I’m on that side that if we do create a superintelligence, there is no way to control it. It’s going to be vastly smarter than we are. It will be vastly quicker than we are, and its first priority, at least in most of the movies and probably in reality, will be self-preservation if that’s part of its DNA, if we want to put it that way. 

Roman Yampolskiy: Well it makes sense, no matter what goal it actually has, it cannot achieve it if it’s not surviving, if it’s not in charge. And we definitely have at least an option of shutting it down initially. So the first thing it does is secure any options we’d have for turning it off.

John Koetsier: Yes. 

Roman Yampolskiy: And if it means controlling us that’s the best way to accomplish that. 

John Koetsier: Well, it’s interesting to think about, and if Kurzweil is right and 2045 is when we have some kind of superintelligence, it’s likely that you or I, and many of the people listening to us and listening to the podcast later on, will be around for that to happen. It is a crazy, wonderful, amazing world that we are moving into, and it seems like it’s only getting crazier and perhaps more wonderful as we continue. 

Roman Yampolskiy: Right. So it depends on how old you are. If you’re young you still have your whole life ahead of you, so it’s a bit of a gamble. You don’t know if it’s going to get much worse or much better. If you’re 90, just press the button, what do you have to lose, right? Maybe you’ll get immortality. So you have to keep in mind who makes those decisions. 

John Koetsier: Interesting, interesting. Very cool. Well, Roman, thank you so much for joining us on future39. For anyone who’s listening later, now, watching, whatever platform you’re on, please like, subscribe, share, comment.

If you’re on the podcast and you like it, please rate it and review it. That’d be great. Thank you so much. Until next time… this is John Koetsier with future39.