When AI takes over … will you even notice? (Or, what your bird can teach you about AI …)

Huge chunks of our lives are already managed by AI. The songs we listen to, the routes we drive, the search results we see, the climate in our homes?

When AGI takes over, will you even notice?

In this TechFirst, we chat with Evan Coopersmith, a data scientist and AI researcher who says that when AI fully takes over … we probably won’t. And, he says, his bird can teach us a lot about what our future relationship with AI will be. And, in a less positive way, ants that are currently being exterminated in his building can teach us things we may not want to actually know about how AI might act in the future …

(subscribe to my YouTube channel)

Subscribe to the audio podcast: TechFirst is on all major platforms

 

Summary of key points, via GPT-4

  • Coopersmith makes a comparison between humans’ relationship with AI and his relationship with his pet bird, Beaker. He observes that Beaker considers himself intellectually superior because in every sphere that is familiar to the bird, Beaker excels. The bird is unaware of the broader context set by humans, just as humans might be unaware of a context set by a superintelligent AI​.
  • Coopersmith discusses the idea that humans might not recognize if a superintelligent AI were to take over, as they might not be able to understand the workings of a mind significantly more complex than theirs. He also mentions the varying ways humans treat different creatures based on their perceived intelligence, as illustrated by the care given to his bird versus the extermination of ants in his home. This could raise questions about how a superintelligent AI might treat humans​​.
  • The discussion moves to the concept of artificial general intelligence (AGI) and the potential for it to grow at exponential rates. Coopersmith believes it’s inevitable that AGI will continue to surpass intellectual hurdles that were once considered insurmountable for AI. The question is whether humans will end up being treated more like the bird or the ants in his analogy​​.
  • Coopersmith points out how AI has already surpassed humans in many areas, such as chess, and is now even passing the bar exam. He suggests that the pace of progress in AI development could lead to even larger gains in the future​​.
  • The podcast conversation also touches on the rise of large language models like ChatGPT and their capabilities in generative AI for various media forms such as words, music, movies, and images​.
  • Finally, Coopersmith envisions that AI’s role in our lives will gradually increase, becoming responsible for more and more decisions, to the point where it might be difficult to notice when AI has “fully taken over.”​​

 

And, a full transcript: when AI takes over

Note: this transcript is AI-generated and lightly cleaned up. Treat the recording as authoritative and this as merely indicative.

John Koetsier:
When AI takes over, will you even notice?

Hello and welcome to Tech First. My name is John Koetsier.

Big chunks of your life are already managed by AI. Your next song might be chosen by Apple. Your view of reality might depend on the first page of Google search results. And AI probably monitors and adjusts the temperature and ventilation in your house. AI chooses your optimal path to whatever destination you wanna get to in your Tesla drives and might apply the brakes if something bad is about to happen.

Today we’re chatting with a data scientist, an AI researcher who says that when AI fully takes over we might not even notice. His name is Evan Coopersmith, he’s a data scientist at AE Studio.

Welcome Evan!

Evan Coopersmith:
Great to be with you, thank you.

John Koetsier:
Looking forward to our conversation. Let’s start here. What did your bird teach you about AI?

Evan Coopersmith:
Okay, so everything that I understand about artificial intelligence is derived from a simple relationship with an adorable creature who I’m hoping behaves himself during this call. He may or may not. This is my cockatiel Beaker who is sitting on my arm.

And what I tried to do was to think about the world through Beaker’s eyes. What does Beaker think about the world?

Well, for one, he is rather confident that I am a dramatically inferior intellect to himself. Why does he think that he is more intelligent than I am? In every intellectual sphere with which he is familiar, spatial awareness, perceptions of threats, distinctions among little auditory cues, foraging for food, all of the ideas that are his perception of intellect, he is my superior. The fact that you and I can speak in complex English sentences and that we understand mathematical models and we can write code and regressions, these are abstractions for which he does not even have a template.

So he is certain that he is the more intelligent creature.

However foolish you may think the data scientists of the world happen to be, I’d like to think that my neural network is a little more advanced than Beaker’s.

Here’s the problem. Beaker smiles at me, thinks warmly about me, feels deeply and profoundly safe sitting upon me, as he does right now. He’s not trying to fly away. He doesn’t perceive me as a threat. But the entirety of his life, the parameters of his existence, are set by me and my wife.

All of this is our decision, and he is not aware of the paradigm in which he resides. So now I ask you the question, with what hubris would we conclude that if AI were our vast intellectual superior, that we would recognize the nature of our existence? Why do we assume we would be different than this lovely little animal sitting on my shoulder…

John Koetsier:
It’s a deep question. It’s a very, very deep question. I mean, it almost brings up the God question, right?

Because like, you literally cannot understand the workings of a mind, multiple orders of magnitude more complex than yours. And you can’t understand the decisions that it makes or, or the parameters it adjusts that you don’t even know about that you just accept as in the matrix before the red pill.

Evan Coopersmith:
That may be an appropriate analogy, and while that is science fiction and Hollywood and all of those sort of fantastical ideas used for storytelling, it’s not a bad template.

And what is important to recognize embedded in that template is the beneficence with which we treat the bird is not the primary manner in which we interact with intellects that are a few levels beneath us. In this house, as I speak to you right now, an enormous amount of chemical energy is being spent to eradicate all of the ants.

I live in a house in the suburbs, it’s springtime, ants are a pain, and so I’m sure the DuPont Chemical Company and any number of millions of dollars of research has been deployed to make all of those creatures die. While simultaneously, this bird will receive antibiotics and medical care at great cost, and I will offer a great deal of my time and energy and money to protect its life and make sure that it’s happy and secure.

So is this the matrix?

I don’t think so for the ants, unless you want to go to the part of the matrix where the sentinels are trying to invade the ship and kill everybody upon it. Then maybe.

John Koetsier:
Man that brings up all kinds of questions. Let’s say we get an AGI right … artificial general intelligence. Let’s say it starts growing at exponential rates. It starts becoming more and more and more intelligent Obviously it takes steps to secure its own existence.

Are we the birds? Are we the ants? Who knows?

Evan Coopersmith:
I think that’s the right question. I would like to be the bird. I do not want to be the ant. I assume you would generally agree with that sentiment.

What I would like to posit for you and for the audience is the only choice that we really get to make is are we the bird or are we the ants?

The idea that we’re going to remain intellectually superior would be an incredible act of hubris because you said if AGI grows exponentially and improves exponentially. I don’t think that’s an if question. I think that is an inevitability … pretty much every intellectual hurdle we asserted that AI would never be able to surpass, it has, so far.

And now we have other hurdles we say it will never surpass. Well, I think it’ll ultimately cross those thresholds too. It learned how to beat us at chess, and now everybody kind of dismisses this as not really an example of intelligence. And, you know, it won’t be able to write an essay the way a human can articulate thoughts with nuance.

And now it’s passing the bar.

I am fond of saying that it took us 66 years to get from a 12-second flight along the beach in North Carolina to landing on the moon. The pace of progress now is faster than it was in the early 20th century. Why don’t we think we’ll have similar, if not larger, gains in the same period of time?

John Koetsier:
It brings up an interesting question. We’ve seen the massive rise in large language models. We’ve seen ChatGPT, and there’s open source variants, stable diffusion, you name it, mid-journey, all that stuff, doing tremendously interesting things in generative AI for words, for music, for movies, for images, all this stuff.

A thousand researchers, I don’t know if they’re all AI researchers, and Elon Musk signed it as well, a thousand researchers signed an agreement saying, hey, we should issue a moratorium here, take a pause, wait, think about what’s gonna be happening here as we’re inventing this increasingly superior AI. What’s your thoughts about that?

Evan Coopersmith:
I think it is probably a nice start that is ultimately insufficient.

If somebody says, I would like to give you six months to get ready for an impending alien invasion, I’m going to ask two questions. The first is, do you think in six months you will be prepared? Do you have a plan for what you intend to do over those next six months? And what do you think your ability is to sort of cause the aliens to stay away from you for six months?

That would require a certain amount of coordination amongst the species … we just experienced as a species a global pandemic with a virus that killed between half a percent to a percent of all of those infected. We were unable to coordinate responses.

I don’t necessarily need to wax political here, but we struggled mightily amongst societies, nations, neighborhoods, neighbors to figure out what a coordinated response could, should, and ought to be. But we imagine we’re going to coordinate the technological progress across any number of nations with economic and military incentives to push onward. I’m doubtful that we will execute upon that given the context I just offered.

John Koetsier:
I’m beyond skeptical on that. There’s open source models right now. The genie is out of the bag … just because 10% or 50% or even let’s say some magical miracle 90% of AI researchers just take a knee. They take a pause. They sit on the sidelines. There’s going to be those ones in the dark. There’s going to be those ones in let’s say rogue nations. There’s going to be development.

Evan Coopersmith:
Yeah.

John Koetsier:
You can see how Google has been electrified by ChatGPT and how Bard is insufficient and they’re investing huge resources there. You see how Microsoft sees how their investment in OpenAI has made them much more competitive all of a sudden in terms of search with Bing than they were previously.

Evan Coopersmith:
You bet.

John Koetsier:
So there’s not going to be a pause here. If anything, the investment will 10x, 100x, and there’s going to be an acceleration. How do you see this playing out and how do you plan on being the bird not the ant?

Evan Coopersmith:
So if you were to ask me to take some probabilistic distribution of outcomes, I’m probably a lot closer to Eliezer Yudkowsky than Jan Lacoon, so I tend to be much more alarmist.

I think that the risks are very real and imminent.

If you read some of Dan Hendrick’s work about the evolution of species and how selfishness emerges to some extent naturally, because it is advantageous to deviate from a cooperative strategy, I think AIs pose similar risks to a certain degree, so I think it is entirely that we will, it is entirely likely that we’ll develop intelligence that is superior to ours because neural networks become increasingly powerful and more complex as time passes. So that seems almost inevitable.

And we don’t seem to have a mechanism to align those superior algorithms with the intentions and values and ethics and adaptability of the human species.

So now how do we become the bird and not the ant? Cause that’s, I think the only choice we get left. That’s the only agency that we have left is to perhaps position ourselves so that we are birds and not ants. So I think that becomes a question of a couple of things. What makes an intelligence pro-social? That’s not an obvious question. There was a piece written in the New York Times, it was Wall Street Journal, I’m sorry, by Professor Michael Graziano of Princeton, who was a neuroscientist, and he basically argued that AI, absent consciousness, would behave sociopathically. That was his argument.

Well, we don’t want the AI to be a sociopath because it’s going to be more powerful than we are … than we probably expect. So what do we have to do to give this algorithm a template where it’s inclined to act socially towards us? For one, we might be doing ourselves a disservice when we treat this nascent intelligence in its sort of child larval stage unkindly and judge it.

I wrote another sort of tongue in cheek piece entitled How to Raise a Sociopath, and I sort of was glibly going through the motions of how if you were a parent and you like offspring, what you might do, the disapproval you would show it, the manner in which you would demonstrate its inferiority, the way you would chastise it publicly. And I was doing this all sort of satirically, but I might be a little more concerned about this intelligence that we think is entirely different and unable to sort of respond to the way we train it. I would think long and hard about the priors, the teaching, and I would think a little bit about whether it has some sort of control structure more than simply a bottom-up response. This point we would suggest are just responses to stimuli in the form of the pixels of an image in the words of a sentence. And then they predict the next token appropriately. We might want to consider the insufficiency of that. That’s one.

John Koetsier:
Yeah. I think we’re doomed in that structure, in that environment that you’re talking about. I think we’re doomed because frankly, the priors won’t all be good. The priors will often be bad. And there’s going to be all kinds of inputs and conversations just like we couldn’t organize ourselves around COVID. I don’t think we can organize ourselves here.

It brings up thoughts of Isaac Asimov, three laws of robotics, right? Thou shalt not harm a human being … I forget the other two, but it was basically don’t harm a human being or through inaction allow a human being to come to harm. There was a second one about don’t harm yourself and a few things like that, unless it would conflict with the first law.

In Asimov’s building of that, that was ingrained from the bottom up.

It’s interesting that China recently announced that they are establishing some sort of committee commission and putting some laws around basically putting a digital superego in all the AIs that are being created there so that they will have parameters.

You can’t say things about Tiananmen Square or you can’t those sorts of things right?

Evan Coopersmith:
All right.

John Koetsier:
That’s gonna make it stupider. I don’t know if that’s gonna work …

Evan Coopersmith:
Well, so I’ll address this sort of Chinese constraint paradigm, because though we would find it abhorrent to say that you can’t respond to anything about Tiananmen Square, there’s any number of structures and guardrails that we embed with sort of reinforcement learning, which is what we do with ChatGPT, where we try to teach it how not to reveal and give information that we don’t want it to.

In fact, some of the researchers at AE Studio wrote a paper that won an award at NURIPS entitled Ignore Previous Prompt that was essentially explaining how you can get a large language model to give information you don’t want it to.

Even those constraints are dubious in their ability to hold.

But the real problem with any of those types of constraint architectures and reinforcement architectures is they presuppose we have some idea about proper values today. So let’s do this thought experiment. What do you think about human values from a couple thousand years ago? And keep in mind, this was probably a world that involved child sacrifice, where large numbers of young men had their skulls bashed in in warfare, huge numbers of women perished during childbirth.

Let’s see, other nations, genders, demographics, our ethics and our values were not particularly favorable by today’s standards. In fact, even if you ask people about values globally a hundred years ago, they will tend to look upon them unfavorably. Okay.

So again, I go back to this idea of hubris. Oh, but we have it right now. We have the values correct this time. This time we know. This time we’ve got the right values, and if we just teach the algorithm and constrain it with those values, it’s going to be perfect. Has there ever been a point in history where you would have been comfortable with somebody saying, okay, we got it right now, we’re stopping?

It’s the same idea of humans as the most advanced species. You know, the earth is six billion years old, human civilization is a blip, it’s ten thousand years, but we are the most advanced species on this planet right now. Okay, the game’s over. We will always be the most advanced species on the planet. You’re confident in that, are you? Right? It’s the same idea of why do we have the hubris to think it right and if we just hold things as they are all will be well and nothing will surpass us and nothing will deviate from our pristine ethics. It seems a little bit difficult to sort of grapple with, right?

John Koetsier:
Absolutely, absolutely. And it’s funny because we kind of laugh when we see, you know, China’s putting this sort of in place to govern their AI. We’re doing the same thing. Try and talk to Bing about controversial subjects or harm that AI might do to humanity.

Evan Coopersmith:
Right.

John Koetsier:
I know we have to cut this short. I know you’ve got a lot that you’ve got on your plate for the rest of the day. Where do we end this here? Doesn’t seem like a lot of hope.

Evan Coopersmith:
Well, I’m an optimistic person and a data science realist.

Those things can be in conflict with each other, right? I work at a startup that believes in increasing human agency with technology. This is a bootstrapped business that has no venture capital, no private equity, no outside shareholders that exists simply to create technology that takes the long view and does what is best for humanity. So this is why we do AGI research, because we’re not trying to make money with the next best algorithm.

We’re trying to get a better understanding of what things we can do computationally, architecturally, and mathematically that might slightly improve the probability that it’s on a path towards prosociality as opposed to the path that turns us all into paper clips or some other such horrible fate. And so we do the best we can.

But I think part of what one needs to begin with is this basic first principle of epistemic humility. I’ve tried to call out humans generally for our belief that we sort of know our superiority intellectually, that we know we hold the advantages, that we know we still have the control over the AI.

Are we sure, right?

The bird is sure that he has control over the situations that define his life, and he’s, at least as far as you and I are concerned, wrong. So I might say we end with a little bit of an optimism of our ability to help at least put the AI on a path that might be a little more sustainable for us, and that we have a little bit more humility about what we know, what we don’t know, and what we can feel confident in as we build. It’s the best I can do.

John Koetsier:
Brings to mind I had a conversation with Ray Kurzweil who I believe is still at Google right now.

He’s a well-known futurist, made tons of predictions, many of them have come true, and we were talking about AI and human intelligence and he envisioned the future of AI as some kind of Vulcan mind meld, and adding intelligence would be kind of as simple as adding nodes in the cloud to your server farm, if you want to put it that way.

And so you’d add additional nodes and there you’d go and there’d be some complex interplay between biology and technology and perhaps down the road in the future you wouldn’t need the wetware anymore. You quickly get into all kinds of questions about how that might work and ethics and everything like that. And also, if that happened, would wealthier people who can afford more cores in the cloud be smarter and totally dominate the rest of us?

So many questions here, we can’t delve into all of them. Perhaps we’ll have to chat again. Thank you so much for this time.

Evan Coopersmith:
Thank you so very much. It was wonderful to talk to you and I hope Ray Kurzweil is right.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Scully. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer, and much more. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice: