Have we already achieved AGI?
OpenAI just released GPT-4o. It’s impressive, and the implications are huge for so many different professions … not least of which is education and tutoring. It’s also showing us the beginning of AI that is truly present in our lives … AI that sees what we see, doesn’t exist just in a box with text input, hears what we hear, and hallucinates less.
What does that — and other recent advancements in AI — mean for AGI?
In this episode of TechFirst, host John Koetsier discusses the implications of OpenAI's GPT-4o release and explores the current state and future of Artificial General Intelligence (AGI) with Roman Yampolskiy, a PhD research scientist and associate professor.
They delve into the rapid advancements in AI, the concept of AGI, potential impacts on different professions, the cultural and existential risks, and the challenges of safety and alignment with AGI. The conversation also covers the societal changes needed to adapt to a future where mental and physical labor could be fully automated.
00:00 Exploring the Boundaries of AI’s Capabilities
01:36 The Evolution and Impact of AI on Human Intelligence
03:39 The Rapid Advancements in AI and the Path to AGI
06:38 The Societal Implications of Advanced AI and AGI
09:27 Navigating the Future of Work and AI’s Role
14:52 The Ethical Dilemmas of Developing Superintelligent AI
19:22 Looking Ahead: The Unpredictable Future of AI
Subscribe to the TechFirst audio podcast
Get a transcript of this episode of the TechFirst podcast …
Roman Yampolskiy: If you look at all possible tasks humans engage in: it speaks every language.
It can write poetry, generate art, play games. No human being can compete in all those domains, even very capable ones. So truly, if you average over all existing and hypothetical future tasks, it's already dominating just because it's so universal. It's beyond what a typical human is expected to do.
John Koetsier: Have we already achieved AGI? Hello and welcome to TechFirst. My name is John Koetsier. We saw OpenAI just release GPT-4o. It's impressive. The implications are huge for so many different professions, not least of which is education and tutoring. It's also showing us the beginnings of AI that is truly present in our lives.
It sees what we see. It doesn't exist just in a box with text input. It hears what we hear. It hallucinates less. What does that, and other recent advancements in AI, mean for AGI? To chat, we have Roman Yampolskiy. He's a research scientist. He's a PhD in computer science. He's an author. He is an associate professor at the University of Louisville.
Welcome Roman.
Roman Yampolskiy: Thank you so much for inviting me.
John Koetsier: Hey, super pumped to see you. Last time we saw each other was at the Beneficial AGI conference in Panama, which was great fun, and Panama was a wonderful place to be. Hope you enjoyed that.
Roman Yampolskiy: Oh yeah, I loved it. I discovered Panama. That was awesome.
John Koetsier: Exactly.
And the Panama Canal, which was cool. I wanna kick this off by just reading what you recently posted: in my opinion, current AIs, when their performance is averaged across various tasks, have already surpassed the intelligence of the average human. While top individuals still outperform AI in many areas, this gap is rapidly shrinking.
That sounds AGI-ish.
Roman Yampolskiy: Well, the best definition of intelligence, in my opinion, comes from Shane Legg. And to simplify it, he says it's the ability of a system to win in any environment. So we're not talking about narrow systems. If you look at all possible tasks humans engage in: it speaks every language.
It can write poetry, generate art, play games. No human being can compete in all those domains, even very capable ones. So truly, if you average over all existing and hypothetical future tasks, it's already dominating just because it's so universal. It's beyond what a typical human is expected to do.
We also, for some reason, think very highly of humans. I don't know if you get to interact with average people, but half of them are below average, and that's not an impressive level of performance.
John Koetsier: You remind me of the quote. I forget who it was. I'm not sure if it was Robert Heinlein or somebody who said... no, it's probably George Carlin.
The average person is a moron, and half are worse, or something like that. Of course, we all think we're above average. Everybody thinks they're above average. That is not always true, especially across different domains. I'm sure I'm below average in many areas. So, in any case...
Roman Yampolskiy: Of course, absolutely true, but we do have measures of general intelligence, and with those you can pretty accurately assess whether you are at the top or not.
John Koetsier: Yeah. So this is a massive, big deal, right? And we've always had this thought that after AGI, the singularity, everything changes. And it changes instantly, quickly. The chart of progress just goes straight up. Is that a false thought, perhaps?
Roman Yampolskiy: Well, it is happening pretty fast. The change is happening weekly. Every week there is a new model with new capabilities, improvements. It may not seem like it's going straight up, but if you zoom out, let's say there are 70 years of research in AI, most of the progress is within the last 5% of that timeline.
So it is starting to look pretty steep, and as it gets more general, I think it will accelerate in terms of being able to observe new knowledge, new capabilities. So it may not be instantaneous in the sense of one second after this model is released, but, like, it takes 21 years to raise a human, and they're known to be general intelligences. So if this takes three years to get to superintelligence, it's pretty quick.
John Koetsier: Yeah. What's another prerequisite for a singularity-type event? Because you can have this superintelligence, but if it's only, like, a genie in a bottle that we summon and put back in the bottle... it has to have some life of its own, does it not?
Roman Yampolskiy: So the switch from tool AI, where it just listens for your commands and tries to fulfill them, to agents, kinds of entities with ongoing sets of plans and goals that can create additional plans and goals: I think that's the game changer. And quite a few of those companies are now talking about creating agents for businesses, agents for societies, societies of agents to interact and get better performance by having so many of them, kind of a wisdom of artificial crowds.
John Koetsier: You also just quoted Sam Altman in another post. You should bring up that quote right now. The one about... it's not a...
Roman Yampolskiy: It's not a quote of Sam, to be fair. It's a journalist writing a very not-funny headline. Sam did not explicitly say the phrase. It's basically... what was the phrase? So the common dream of everyone is to have a killer app.
John Koetsier: Yes, and they're talking about agents being a killer function of AI.
Roman Yampolskiy: Yes. A killer paraphrase of what Sam said, but still not the best choice of words, as it could be really happening very soon.
John Koetsier: Yeah, we did just go to the Beneficial AGI conference, not the killer AGI conference. Not saying those don't exist.
There are people building military AI. So, talk about what this changes, right? Because we've seen the internet, we've seen the price of information, the price of data approach zero, right? As you look at the field of robotics, you can extrapolate out. And while it's very expensive right now to build and field and ship and use robots, you can see that the price of physical labor will approach zero.
With AGI, it looks like it will beat that. It'll beat blue-collar labor; white-collar labor could be at risk first. The price of mental labor could approach zero. How does that change everything?
Roman Yampolskiy: So it really depends on how far in the future you're trying to make your prediction. We used to say long term was 20 or 30 years until AGI, and short-term problems like technological unemployment were more immediate.
But now most predictions, prediction markets and top people, are saying we're three to five years away from AGI. So that completely changes our concerns. For me, it's existential risks. If you're still concerned about technological unemployment, then we're really looking at all jobs being automatable. It's not just low-level or specific occupations.
Really anything can be automated, and it looks like the robotics industry is catching up. There are multiple humanoid robot models which are quite capable already, and the progress is also exponential. So even physical labor, the difficult task of being a plumber or something like that, could also be automated.
John Koetsier: That’s a pretty good future if we do it right, that’s a pretty awful future if we do it wrong.
Roman Yampolskiy: Well, even if we do it right, in the sense of not getting killed by it, it's not obvious that people are happy with nothing to do. We all depend on having a place to go in the morning, and a lot of people derive meaning from being a speaker, a writer, a comedian, whatever it is you self-identify as. And if all that is gone, it's really a cultural crisis we're not prepared for. We talk very commonly about existential risks, suffering risks. We coined the term i-risks, ikigai risks, meaning your meaning is stolen from you.
John Koetsier: My hope and dream is that we'll find different ways of creating meaning that are not necessarily related to a job that provides the necessities of our life. But obviously that remains an open question. I wanna talk about one of the things that we brought up at the Beneficial AGI conference in Panama.
We talked about LLMs, and most people there, I think, maybe this is my perception, maybe I'm wrong, most people there seemed to think that LLMs by themselves were insufficient for AGI, that you needed some other components, whether it was like a superego to the ego or whether it was, like, an agent-type mechanism to direct.
What are your thoughts on that?
Roman Yampolskiy: Well, we haven't seen diminishing returns yet. Every new model is a lot more capable than the previous one. Like, nobody even knows what GPT-1 was able to do. GPT-2 was like, oh, that's really cute, put some money in it. But with three we really were impressed, and now we're at four, and five sounds like it's going to be pretty close to AGI if it's not there yet.
The same process continues with just tying it in: you have perfect memory, you have access to the internet, you have multi-agent architectures. You can brute-force a lot of narrow domains, and that AI will already, again, be able to outcompete most people in most occupations.
I think a lot of jobs today don't have to exist at all. They're BS jobs and they're there for historical reasons.
If we truly wanted to automate, a lot of low-skilled labor is fully automatable today.
You have your USPS mail delivery, you have your taking orders at McDonald's; all that we can do today.
It would be nice if we had a plan for what happens when all the jobs are gone. It's a big cultural paradigm shift. You cannot just do it overnight. You have to really change society. You have to change opportunities for people to engage with something productive. Those are big problems, and I think no one's spending enough time looking at them.
John Koetsier: Let's just amplify those last words that you just mentioned, because we're in an incredibly diverse, divisive era right now. There's so much anger and hatred, even politically, and that's global, right? I see that in the United States where you live. I see that in Canada where I live. We see that in Europe, all sorts of places.
Different ideas about what should happen with immigration, different ideas about what should happen with culture. The woke mind virus that some people are complaining about. This whole culture war that's going on. We're focusing on all these things, and our politicians are focusing on many of these things as well, including regional wars and other things like that.
And all these things are small little issues if you see this massive wave of change that is totally going to reinvent human society: what it means to work, what it means to think, what it means to have a job, how our economy is structured, how we allocate resources, who has power, who does not have power.
It seems like 99% of the planet has no clue that there's this wave that's about to hit.
Roman Yampolskiy: That's about right. And that's why I never waste my time on any of those issues. I will not be on the internet debating local governance.
John Koetsier: Smart man. Let's chat a little bit about OpenAI. You mentioned the GPTs that they've come out with.
I led off by talking about GPT-4o, the latest. They've had some shakeup, obviously. Sam Altman was briefly out, what was that, a year ago? Half a year ago? Then back in. Now the CTO or chief scientist, Ilya Sutskever, is out, and a couple of others as well. There was some talk, back when Sam was initially turfed, that people were revolting, a few, not many, because they felt like we're approaching AGI here and it's uncontrolled and we don't know where this is going, and we're freaked out.
Do you see any of these current shifts as part of that fear?
Roman Yampolskiy: So I have no insider knowledge. I don't really know why so many top safety people at those groups resigned. They also don't disclose it, and I think that's not good. They sign NDAs and they're not allowed to really say what happened there. I would be happier, if they were unhappy and something like AGI was internally developed, if they stayed behind and did something to mitigate the risk from inside rather than just quit.
If you have a security guard at the mall and shooting starts, you don't want him to quit. You want him to take responsibility, and if he does run away, we can hold him legally responsible for failing at his duties, I would hope. It's the same here. You were hired to do safety work. You are letting all of us down.
John Koetsier: Yeah, well, I guess we'll learn more about that in the future. Hopefully not as a singularity begins, but we'll find out. I recently chatted with Dan Faggella on TechFirst, and of course he was at the Beneficial AGI conference, and he talked about different approaches to AGI. Some approaches are like, forget it, don't even start. Other approaches are, hey, do it, but we need oversight. Other approaches are go full bore, no worries, no protections. Where do you fit on that spectrum?
Roman Yampolskiy: So we need to be specific about what type of AI we're referring to. Narrow AI systems are incredibly useful. They are tools; we should develop them.
They are great for research, for medical work. I strongly encourage monetizing them; deploying them is wonderful. Creating superintelligence, a truly general, more capable system which we cannot control, sounds like the dumbest thing we can possibly do. We'll build our own replacements. So unless you have a working safety mechanism in place, which no one claims to have, just don't work on more capable general AI.
John Koetsier: I don't see any way to put the genie back in the box. I don't see any way to stop development there. Certainly not across companies, certainly not across all nations. And there are very open questions: if you actually did develop a superintelligent agent, how would you... what does safety even mean there?
You can have all the systems you want, but if you've got something that's 10 times, a hundred times as intelligent as you are... well, we've seen that pretty much every security system that we've ever built is hackable, and for a superintelligent agent, good luck.
Roman Yampolskiy: Right, and it makes perfect sense from personal self-interest.
No one should be developing those systems. They would get you killed. If you are a young, rich guy with a billion-dollar startup, why would you wanna destroy all that? It sounds like there are enough convincing arguments that should convince them not to go in that direction. They would not even be known as a bad guy in history, because they'll destroy history.
John Koetsier: Yeah, we also invented the atom bomb and the hydrogen bomb and many other things like that. And I don’t think that many of the people who are building these have the ability to stop. They’re wired to keep opening the next door.
Roman Yampolskiy: I'm not saying you are wrong, but it seems like this is our best chance: to present convincing enough proofs of impossibility and deploy them to people who are smart enough to comprehend them.
They're smart enough to build those systems; they should be smart enough to understand you cannot build a perpetual safety device. It's like a perpetual motion machine. You need to always get it right: GPT-5, 6, 7, 400 can never be unsafe, never have a single bug. Despite learning, despite self-modifying, new hardware, malevolent users, nothing should ever produce a single bug, forever. That seems like a difficult challenge to me.
John Koetsier: I a hundred percent agree. I a hundred percent agree. And given that, it's unlikely we'll produce just one AGI, if we do actually produce an agent, and self-aware, even... I don't even know if that's... we'll get into that.
But if we're gonna produce many of them, some of them are going to be different. They're going to have different ideas and goals. Let's talk about that consciousness thing. Is that a requirement for AGI? Does an entity need to know it exists and be capable of contemplating its own existence to be an AGI? Probably not,
'cause you already said that you think GPT-4o is pretty much AGI as it stands.
Roman Yampolskiy: So those are two different concepts. I think self-awareness, in the sense that you understand you are an agent within a world model and you understand how you impact the world and how the world impacts you, is necessary, and I think those systems can do that.
Internal states of experience, qualia, pain: completely unnecessary for being a capable optimizer. I have no idea if you feel pain. I never tested it. I trust you when you say you do, but that's not relevant to anything.
John Koetsier: Love it. Okay, let's turn our eyes towards the future a little bit. Peer into the crystal ball. You've mentioned the three-to-five-year prediction that many have given, that the prediction markets are saying. Hey, that's AGI. You've already said, hey, what we have right now is pretty close, basically, in some senses.
What do you think the next three to five years look like?
Roman Yampolskiy: It sounds like they're gonna continue releasing more and more capable models. It's like watching a kid grow. You had a 5-year-old, now it's a 7-year-old. What's the difference between the two? It's hard to pinpoint specific major milestones at that age range, but clearly they're becoming more capable, and at some point they become smarter than you, hopefully.
John Koetsier: Well, and it sounds like that's a reality we are entirely unprepared for, as a culture and as a world.
Roman Yampolskiy: And again, I don't think you can prepare for something smarter than you. The whole point is there are unknown unknowns. If you were capable of making those predictions, you would be that smart.
We know the systems are unpredictable. They're too complex to understand. We cannot comprehend sufficiently large explanations, so there are well-known limits to what can be done in this space.
John Koetsier: David Brin, science fiction author and also astrophysicist, was at the Beneficial AGI conference as well, and felt like, hey, AI is coming,
AGI is coming, it will be dangerous, and the best option for us is that AI polices AI. Agree, disagree?
Roman Yampolskiy: I'm not sure how that could be implemented. You're basically requiring a Catch-22 where you need a friendly superintelligence to help you develop other friendly superintelligences, monitor them, supervise them. If there is an adversarial relationship, now we're collateral damage in these AI wars.
But the bigger problem is we don't have an already-aligned police-officer AI. You cannot have narrow systems monitoring general systems, and all we can verify are narrow systems.
John Koetsier: Yeah. Pretty challenging, the problem of intelligence, right? It's like our cat trying to police us. They can influence our behavior, but only as far as we want to be influenced.
Roman Yampolskiy: Yeah, I haven't seen examples of a lower-level intelligence being able to indefinitely control, not influence, control, a higher-level intelligence.
John Koetsier: Agree. I agree. Excellent. Well, thank you so much for taking this time, Roman. Do appreciate it.
Roman Yampolskiy: Thank you for inviting me again.
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon's head of robotics, GitHub's CTO, Twitter's chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.