AGI in 3 to 8 years

When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence … the kind of intelligence that can teach itself and grow into an intellect vastly larger than any individual human's?

According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.

  • 00:00 Introduction to the Future of AI
  • 01:28 Predicting the Timeline of Artificial General Intelligence
  • 02:06 The Role of LLMs in the Path to AGI
  • 05:23 The Impact of AI on Jobs and Economy
  • 06:43 The Future of AI Development
  • 10:35 The Role of Humans in a World with AGI
  • 35:10 The Diverse Future of Human and Post-Human Minds
  • 36:51 The Challenges of Transitioning to a World with AGI
  • 39:34 Conclusion: The Future of AGI

(consider subscribing on YouTube?)

We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path toward AGI, highlighting the importance of combining multiple AI methods to mirror the complexity of human intelligence.

We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition toward a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet’s evolution, and consider the role of humans in a world where AI surpasses human intelligence.

Subscribe to the audio podcast

 

AGI in 3 to 8 years: transcript

John (00:02.474)
Hello and welcome to TechFirst. My name is John Koetsier. TechFirst is about smart matter, sure, but part of that is AI, distributed AI, centralized AI, intelligence that permeates the things we surround ourselves with. When will that intelligence approach and surpass human intelligence? Some think it already has, many think it hasn’t, some think it’s decades away.

To chat, we have the CEO of SingularityNET, Dr. Ben Goertzel. He’s the chairman of the Artificial General Intelligence Society. He’s the chief scientist at Mozi Health. He was the chief scientist at Hanson Robotics, has his PhD in math from Temple University, and was a professor of computer science at the University of New Mexico, among many other things. Welcome, Ben.

Ben Goertzel (00:48.482)
Hey, good to be here.

John (00:50.666)
Let’s start with the big question, when will we get AGI?

Ben Goertzel (00:55.938)
Well, if we wanted to define AGI as the creation of machines with the general intelligence of a really smart human on their best day, I would say we’re three to eight years from that, if I want to put a range on it. So I think we’re pretty close. On the other hand, we’re not there yet. And as we’ve seen, in one year we can have a lot of really material advance in the AI field at the current time.

John (01:28.106)
You’re talking about LLMs, I’m guessing, when you say in one year we can have a significant push forward. Where do you think LLMs fit on the path to AGI? Are they a part of it? Are they an interesting avenue? Are they a dead end? There’s lots of different opinions on that.

Ben Goertzel (01:47.01)
Yeah, as you’ve seen, some people believe LLMs basically just need a bit of improvement to become human-level AGI. Then on the other hand, you have, say, Yann LeCun, who runs Facebook’s AI division, who said that on the road to AGI, LLMs are an off-ramp, right? And I mean, my view is somewhat…

in between those two. I mean, I don’t think you need LLMs to get to human-level AGI. They’re not a critical technology for it. And I don’t think that just adding a few more bells and whistles to LLMs or making them bigger or something is going to get you a human-level AGI. On the other hand, I think they can be a powerful accelerant toward the creation of AGI,

both sort of serving as information feeders and information oracles to help teach early stage AGI systems and serving as components of AGI systems in various ways. Say one of the issues that an AGI system faces is what to pay attention to and applying an LLM just…

Training a transformer net on the histories of what goes on in the AI’s mind can help that AI mind decide what to pay attention to, right? So I mean, I think there’s a lot of obvious and non-obvious ways transformer neural nets, which are the main technology behind LLMs, can help build AGIs. But still, if I had to put a number on it, I mean, transformers may end up being 20% of your ultimate AGI architecture, like

not 90% but also not 1% would be my current best guess. On the other hand, my opinion is subject to revision based on what we learn. I think the perplexing thing about LLMs is, in a way, they’re narrow systems that look to us like very general systems. They’re narrow because they can’t go that far beyond

Ben Goertzel (04:10.946)
what they’ve been trained and programmed to do. On the other hand, what they’ve been trained and programmed to do is huge compared to what can fit in any human being’s brain, right? It’s like they’re narrowly constrained to a little halo around the huge field of stuff, which is like most human knowledge created so far. So it’s a weird kind of system; in a way it’s an unnatural kind of system.

So it is very artificial, right? But it’s super cool and powerful. The other thing is you might be able to obsolete 80, 90, 95% of human jobs without getting to AGI, because most of what we do to earn a living is repetitive of stuff that has been done before. Like maybe the LLM couldn’t figure out how to do it the first time, but not that much of human

productive labor is doing something for the first time. It’s that you’re shown how to do what other people did, right? So that’s been something I didn’t quite foresee. Like I sort of thought you’d have to get to AGI to have such a broad economic impact. But now you can see by making this different sort of system, you can have a huge economic impact, you know, even without having a system that can imagine and pivot and learn wild new things the way that people can.

John (05:42.314)
It’s really interesting because you gave what seems like a fairly aggressive estimate for the onset of AGI: three to eight years, I believe you said. And we talked about LLMs. LLMs have been the most noisy, hyped, busy area of development in AI in the last year. What are we not paying attention to? If LLMs are not necessarily the thing that gets us to AGI, what are we not hearing about that is still happening, innovation that’s still happening in AGI and other areas?

Ben Goertzel (06:20.354)
So I don’t think there are any big secrets here, actually. The field of AI has been around since the middle of the last century under that name, and it’s been around since a bit before that in terms of preliminary work that was happening. And deep neural networks, as we now call them, have been around at least since the 1960s, arguably the ’50s. But there have been other AI

paradigms around almost as long. So you’ve had logic-based AI systems doing sort of formal logical reasoning, uncertain reasoning, common-sense reasoning; those have been around since the sixties. You’ve had evolutionary learning systems that try to create stuff by mutating and combining what they’ve seen before, and these have been around since the 1970s. And I think these sorts of systems are going to come into their own

in the next few years for similar reasons to what has driven the growth with deep neural networks. I mean, more data, more processing power, more people banging on the problem, right? So what I think we’re going to see is sort of hybrid systems with a deep neural net aspect, a logical reasoning aspect, an evolutionary learning aspect, combined together in integrated

systems, and you can see what role that would play very clearly by looking at the shortcomings of current LLMs, right? So LLMs and other deep neural nets right now, they’re not very good at reliable, complex multi-step reasoning like you need to do to write a high-quality original scientific paper or something. Well, logic systems are good at complex multi-step reasoning. If you look at creativity,

LLMs are good at recombining stuff, but they’re quite derivative in what they create. Well, evolutionary learning, I mean, this is an algorithm that already has a bunch of patents to its name in various forms. I was using evolutionary learning to create music and imagery back in the ’90s, which in a way was more creative than the stuff we’re seeing out of deep neural nets now, right? So you…

Ben Goertzel (08:44.674)
You have other algorithms with a long track record behind them, other classes of algorithms that are good at exactly the things that LLMs suck at, right? And so, I mean, combining them together with LLMs in a hybrid architecture is an extremely natural thing to do. There are practical obstacles, because the way our whole software and hardware stack is built now has been very well refined for deep neural networks

and less so for these other sorts of algorithms. On the other hand, there are companies and teams of researchers working on addressing precisely this problem, and people working on these problems are finding it way easier to raise money and to hire people to help them than was the case before ChatGPT, right? So I mean, well, yeah, the bulk of AI resources are going into deep neural net stuff, but

that enthusiasm for AI is now so big and so broad that I think other species of AI are also having a much easier time than they used to, even if not as easy as deep neural nets.
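To make that hybrid picture a bit more concrete, here is a minimal, purely illustrative Python sketch of the general pattern Ben is describing: a neural-style component proposes candidates, a logic-style component checks hard constraints, and an evolutionary loop mutates and selects. It is a toy under assumed names (neural_propose, logic_check, evolve are hypothetical), not SingularityNET or OpenCog code.

```python
# Illustrative only: a toy hybrid loop in the spirit of the discussion above.
# Neural "proposer" + logic-style "checker" + evolutionary refinement.
import random

def neural_propose(n):
    """Stand-in for a neural net / LLM: propose candidate solutions (random bit strings)."""
    return [[random.randint(0, 1) for _ in range(8)] for _ in range(n)]

def logic_check(candidate):
    """Stand-in for a reasoning component: enforce a hard constraint
    (here: no two adjacent 1s), something a pure generator may violate."""
    return all(not (a == 1 and b == 1) for a, b in zip(candidate, candidate[1:]))

def fitness(candidate):
    """Toy objective: as many 1s as possible while satisfying the constraint."""
    return sum(candidate) if logic_check(candidate) else -1

def evolve(population, generations=50):
    """Stand-in for evolutionary learning: mutate candidates and keep the best."""
    for _ in range(generations):
        children = []
        for parent in population:
            child = parent[:]
            i = random.randrange(len(child))
            child[i] = 1 - child[i]  # point mutation
            children.append(child)
        population = sorted(population + children, key=fitness, reverse=True)[:len(population)]
    return population

if __name__ == "__main__":
    pop = neural_propose(10)                  # neural component seeds the search
    best = evolve(pop)[0]                     # evolutionary component refines it
    print(best, "valid:", logic_check(best))  # logic component vets the result
```

The division of labor is the point Ben makes above: the generator supplies breadth, the logic check supplies reliability on constraints it cannot be trusted to respect, and the evolutionary loop searches beyond what the generator proposed.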

John (09:57.738)
It’s fascinating to hear you talk about that, that AGI is probably going to be created as we combine all these different methods together. And that makes a sort of intuitive sense because if you think about how we process, there’s different types of things. There’s a background level of knowledge that’s stored. There’s immediate attention that is paid to what’s going on. There’s spatial intelligence. There’s other types of intelligence. There’s intuition. There’s reasoning. There’s logic.

There’s logical gaps and fallacies that we fall prey to, but there’s different kinds of engines that combine to form whatever is going on inside our brains.

Ben Goertzel (10:40.77)
Yeah, absolutely. I mean, a human brain contains many different sub networks with different architectures, different mixes of neuron types, different mixes of neurotransmitters. And each of them was evolved over a period of time to serve certain functions. And the deep neural nets that we’re looking at now in computer science are a model of just a few regions of the human brain.

really, I mean, a lot of it came from modeling sort of primary visual cortex, and mostly just feed-forward activity from the sense organs into the cognitive center: mostly feed-forward activity of visual cortex, a bit of auditory cortex. So there’s a lot going on in the brain that is not touched by the neural network models currently being used. And I have no doubt you could make a human-level AGI with just

formal neural networks, but they won’t be transformer neural nets per se. It would take a variety of different neural components with different architectures connected together. And I think it’s also an interesting approach to make a system where some of the components are formal neural models and some are just different sorts of computer programs. I mean, I think that’s another point about AGI, though: you wouldn’t expect there to be just one

approach to making a human-level AGI. I mean, the well-worn but still apt metaphor is to flying machines, right? I mean, you got airplanes, you got helicopters, you got blimps. Freeman Dyson schemed up a starship that explodes nuclear bombs behind itself. Boom, boom, boom, boom. I mean, you got a lot of backpack helicopters. You got a lot of different ways to fly. The thing is, if you have a theory of aerodynamics, then…

Then with your theory of aerodynamics, of course, you can try to understand the strengths and weaknesses of all these flying machines. We don’t have that sort of fully fleshed out theory of general intelligence yet. On the other hand, even in aerodynamics, in the end, you’re doing wind tunnel experiments to see if your thing is going to fly, right? I mean, even with a solid theory, there’s a lot of experimentation. So I think…

I think there’s going to be a variety of approaches to AGI. On the other hand, the dynamics with AGI and its development are a little different than with flying machines, because the first really successful flying machine didn’t build even better flying machines, which then built even better flying machines, right? Whereas with AGI, there’s a sense in which whatever gets there first to being a full-on human-level AGI, then that AGI itself,

John (13:21.93)
Yeah, exactly.

Ben Goertzel (13:33.442)
can build the next level of AGI faster than the competing human teams are going to be able to do it, right?

John (13:40.682)
But what’s really interesting about how you’re saying AGI could develop, which is a combination of multiple different methods and multiple different types of reasoning, thinking, memory, other things like that, compute, joining together in sort of a simulacrum of actual or biological intelligence, the implication of that

is very different than sort of the popular conception of what AGI would be, which is sort of this omniscient, never wrong or almost never wrong, coldly, logically processing machine. Because if it has these multiple components, these multiple competing engines in some sense, similar to what’s going on in a biological brain, it almost starts to have some sense of there being some sort of

subconscious in an artificial general intelligence. There’s some competing initiatives and what wins out, and you almost wonder, is there a super ego? Is there this sort of…

Ben Goertzel (14:48.098)
Well, I think once you have what I would call a super intelligence, meaning an AGI that is significantly more intelligent than the totality of the human species at its best, right? In the same way that in some ways you or I is more intelligent than the…

you know, the totality of all pygmy shrews on the planet or something, right? So once you have a super intelligence, I think in many senses a super intelligence will always be right relative to human understanding anyway. I mean, it will be able to resolve matters of fact that are confusing to us very, very simply, right? Now, on the other hand, that doesn’t mean…

it will be coldly rational in the sense that we’re thinking. It’s not necessarily going to be, you know, Mr. Spock from the original Star Trek or something. It may have all manner of complex dynamics that we cannot understand any better than a mouse, a worm, or even a chimpanzee can understand

you know, the politics inside Microsoft or something, right? So, I mean, having a superior ability to make hypotheses and evaluate hypotheses relative to data doesn’t necessarily imply not having an unconscious, doesn’t necessarily imply not being intuitive or being unemotional or any of that. I mean, I think there’s a tendency to anthropomorphize

and think, well, what would I do if I was in the AGI’s point of view? But that’s not really meaningful. You see the same thing with people’s worries about, you know, once an AI takes power, it won’t need us. It will turn us into batteries. It will mulch us to make synthesis gas to feed its engine, right? But I mean, the whole idea that power corrupts and absolute power corrupts absolutely, it’s a statement about

Ben Goertzel (17:11.958)
human nature and human psychology, right? It doesn’t have to be the case for every type of intelligent system. I mean, we evolved as we did for specific and well -known reasons, which are not actually relevant to the life of an engineered AGI system, which is not to say we have a guarantee that super intelligence will be nice, friendly, and shiny. It’s just to say that…

We shouldn’t assume the opposite just because we think like, what if Donald Trump had an IQ of 10,000 and superhuman powers, right? Because I mean, we’re engineering different kinds of systems. And unlike having a human baby, I mean, we are engineering the system. There is an emergent and spontaneous, unpredictable aspect, yes. On the other hand, we are architecting and building its mind, right? So I mean, there’s a

certain level of design and control we have that just isn’t the case with human beings.

John (18:16.266)
But that we is a very complex we, because that’s kind of a general we, like a humanity we, and there’s very different people. I mean, just the other day, Gab, the ultra-right-wing social network, released about a hundred different LLMs, AIs, whatever you want to call them, GPTs.

Several of which say the Holocaust never happened, others similar things like that. And I’m not saying that’s gonna be the general reality out there, but there’s AI teams in Russia and China, in Africa, in North America, South America, that are gonna come from very different ideological perspectives,

want different things, demand different things. For instance, in China, it’s not gonna ask questions about Tiananmen Square or things like that. So we are very involved in creating this intelligence and that’s gotta have some impact on what it evolves to be.

Ben Goertzel (19:14.594)
Yeah, I mean, I think so. It’s interesting you mention all these places, because in my own organization, SingularityNET, I mean, we have people in all these places contributing to the same system. I mean, we’ve had an AI lab in Addis Ababa, Ethiopia since 2014. And I’ve worked with a whole bunch of AI developers from Novosibirsk and St. Petersburg in Russia. Now, I lived in Hong Kong for 10 years.

John (19:30.25)
Sweet.

Ben Goertzel (19:41.986)
My wife is an AI PhD from Xiamen University. So we have a bunch of connections in the Chinese AI community. Then we have an office in Belo Horizonte, Brazil, with people I’ve worked with there since… when would it have been? Since ’98, I think, in Brazil. So, I mean, we have people from all around the world, and I haven’t given the full list because it gets boring, but we have people from all around the world contributing

to a common open source, decentralized proto-AGI platform right now. And yeah, there are, of course there are differences. And I mean, I know we’re working with a couple of groups in mainland China who were very interested in getting the I Ching and Lao Tzu and sort of classical Chinese thought into a knowledge graph to condition the dialogue and learning of an LLM. And it’s like,

How do you make a logical reasoning system to help interpret the I Ching hexagrams in the context of natural language dialogue? I mean, there are pretty cool cultural differences that come up. On the other hand, I mean, if you look at the baseline, everyone we’re working with in every country wants to make machines that are compassionate to human beings and that advance…

science, that advance medicine, that will take care of old people, that will help educate children. I mean, there’s quite a lot of commonality there, right? So I mean, of course…

John (21:18.41)
I agree, but you remember Isaac Asimov and the three laws of robotics, and how certain robots were given a different definition of human in order to be able to harm certain, well, what we would call humans. So yeah, I agree with what you’re saying, it makes perfect sense, but there are still challenges.

Ben Goertzel (21:38.562)
I think a point though in terms of geopolitics: you can see government is following the evolution of open science and open software, and not vice versa. Science has been open, new AI algorithms are on arXiv.org, new AI code is on GitHub and GitLab and other such repositories, from Russia and mainland China, even from Iran.

You have open science published. You have open source software, which has driven the AI revolution. Now, not all trained models are open, but a lot are. I mean, Google just released Gemma, which is a new model. Facebook has released a lot of open stuff. Alibaba has released a lot of open stuff also in China, right? So on the whole, so far, we’d have to say governments are following what happens

in the open AI community, much as has happened with the internet, right? In the internet context, governments have followed what happened in the open networks and companies are being forced to play along with open networks or else be left behind, right? So, I mean, it’s quite different than in the development of weapons technology or something, right? Where, I mean, the development of space lasers is primarily by governments who are…

want to defend their borders. The development of AI could have been that way. Like if you remember the old movie Colossus: The Forbin Project, which might have been the late ’60s, early ’70s, like a Cold War era movie, the US and Russia built these huge AI supercomputers, and that was the main nexus of AI on the planet. They sent them to destroy each other. In the end, the AI supercomputers made friends and took over and shut down the war or something. But that’s how people thought AI would develop then, though. Like it would be military supermind versus military supermind. It could have been; I mean, AI was founded in a way by the US military and DARPA and all that, but it’s not what’s happening, right? It’s unfolding more like the internet or Linux, with sort of an ambiance of open networks that companies and governments have to play along with, which I think is very, very positive from my own sort of a…

crypto-libertarian anarcho-socialist sort of perspective, right?

John (24:11.69)
That is a good segue to talk about AI and who owns it and who controls it, which becomes complex when you bring AGI into it, because owning and controlling an intelligent or superintelligent system is either awful or horrible or just insanely laughable because it’s impossible. But let’s start with where we are today in terms of owning and controlling AGI.

Because as it stands, most of the AI systems that the average person interacts with are owned by a company or organization and are built and maintained and crafted to serve the interests of that company or organization. Much of the science fiction that we read revolves around people having engagements and interactions with AI systems that either they control or that are

allied with them in some way, shape or form. And we’re starting to sound very science fiction here, but who owns AI as it gets developed? And how can I ensure that an AI system that I want to use for, let’s say, finance or maybe just managing my life is actually operating in my best interests and not the best interests of whoever wrote it, created it, maintains it, runs it?

Ben Goertzel (25:39.586)
Yeah, I think you could ask who owns the internet or who owns the Linux operating system. And in both of those cases, the answer is basically nobody or else a lot of parties of different sizes and orientations in a very complex way, right? Like obviously some big companies own more of the internet than the average person.

No one owns enough of the internet that they can take it over or shut it down unilaterally. And Linux operating system, pretty much the same way. I mean, there’s a kernel team, but if that whole kernel team was kidnapped by aliens, I mean, there’s a lot of us out there who could form a new kernel team if we had to do it, right? So, I mean, I think these are models for how the ownership and control of AI could develop.

and how I think it should develop by my best understanding. And I think there’s some pointers in that direction, like the openness of the research and code underlying the vast majority of AI systems. But of course, it doesn’t have to be that way. It could go a different direction. It could be that once there’s a real breakthrough,

human level AGI, then governments try really hard to crack down on it. And AI researchers are not allowed to get out of their country. They’re not allowed to cross the border. And it becomes illegal to upload AI code to a repository. I mean, you could see a fascist crackdown once AI gets really, really serious. I don’t think we’re going to see that. But I mean, it’s not.

It’s not entirely unthinkable that this would happen. I also think there are a lot of subtleties to openness and decentralized control in the AI context, because it’s not just about the code and it’s not just about the algorithms. It’s what happens when the code and data and processing power come together. And right now,

Ben Goertzel (28:01.538)
while big tech companies are opening their algorithms and code, they’re not opening the data. And often they legally can’t, because the data is stuff they’ve collected with confidentiality agreements in the course of their business model, right? Yeah, well, it’s, it’s a mix. No, but I mean, if you look at Google, Google has a lot of data like that, that they got from our chats or something, or Google Voice; Google trained their voice models on Google Voice conversations. I mean,

John (28:13.61)
Or they’ve just grabbed it from wherever and they don’t want to reveal that.

Ben Goertzel (28:31.458)
Even if I tend to believe them that they’re not using our private communications in any untoward way, they’re using the sound of the voice, right? But I mean, they still, that’s how they got their voice models to work well, was from what we said through Google Voice and Hangouts and all that, right? So yeah, OpenAI seems to have just, they just used Common Crawl, which is a huge spider of the web, right? So, I mean, so anyway, the data.

There’s so much data that it’s hard for an average person to download. Like I could download the code and the algorithms, they’re pretty small files. The data, I would need like a huge bunch of hard drives to store. And then, like, I know how to train, not GPT-4 exactly, but I know how to train Mixtral, which is an LLM almost as good as GPT-4, right? The code is open, it’s all pretty clear.

Ben Goertzel (29:30.914)
I mean, you need at least tens of millions of dollars of hardware, right? So, I mean, this is one issue for openness: actually doing AI yourself at the modern scale requires more than the code and algorithms, which of course is part of why big tech companies are so willing to open up the code and algorithms, because they know you need to have a shitload of money to download all this data

and buy or rent all these servers, right? Now, on the other hand, that still is freeing compared to not having the algorithms and the code, right? Because, I mean, there’s a lot of companies and countries that could set up a big server farm, and there’s a possibility to glom together a whole bunch of decentralized resources owned by a large number of parties and pool them into a data and compute network, which is what we’re doing in SingularityNET. So what…

Ben Goertzel (30:26.786)
One thing we’re looking at with the SingularityNET project and other projects like HyperCycle in our ecosystem: we’re getting crypto mining farms to repurpose some of their machines to run AI processing rather than mining crypto. Because, I mean, they’re already set up. They’ve got electricity. They’ve got cooling, right? And they just need to upgrade the GPUs and CPUs a bit. Then the network of crypto mining farms becomes a huge network of computers that can be used to run AI, right? So I think…

Yeah, yeah, I mean, so it’s not trivial to glom together the data and compute power. On the other hand, it’s also not impossible. It doesn’t require rare earth materials or plutonium or something, right? I mean, it’s just commodity hardware plugged into the wall, networked together; download code, write a spider to download some data. So it does take money, but…

On the other hand, it’s kind of something anyone can do. And this is part of what I think will make it hard to have an autocratic crackdown on AI, even if governments want to. Because if one country tries to seal itself off, but nobody else does, that country will then fall behind. Even if it’s the US, which invented AI, it would still fall behind if it closes itself off to overseas researchers. And…

And then in that case, American researchers would either sneak out of the country in disgust or just stop doing AI in disgust. Like you can’t put a gun to a researcher’s head and say, like, be more creative than the enemy, right? It doesn’t work. Like the Manhattan Project worked, but that’s because the scientists actually believed what they were doing was right and made sense, right? But, but I mean, AI researchers don’t believe AI should be siloed off like that. So I just…

I don’t really see the dynamics coming together to make that happen, which means I think it’s most likely AI will develop in some form in these complex global open networks, like Linux and the internet. Which is not to say that some companies or governments won’t have a lot of power, but I don’t think any of them is going to have, like, autocratic power, or even that there will be an oligopoly of, like,

Ben Goertzel (32:51.65)
three companies ruling everything. I think it will be more heterogeneous than that, which has pluses and minuses from the big picture.

John (33:01.962)
It does. We have to start bringing this to a close. Maybe let’s end here with the role of humans and humanity in an ever-evolving AGI and AI scenario. I met and chatted with Ray Kurzweil a few years ago, and we were talking about AI and what humans would become.

He thought that eventually we’d kind of add cores to our brains, sort of like adding servers to a server farm or an Amazon cluster or something like that, so that we would boost our compute power, our available recall, other things like that, and be able to kind of compete in a machine-dominated era.

You see others like Elon Musk with Neuralink saying, hey, how can I connect myself to a machine? How can I add power to a brain? Of course, the first attempts there are more around medical and assistance-related objectives. How do you view humans in a world with AGI and in a world of artificial intelligence?

Ben Goertzel (34:18.37)
I think the answer will be quite diverse and heterogeneous rather than there just being one answer. You may have many, many genres of human and post-human mind, just as we have endless genres of music on the internet now, right? So I mean, I think you will see subcultures that want to remain like legacy humans, such as we have the Amish and others on the planet now.

You will see people that want to remain legacy humans, but without death and disease and mental illness and all that, just like live your best life as a human. Certainly you will see cyborgs who will want to pack extra cores into their brain. I think you will also see people mind upload, just upload your brain into a digital substrate, live in a virtual world, but then let that be the seed and let yourself grow. And then within some number of cycles, you may become something utterly different than how you started.

And you might even put the memories of your original human life into cold storage somewhere, because you don’t need to consult them any more than we now need to consult, like, what we did every day in preschool or something, right? So, I mean, I think you will see all of these options. And it might be that any one of us has a chance to partake in numerous of these options. Like you could have one of you living its best human life, and another one mind-uploaded that, you know, romps around in virtual reality and, you know,

multiplies its intelligence by a hundred times, right? Because once you’re copied into a digital substrate, then, I mean, there doesn’t have to be just a single instance of a person either. So I mean, I think the possibilities are wide open and will be quite incredible. And I’m a big optimist about how amazing post-singularity life will be. I’m more worried about the path there,

which is one of the big topics at the Beneficial AGI summit and unconference that I’ve convened, and that we’re holding a few days from this conversation we’re now having. Because I think once we get to an AGI, I think it probably will be beneficially oriented toward humans as we’re creating it. And we pretty much all want that. I think that’s pretty much what we’re likely to build. But it’s going to come step by step.

Ben Goertzel (36:45.826)
I mean, if it takes eight or 10 years to happen, that’s a blip on the historical timeframe, but it’s meaningful in our lifespan. Like I have a six-year-old and a three-year-old kid; it’s more than their whole lives, right? So during that five, eight, 10 years, while AGI is getting smarter and smarter, it’s obsoleting jobs here and there, it’s disrupting supply chains and taking over industries here and there. We already have a rather…

screwed up global social and economic system. We have major wars in different parts of the world. I mean, the developed world is ridiculously unwilling to help literally starving children in the developing world. Like, how does all this mess get disrupted on the path to a singularity? That’s much less clear in my mind than how amazing things can be once you have a beneficial human-level AGI.

The inability of our national and industry leaders to deal with much simpler things that we face now on the planet doesn’t give you a lot of faith that they can deal with the advent of AGI in a well-planned, rational and compassionate way, which leads one to think it’s just going to unfold and self-organize. Look how badly we dealt with the pandemic, right? The only good thing we did was develop vaccines. Obviously, I’m not an anti-vaxxer, right? But that, I mean, that was…

That was a triumph of science that was done by national lab, university, and industry researchers collaborating in the way science does, right? But I mean, other than that, our global political systems dealt with that ridiculously badly, and it’s a much smaller thing than the advent of AGI, right? So, I mean, I think there is a lot to worry about,

Ben Goertzel (38:40.322)
particularly as regards impact on the developing world during the transitional period, I think. On the other hand, there’s a lot to be excited about also because I mean, if we have a system that’s well disposed toward us and that’s 10 times smarter than us, it’s gonna be able to rapidly solve a lot of problems that seem intractable to us at the present time, which is…

a very logical conclusion, but emotionally very exciting to consider.

John (39:16.81)
Ben, there’s one thing we can say about the future, and that is it will be interesting. Thank you so much for this time. I look forward to seeing you and being part of the conference next week, and talking to you soon.

Ben Goertzel (39:30.498)
Thanks a lot.

TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech

Made it all the way down here? Wow!

The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley. The head of Facebook gaming. Amazon’s head of robotics. GitHub’s CTO. Twitter’s chief information security officer. Scientists inventing smart contact lenses. Startup entrepreneurs. Google executives. Former Microsoft CTO Nathan Myhrvold. And much, much more.

Subscribe to my YouTube channel, and connect on your podcast platform of choice:

Subscribe to my Substack