What happens when the race to build thinking machines turns into a contest for power? Will AGI kill us or save us, or will those who make it enslave or impoverish us?
In this episode of TechFirst, I sit down with visionary AI researcher Ben Goertzel, the founder and CEO of SingularityNET. We talk about the future of AGI and superintelligence, and map out the four contenders vying to build the first true artificial general intelligence (AGI).
From China’s state-corporate alliance to U.S. tech giants, from government labs to open-source coalitions, the stakes couldn’t be higher. Because AGI isn’t just another tech milestone. It’s the launchpad for superintelligence, and whoever lands first will define the rules of a new world.
Watch or listen here … check it out, and subscribe to my YouTube channel:
AGI podcast: summary
Why this matters now
Goertzel argues that AGI — systems that can generalize beyond strict training regimes — may be just years away. On our podcast he stated:
“Could we get there in two years instead? We absolutely could. But … if it takes four years … I won’t be utterly shocked either.”
That accelerates what many assume is a distant future to an urgent present.
Defining AGI and superintelligence
In his 2005 book Artificial General Intelligence, Goertzel defined AGI as “a system that can generalize pretty far beyond its training or its programming … at least as well as people can.” He draws a clear line between AGI and superintelligence:
“A human-level AGI would be … a system that can generalize beyond its history … And then you get … superintelligence, which is systems that can generalize … way, way better than people.”
In other words: AGI is the launch point; superintelligence is the leap.
The four players in the race
According to Goertzel, four major camps are racing toward AGI — and they’re not equivalent.
- China (government + major companies): state-backed, mission-driven, tightly integrated. If China wins, you might see exceptional infrastructure and optimization, but also surveillance, central control, and state priorities baked into the intelligence.
- United States government: with research funding, defense contracts, and intelligence infrastructure, the U.S. government is a frontrunner. But a government-first AGI could emphasize military, intelligence, and security uses, raising global risk if adversaries respond in kind.
- Big tech in the U.S.: the tech giants hold data, compute, talent, and distribution power. If they win, AGI could be tightly integrated into apps, platforms, and services, optimized for profit, platforms, and consumer reach rather than purely for the public good.
- Open-source / decentralized challenger: Goertzel champions this "dark horse" path of AGI built as distributed, open, blockchain-enabled infrastructure. If it succeeds, the outcome could be democratized intelligence. But broad access brings broad risk: bad actors, rogue states, and misuse become far easier.
Different winners, different futures
Each scenario maps to a distinct future:
- China wins → Efficiency + control. Prosperity bound up with surveillance.
- U.S. government wins → Dominance + security. Global force projection plus the risk of an arms race.
- Big tech wins → Commercial AI everywhere. Convenience, but concentration of power.
- Open-source wins → Broad access + innovation. Empowering many, but also exposing many.
Optimism & risk
Goertzel is not a doomsayer. He told me:
“We can’t predict with certainty … but there’s every reason to believe we can create artificial general intelligence systems that will be beneficially disposed toward us … Now is the time to make it happen.”
Still, he stresses the risk:
“We do not have a clear picture regarding how much influence our specific actions will have on the nature of the Singularity we get.”
Regulation on the horizon
Echoing the urgency, a recent open letter signed by 800+ business and tech leaders called for a pause on developing superintelligent AI until public consensus and safety protocols are in place. The message is clear: the race is on, and some are calling for the brakes.
Take-aways for business, policy and society
- AGI may arrive in years, not decades; boardrooms need to internalize that reality.
- Who builds AGI will shape the values, architecture, and control of the next intelligence era.
- Centralization vs. decentralization is not just a tech question; it's a system-design question for civilization.
- Risks are not hypothetical. Economic disruption, power shifts, surveillance, and conflict are real possibilities.
- Optimism matters: a beneficial future is possible, but only if we choose design, rollout, and governance intentionally.
What you’ll learn in this episode
- The meaning of AGI vs. superintelligence, and why the distinction matters
- Why Goertzel believes AGI is "basically inevitable"
- How four distinct power blocs are vying to build the first AGI
- What the four future scenarios might look like, and which one you might prefer
- How individuals, companies, and governments can get ahead (or fall behind) in this race
- Why the open-source ecosystem could be the wild card, and what that means for you