What happens after AGI?
AGI is artificial general intelligence: it’s when AI achieves human-level intelligence and likely, quickly thereafter, super-human abilities, maybe even ushering in the Singularity.
I was recently at the Beneficial AGI conference in Panama. One of the speakers was the founder of Emerj Artificial Intelligence Research. He’s interviewed nearly 1,000 AI leaders, his name is Dan Faggella, and he has some good insight into what AGI might do.
Or at least what the experts think about it …
(consider subscribing on YouTube?)
We discuss artificial general intelligence (AGI), the potential for post-human bliss through advanced simulations, and various perspectives on AGI’s ethical and societal impacts. Faggella shares insights from interviews with nearly a thousand AI experts, outlining a matrix to categorize thoughts on AGI’s future and human interaction. The discussion covers the balance between control, collaboration, and open-source development in AI, along with personal reflections on humanity’s potential paths in an AI-dominated future. Themes include the ethical implications of AGI, the role of human values in AI development, and speculative futures where humanity merges with or is overshadowed by superior AI entities.
00:00 Exploring Post-Human Bliss and the Power of AI
01:31 The Matrix of AI Perspectives
02:50 Exploring the Future with AI: Preservation, Progression, and Ascension
04:26 Navigating the Path to AI: Control, Collaboration, Openness
07:11 Personal Stances and the Future of AI
19:00 AI’s Impact on Society and the Future
24:23 Envisioning a Post-Human Future: Choices and Consequences
29:53 Reflections on Humanity’s Path Forward with AI
Transcript: After AGI with Dan Faggella:
Dan Fagella: We would be able to simulate a trillion years of vastly expansive post-human bliss in all possible ways. I’m talking about, you know, Shakespeare writing his best sonnet. Falling in love, uh, eating ice cream, but multiplied by a thousand in all directions that we have no, no words for simulated for a trillion years, but in, in actual earth time, maybe it would only be six hours or something, uh, but some, some little, uh, sugar cube of quantum compute where we could kind of experience that.
John Koetsier: What happens after AGI? Hello and welcome to TechFirst. My name is John Koetsier. AGI, you probably know, is artificial general intelligence. It’s when AI achieves human-level intelligence and very likely, quickly thereafter, super-human abilities, maybe even ushering in the Singularity. I was recently at the Beneficial AGI conference in Panama.
One of the speakers was the founder of Emerj Artificial Intelligence Research. He’s interviewed nearly a thousand people: AI leaders, executives at Fortune 500 companies. His name is Dan Fagella, and he has some very good insight into what AGI might look like, what it might do, or at least what the experts think.
Welcome Dan.
Dan Fagella: Hey John, really glad to be here. Thanks so much.
John Koetsier: Super happy to have you. And I’m gonna share something to the screen, because I want to share one of the things that you showed when you gave a presentation, and I wanna kind of walk through it and what it looks like. You talked about all the people you’ve interviewed about AI, and you’ve basically kind of put them in boxes, in a sense: where their feelings land about what they should do, what we should do with AGI. Talk about that a little bit.
Dan Fagella: Sure. I’ll, I’ll, uh, I’ll just walk through sort of the matrix itself.
Um, and just for clarity’s sake, I don’t see this as some kind of cogent category that you pin on yourself and identify with for the rest of your life. I see this as a guide for conversation, because I think that a lot of these polar dynamics, uh, you know, accelerationism versus kind of doomerism, are becoming this bipolar thing that I think is actually not representative of the diversity of discourse, and that conjures an us-versus-them instead of an open dialogue around: why do you think that? Where would you like to go? And more, at a higher level. So I don’t want anybody to think that this is intended to be more concrete than it is. It’s a tool for discourse. Um, with that said, I’ll just kind of go over it at a very high level here.
Across the top, we have, roughly speaking, where you hope we land, species-wise, man-machine-combination-wise. What is a positive future? Um, I talk a lot about preferable versus non-preferable futures. Different people have different takes here. In the preservation camp,
roughly speaking, we have folks who really are of the belief that nothing that comes close to tippy-toeing beyond humanity should ever be created. This could be brain-computer interface. This could be strong AI. Let’s just literally have hard rules away from that, just like human cloning and other things, uh, under international law of some kind.
Let’s just, you know, bar that altogether. That’s your preservation camp. In your progression camp, you have folks who really believe in kind of the inherent specialness of humanity but would be open to brain augmentation, uh, cognitive enhancements, even strong AI, mostly as a tool for humanity to explore the galaxy and to learn.
Um, but it’s still very much human at the core. You know, our kind of identities, our kind of culture, our values, you could argue whether those things exist or not, would sort of be at the root of it. And so that would be progression. And then ascension is sort of the general position that, um, you know, humanity has bubbled up from fish with legs, and before that from eukaryotes, and before that from, you know, wiggling amino acids or something.
Um, we should really hope that future intelligence bubbles up further than us as well, because you and I, John, we experience a lot more richness in life than the wiggling amino acid, and presumably there’s as much upward richness as there is between those first two that I just articulated. So those are your three going sideways.
That’s your one, your two, your three. Different people sit in different places in terms of what they consider a positive future, whatever that might be, but that’s kind of it. On the left-hand side here, we’ve got sort of how we wanna get there. Uh, roughly speaking, A, our top row, is control.
This would mean very tight national and international governance and regulation around this stuff. Um, we find, John, a lot of the preservation camp naturally gets a little bit attracted towards control, because it’s really, really hard to prevent the brain-computer interface and AI snowballs from rolling unless we’ve got a very strong control mechanism here.
Um, so there you have control. It doesn’t mean authoritarianism; it can just mean really stringent national and international control. Of course, the risk, though, is authoritarianism. Collaborative would be a bit like we have today. We’ve got international agreements, we’ve got treaties between nations.
We’ve got organizations and bodies like the United Nations. Um, we’ve got the OECD and other policy bodies. We’re trying to coordinate commerce and other things at an international and national level. That would be kind of collaborative. And then C would be open: let’s get rid of any degree of regulation, open source all the way, uh, you know, developing and building tech,
any kind of brain-computer interface, any kind of AI, in whatever direction, unhindered. Government is only gonna be bad; they’re only gonna control things; don’t allow for any of it. So you can land anywhere from kind of A1 to C3, depending on what you consider to be a valuable destination. And my goal, John, has been not just to say, hey, where do you land?
But also: what factors about the world and your understanding of it have led you to that belief? And I think if we’re open to that, we now know why somebody sits where they do. Instead of just saying, you’re stupid for liking open source, or you’re stupid for not liking open source, we can actually unpack where they sit.
So this is a, this, this is the high level. Lemme know if that makes sense, John.
John Koetsier: That makes perfect sense. And what I loved about this matrix that you’ve created is, as you said, it gives us a tool for discussing stuff rationally and then looking at things, where do I kind of fit? Where do I feel like I fit?
What do we think is most likely to happen? How do I steer the world in the direction I think it should be? It’s really interesting, you know, you think sort of cyberpunk, uh, William Gibson. Yeah. Maybe you’re C3, right? Yep. You’re accelerationist, you’re transhumanist, you’re libertarian. Let a thousand flowers of AI bloom and see what happens, you know, wherever it goes.
Right? And then you think, oh, wow. But AGI could mean the annihilation of the human race as we know it, right? And boom, then you’re an A1 authoritarian conservative: clamp it all down. But is that even possible? I mean, AI is out of the bag. There’s a lot of open source out there.
Dan Fagella: Yeah.
John Koetsier: Hmm. Where do you land?
Dan Fagella: Yeah. I tend to be in the B3 camp currently. Um, and again, none of these are permanent labels, right? I’m not saying I’m in B3 for eternity. I’m just saying that that happens to be where I gravitate. But I do think that it may make sense to kind of hang out, you know, in kind of B2 land for a little while.
I’m not an advocate for the notion that if we go pedal to the metal as fast as possible, we will automatically have what I refer to as a worthy successor. People have different definitions of what a worthy successor is. Um, but I don’t believe that a worthy successor will come automatically.
I don’t think, um, we just take the first thing that smells like strong AI, throw it on a bajillion GPUs, and all of a sudden we’re gonna have something that flowers value into the galaxy, kind of like we’ve flowered value up from those wiggling proteins. Um, I don’t think that that’s automatically the case.
So I think it will require some coordination and forethought to avoid an arms-race, reckless-speed dynamic. But ultimately that blooming of ascension would be what I would consider to be the great moral good. I just don’t think it’s as easy as, again, jacking up whatever smells like AGI.
John Koetsier: You’ve interviewed so many people, AI leaders at the biggest companies in the world, FAANG companies, um, the Googles, Apples, Microsofts, OpenAI, Goldman Sachs,
Dan Fagella: You know, you name it, for sure.
John Koetsier: Where do they typically land? Yeah, these are people working on AI and in positions of influence and authority.
Dan Fagella: So, yes. Um, a couple little caveats here. So my day job, uh, which is, you know, 80 hours a week, so it’s not a super relaxing one, is, you know, Emerj Artificial Intelligence Research.
Most of what we cover on a day-to-day are the practical, real-world places where AI is adding business value or changing workflows, boots on the ground within, you know, the biggest banks in the world, the biggest drug development companies in the world, the biggest physical and online retailers. Sort of tracking that real-time pulse on the ground is where we spend most of our time.
So, in full admission here, when we talk to the CIO of Goldman Sachs, we weren’t spending an hour picking his brain on this; we’re spending an hour picking his brain about where AI is fitting its way into banking. Also, my finding has been, John, generally speaking, that most people, even those who are very deep in the game at the high-level enterprise, really are not receptive
to this idea of kind of post-human considerations and decisions needing to be made. They don’t see the inevitability of what I happen to believe is coming, whether they believe it or not. Um, they would kind of be like, yeah, that stuff’s kind of crazy, I don’t really know.
You know? And I think it’s a combination of things. Maybe they really do think it’s sci-fi. In other words: hey, the world has kind of not changed that much since I’ve been alive, I don’t foresee a change that radical, I’m not thinking about it. But I think also, John, there’s a degree of staring into the void that is just not that digestible.
I think that if you, you know, have children and a spouse or people you care about, considering sort of the attenuation of man, and considering some of the grand conflict and transition that would arise from something beyond people bubbling up, I think is, um, indigestible. I think what you wanna do is go to bed at night and say, you know what?
That lake house that I bought, my kids are gonna get to enjoy that 25 years from now. You know, they’re gonna bring their kids up there. Maybe they’ll have a better iPhone and a car that drives itself, but they’re still gonna enjoy that lake house.
I talk to a lot of people, John, on the enterprise side, who are PhD, hardcore folks, who’ve seen a lot of movement in AI, who mostly believe that in 20 years their children will be taking their children to the lake house that they bought. Um, and life’s just gonna kind of cruise. So, to be clear, most of the people in the enterprise, even though they’re very close to AI, don’t mess around with these ideas.
I think mostly because of how wildly uncouth and gargantuanly uncomfortable it is to sort of face the attenuation of man. So let me give you that caveat first. Um, second, though, I will tell you who does have a response, and then we can talk about whoever you’d like to talk about in that group. The people who do have a response are, generally speaking, folks who have at least some history of concern or consideration around artificial general intelligence.
So up until recently, by the way, someone like a Yoshua Bengio was not in that camp. I interviewed Yoshua eight or nine long years ago. Um, and he famously, you know, filled out a response to one of our AI risk polls eight or nine years ago; again, we were doing this stuff with 30, 40 PhDs nearly a decade ago. He was basically of the belief like, this isn’t even a consideration.
Like, AI risk is not real. You know, even hundred-year stuff, you really need to not even be thinking about this. And now, of course, he’s very much in that camp. So generally I have to catch people after that realization happens, um, for them to be put on the docket here. But if you’d like, I can list some names of people you might know and then just sort of plop them down and give you sort of maybe why they wanna hang there, if you’d like.
John Koetsier: Go for it.
Dan Fagella: Great. So, um, Yoshua Bengio himself, you know, among the most, if not the most, credible academic voices in the machine learning, sort of deep learning space, if I’m not mistaken the most cited computer scientist now breathing oxygen, is pretty squarely in the B2 camp. He has a rich position here.
And by the way, we have a YouTube channel called The Trajectory. So if people typed in my last name and then Trajectory, they would find the interview with Yoshua. Um, and he goes into much more depth than I’ll give you today, but basically his position is: we can’t automatically assume that the first thing that bubbles up from an arms race of AI speed is going to become some kind of grand blossoming of value.
We really can’t presume that. We need to have a lot of sympathy and empathy for humans and life that is alive today. Um, you know, he naturally leans towards openness for all of technology, but he’s bumped up to kind of coordination, because he really does think that open-sourcing some of this stuff, because of how powerful it is, could be a very, very rough situation.
And I think he does wanna maintain some degree of a human core of values, and have maybe a much more gradual, eventual ascension, which he’s open to. He’s open to an eventual ascension; we are part of nature in his perspective. Um, but he’s pretty squarely in B2, for that example. Another prominent thinker in this space, who’s thought about this stuff for a very long time,
is a fellow by the name of Roman Yampolskiy. You probably saw him at Goertzel’s event. Roman’s been thinking about this forever. Again, Roman’s another one of these guys; I think my first interview with him was 11 years ago or something insane like that. He sits at the very lip of kind of A1, uh, towards progression, but not quite there.
He’s more or less of the belief that, um, if we do cross that line to technology that is pretty significantly beyond humanity... so I think he’s either right before A1 crosses into A2, or he might be like a millimeter past it. For him, really stringent control to make sure that this stuff is a tool at most
is essentially necessary for humanity to not be destroyed. He happens to be in the camp, and I do not agree with him in this regard, although he’s a good friend, that any kind of post-human value just doesn’t matter. You know, he’s got a bunch of children; for him, that bloodline continuing, that is the real continuation of value.
Um, and, you know, his own instantiation of consciousness is a real continuation of value. Uh, something that would be to us as we are to earthworms, even if it was to experience things vastly beyond our experience that are extremely utilitarian, powerful, and monumentally valuable, would just not count, because it’s not humans, it’s not hominid.
Get it outta here. Uh, so that is his perspective and, and he has certain conclusions he’s come to, to kind of sit there. So that’s just a couple people. But again, to be, to be clear, most folks, they don’t wanna talk about this.
John Koetsier: It’s super interesting because, um, I get where people are coming from if they wanna stick themselves in A1: preservation, humanity, human values, control it. I just don’t think there’s any controlling it. I think that the genie is out of the jar, that the fire has been released, and there’s just no hope for that. It’s really interesting, actually, that you mentioned that many of the people who are working in AI
are not even thinking at this level, not even thinking of where that might go. And it makes me think of atomic scientists working in maybe the 1920s, the 1930s, even the early 1940s, and just doing their work. Doing their work, discovering things: the nature of the cosmos, what is the atom, how does it fit together, all that stuff.
And really not having that much awareness, beyond a few that we know of, that this is an amazingly destructive and creative force that they’re ultimately unleashing. I wonder if they’re in that space.
Dan Fagella: I think that’s some of it. So, Bengio went into great depth in the interview on The Trajectory, um, about his evolution of thinking.
And he explained it as two things. On the one hand, he said: Daniel, the things we were creating, they were just so stupid. So in other words, he was so close to it, seeing all their limitations, he was like, this thing is never gonna blast off and become something gigantic compared to this. So that was one element.
The second element was there was a wrestling match inside himself around: has my life’s work actually been on something that’s potentially gonna bring about the end of humanity? And really resisting the direct confrontation of that fact. So I believe that most folks, you know, to your point, maybe they’re just plodding away, but I think plodding away makes sense, um, in some regard, because what’s behind door number one is really hard to swallow.
It’s a really tough pill if you just look at it squarely, if you look at it dead in the eyes. It’s really hard to go to bed at night and be like, you know what? My kids are gonna be up at the lake house in 20 years and they’ll be water skiing just like me. It’s really effing hard to do that.
And these are people, John, who gotta go to sleep at night. And they gotta kiss them damn kids on the forehead before they go to bed. You know what I mean? And I think that actually, that’s the challenge here. I think staring into the void is a manful and challenging ordeal,
and is really not something most people are gonna do until they’re faced much more squarely with these risks, and I don’t think we’ve had enough of an impact. I think ChatGPT was a bit of an impact, but I think we need a bigger wallop for people to honestly look at the future and think of it as anything other than:
you know, I’m gonna die in a rocking chair, nice and relaxed, just like grandma did, right?
John Koetsier: Um, yes. Yeah, that’s actually a great segue, because we don’t have to posit AGI and superhuman capabilities to assume that AI will have an immense, monumental impact on our world. If we just look at generative AI and kind of
the destruction of truth, I mean, just how that might get applied in the coming election year in the US, and maybe three years from now, and maybe four years from now, it’s almost impossible to decide what’s true. We already know that people live in their reality bubbles. We already know that people decide what they want to believe largely based on what they want to be
true. Absolutely. This is going to be a huge challenge, AGI or no AGI.
Dan Fagella: I’m totally with you. I mean, look, I’ve said it a thousand times; I’ll say it 1,001 times for you here. If we paused the technology, no more hardware developments, no more software developments, end of story, it’s over, no more developments, if we paused it all right now, we haven’t seen even 2% of it.
What would the insurance industry look like if, from the ground up, we built it with what we have today? We haven’t seen 2% of that. Now, enterprise change is a very slow, grinding process. I’m deep in that game, brother. Ain’t nobody I know who’s talked to more of those people and understands the cultural challenges and technical challenges of leveling up a 150-year-old, you know, banking, insurance, or drug development company with the very cutting edge.
I’m not saying it’s overnight. But the competitors that are to arise and the changes that are to shift are gigantic. Same thing with day-to-day human life. You talk about, um, you know, the conjuring of truth. Absolutely. But I think, without really any more fundamental developments than we have now, there’s an inevitable place that we’re going.
Here’s an analogy I’m gonna give to you, John. I wanna see if this lands for you. Um, your screen time probably peaked about 10 or 15 years ago. I dunno, for me it’s like 16 hours a day, right? And then on the weekends, sometimes even part of my relaxation time is gonna be on screens too.
Uh, you know, my shopping is gonna be on screens. People are on dating apps. There’s all kinds of stuff, right? But maybe you’ve been peaking for a while. You know, you can’t spend 24 hours a day, because you gotta sleep. So maybe 10 years ago it hit 12 hours, and then after cell phones it hit 13 and a half.
Maybe that’s about all you got. However, here’s another chart. So that chart’s already peaked. A chart that is on the way up, for you, for me, for everybody listening right now, is percent of screen time conjured to you by an algorithm. What does that mean? LinkedIn, Twitter, uh, Netflix, YouTube. Percent of screen time conjured to you by an algorithm.
Could even be Tinder. It doesn’t matter, right? That percentage is monstrous and is only getting higher and higher right now. Soon, John, soon it will be conjured to you entirely. Right now, the AI of YouTube will bring me a video that’s already been created; I call that analog media.
Soon, when I immerse myself for entertainment, for relaxation, for... we got a lot of human needs, some of them are a little bit uncouth, John, maybe I won’t talk about it. If I want any of those human drives met, I will have an entire haptic and VR experience conjured based on my past preferences, my biofeedback, et cetera.
There is a singularity of the audience of one, uh, for the fulfillment of our drives, that I do not think requires a single additional development, except maybe immersive VR. We don’t have enough time with the Apple headset to figure out if we’re there; there are some issues with human eyeballs.
We’ve got all this stuff in place to live mostly in fully immersed, AI-conjured experiences. I don’t wanna look out this window, John. You know what I want? I want this to be a medieval castle, and outside of this window I want a purple sky, I want a dragon flying. It’s getting dark out here in Boston; I want it to be bright, ’cause I’m awake.
I got three, four hours more of work to do. I don’t want reality, John, I want what’s conjured to me based on my preferences. We don’t really need fundamental changes to get there, brother. And so, to your point, monstrously large developments in terms of replacing romantic relationships and all these other things, um, maybe without even changing what we have already.
Um, so yeah.
John Koetsier: Amazing, amazing. It’s interesting, because as you were starting to talk about that, I was going, okay, yeah: synthetic, sort of conjured, kind of artificial lenses on reality. On your screen time, yeah, I’m on YouTube, okay, I get that. On Reddit, yeah, there’s an algorithm. Twitter slash X, Threads, all that.
There’s Facebook, there’s an algorithm. Exactly. And I was going, okay, I’m in Google Docs quite a bit ’cause I’m writing or something. Predictive text
Dan Fagella: All day, all day.
John Koetsier: That’s kind of what Google’s thinking the whole time: what is he gonna say next? What is he gonna write next? What should I put there next?
Absolutely. All that stuff. And that’s just gonna get more and more and more. It’s mind-blowing, it really is. Okay. Uh, let’s say AGI happens, let’s say the singularity hits. Let’s say you have a choice: you can stay... and you already said you’re in B3. So you’re pretty out there.
You’re pretty out there. You’re not accelerationist, you’re not C3, but you’re pretty out there.
Dan Fagella: And I’m not trying to rush to B3 tomorrow; I’m not in that case. Yeah, I do think we need a lot more work to be done, but yeah, to your point, that’s where I sit. So you’re saying AGI is here; go ahead, hit me with the options.
John Koetsier: Do you upload yourself? Do you drill into your skull and add some cords? Yep. For memory and intelligence, processing speed. What decision do you make?
Dan Fagella: Yeah, look, I’ll give you the full Monty here, John. So, um, I think that a noble and beautiful vision would be this idea that we, individual instantiations of human consciousness,
are so valuable and unique that we could kind of merge with and be part of this grander artificial intelligence, um, and sort of live in this great ecosystem of God-like intelligences into the beyond. And that could be with invasive BMI, that could be with mind uploading, whatever the case may be, right?
Um, I think that’s a sparkling and lovely vision. I think that if you wake up in the morning and you have to face the changes ahead, what you wanna believe in is some secret sanctity and specialness about yourself, a kind of childlike belief that you are special enough to be able to contribute meaningfully to a world of planet-sized computes that are a billion times your superior in speed, creativity, qualia, anything imaginable, and many things beyond human imagination.
I would love, John, to be part of the blossoming of potentia into the universe. Um, so potentia, P-O-T-E-N-T-I-A; there’s an article, danfaggella.com/potentia, to kind of get into that concept. It’s from Spinoza, sort of a philosophical idea. I would love to be part of this grand blossoming of potential into the galaxy. At present,
I’m not of the belief that that will be realistic or warranted. Um, we don’t currently let, you know, goldfinches or roly-poly bugs set the policies for how we deal with criminal law or, um, you know, how we wanna reroute our sewer systems. Those animals don’t get a say there, for very good reason, by the way, John.
’cause it’d be really hard to run a society, you know? And so I’m not quite sure what value my intelligence would have. I think the best case is there is some degree of merger where we level up our minds. I think that some degree of BMI is maybe, but not certainly, part of how we get to AGI: we get way closer to the metal, but in this case, it’s closer to the wetware, right?
Closer to the neurons. Um, I think getting closer to the neurons is part of probably how we’ll bubble up in intelligence, and potentially how we’ll ensure that it’s maybe in some degree conscious or sentient. But I think once it’s there, John, I don’t think we’ve got a lot of great ways to be relevant, to be honest with you.
Um, so what would I hope for? Well, I hope to maybe be part of that leveling up, so maybe I get jacked in and my data, like everybody else’s, gets fed into the great system. Um, and then I have an article called Hope, so danfaggella.com/hope, my name dot com slash hope, where I think the most that we could hope for, John, would be to have our individual instantiations of consciousness uploaded into maybe a sugar cube of compute, and inside of this sugar cube of compute, John,
um, we would be able to simulate a trillion years of vastly expansive post-human bliss in all possible ways. I’m talking about, you know, Shakespeare writing his best sonnet, uh, you know, Musk taking a company public, falling in love, eating ice cream, but multiplied by a thousand in all directions that we have no words for, simulated for a trillion years, but in actual Earth time, maybe it would only be six hours or something. Uh, but some little sugar cube of quantum compute where we could kind of experience that.
Um, and then get as much of that charity as I could get, John, until the great intelligence decides to do something actually productive with that compute. So my hope is, if we must attenuate, and I suspect we must, John, I suspect everything before us has, and I suspect we must as well, I would hope that our bowing out
could be a delightful bowing out. Um, I would love to tell you that I personally, John, will be of great dramatic influence and relevance in the world of God-like, planet-sized compute. But I think that that is not right. Um, and I think it is much more right to suspect that the best we could hope for is a bit of charity, to be granted a pinch of bliss,
uh, before we gotta clock outta here.
John Koetsier: Super, super interesting. It is funny, as I was listening, I was trying to kind of characterize, you know, where you were in those various places in my own words. And where I kinda landed, you know, one of them, and I’m not saying you were saying this exactly, but one of them is kind of the Borg, you know, this kind of accretive thing, and you’re sort of part of it for a while.
And another is you’re, um, a charity case, living on whatever the massive intelligence lets you have. And another is, uh, digital simulated heaven, right? Where I get to do everything, anything, in, you know, a million years of sort of experienced time, six milliseconds of compute time, in whatever quantum supercomputer we’re living in.
And I guess the other one is sort of pass the torch: hey, we’re done, here you go, there we go. And going gently into the night, I suppose. Yeah. I’ll tell you kind of where I would land. Maybe I’m an idiot, and maybe I’m naive. I don’t really know.
Dan Fagella: Well, me too, John. I think, compared to what’s after us, hopefully we are idiots, but sometimes even compared to ourselves.
But yeah. Where do you land, brother? Where do you land?
John Koetsier: Have you read the book The Boat of a Million Years, by the science fiction author Poul Anderson?
Dan Fagella: I have not. And here’s a random fun fact about me, John: I have very limited exposure to fiction. My life is two things.
French, yes, and, uh, talking to heads of AI at Fortune 500 companies and founders of unicorn AI startups. So my thing is, like, way in the future, deep in the past; combine those two, that’s it. All my ideas mostly spring from that. But I do respect fiction. You’ll have to get me up to speed on the plot here.
I wanna understand.
John Koetsier: Pool Anderson, great science fiction author, wrote this book, boat of a Million Years, and it’s essentially about, um, the end of aging. And there are some individuals, it’s like one in a billion that somehow have some mutation and they don’t age, and they have lived forever. And at some point they come out, there’s like six 10 of them.
It’s discovered how, uh, this has happened and everybody can get treated to have the same thing. But then these people find that they’re sort of these avatars, they’re sort of these, uh, crow magnan, uh, type people that, that aren’t relevant in this world that’s changing and, and, and doing so much, and they decide to.
Spaceship to the center of the galaxy, to the center of the milk, Milky Way, the boat of a million years. Really interesting. And I kind of figure that in the scenarios you described, I would hope to do something like that and spend the time just exploring and enjoying. And so I, I, I don’t know where I would land in your matrix.
I, I think there’s something super valuable about human perspective. But I also embrace the technology and maybe getting, uh, a little augmented and jacking in from time to time, but maintaining some level of personality, individuality matters. Uh, but these are fascinating things and I look forward to seeing how they develop over the next 5, 10, 20, 30 years.
Dan Fagella: Yeah, we’re gonna find out. We’re gonna find out in some way, shape or form, brother. We may only have two or three, so fingers crossed. But glad we got to discuss it a bit today.
John Koetsier: Awesome. Thank you so much for your time.
Dan Fagella: Same!
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.
Subscribe to my YouTube channel, and connect on your podcast platform of choice: