AGI: kind of nonsense?

Is AGI just a really dumb idea?
Is the concept essentially meaningless?
And are we entirely barking up the wrong tree?

In this episode of TechFirst, host John Koetsier interviews Neil Lawrence, the DeepMind Professor of Machine Learning at the University of Cambridge and a senior fellow at the Alan Turing Institute, about his latest book, ‘The Atomic Human: What Makes Us Unique in the Age of AI.’

Lawrence explores the complexities of human intelligence, the misconceptions around artificial general intelligence (AGI), and the implications of large language models (LLMs) like ChatGPT. We also discuss the limitations and strengths of human decision-making, the potential risks of AI, and the importance of preserving human culture and diversity in intelligence. We dive into the role of AI in enhancing human capabilities, the challenges in deploying AI in sensitive areas, and the necessity for regulatory interventions to maintain a balanced technological ecosystem.

Watch the show here, and subscribe to my YouTube channel

You can also get TechFirst on your favorite podcasting platform:

 

Transcript: is AGI real … or kinda nonsense?

Note: this is an AI-generated and lightly AI-edited transcript. It may not be 100% accurate.

Atomic Human

Neil Lawrence: I think the notion of AGI is kind of nonsense, because it’s a sort of misunderstanding of the nature of intelligence. When you look at people’s definitions, and they rarely define it, they’re very simplistic.

John Koetsier: What makes humans unique in the age of artificial intelligence? Hello and welcome to TechFirst. My name is John Koetsier. I’m super excited about today’s guest. We’ve had some great ones on TechFirst: we’ve had billionaire technologists like Nathan Myhrvold. We’ve had the engineer who made the first-ever commercial cell phone call happen.

We’ve had robot company CEOs building humanoid bots and inventors building drone delivery companies, and we just had the chief scientist of Roblox. Today we have an OG in AI, or machine intelligence. He’s the DeepMind Professor of Machine Learning at the University of Cambridge. He’s a senior AI fellow at the Alan Turing Institute.

Heard of Alan Turing, anybody? He’s a visiting professor of machine learning at the University of Sheffield. He’s Amazon’s former director of machine learning, and has a list of published articles and books that is extremely long; I scrolled it for a couple of minutes. And he’s the author of a new book called The Atomic Human: What Makes Us Unique in the Age of AI.

His name is Neil Lawrence. Welcome Neil.

Neil Lawrence: Thank you John, for having me on.

John Koetsier: Let’s start with your book. If you had to summarize it, what’s the key idea behind the atomic human?

Neil Lawrence: So the notion in the title of the atomic human is taken from the Greek philosophical notion of the atom.

Democritus, around 400 BC, says: if I took something in the physical world and cut it in half, and I take one half and cut it again, and I take one half and cut it again, is there a point where I can’t cut? Or does this process go on ad infinitum? Now, they were philosophers, so they didn’t actually cut anything, but you can still ask this question, and they came up with the answer:

oh no, there should be a point where you have to stop. So with the age of machine intelligence, you can ask the same question. Every time the machine’s doing something, going right back to the steam age, like movement, which we considered something that humans have, and then every time it plays chess or can translate languages or hold a conversation, it’s like cutting into us.

Is there a point at which it can no longer cut? Because that would tell us something about the essence of what we are.

John Koetsier: Wow. I had to think of the Ship of Theseus when you were talking about that.

Neil Lawrence: It’s not in the book, but oh yeah. Yeah. I mean, in fact, you know, going through the book made me think very differently about Theseus’s ship, because, if people know it, the ship is this sort of ship where, I guess, every part is replaced.

I don’t actually know what Theseus did, but it presumably was a significant ship.

John Koetsier: Nobody actually did anything back then. They just thought; nobody actually did stuff, just thought experiments.

Neil Lawrence: But let’s presume that they were replacing the parts on this ship, because it was a significant ship.

And then the question is, well, is it the same ship? At the end of writing the book, when this question came up at my launch thing, I was able to give my immediate answer, which would’ve been very different from the answer I’d have given at the beginning: yes, it’s the same ship, if people believe it’s the same ship.

Oh wow. Because Theseus’s ship is not just a physical construct, it’s a mental construct. And if that ship is still at the head of the fleet, and Theseus is a great admiral like Nelson or whatever else, even if he’s dead, people will be inspired by that ship. It’s the idea of it. And to me it’s like, I mean, I’m not religious, but you’ve gone...

and that’s like kind of a spiritual nature. It’s a spiritual nature that’s coming about from other human beings’ belief, a shared belief in something. So my answer to that is: it remains the same ship. Even if... so, you can do this with your golf club. If you’ve replaced the handle of the golf club,

if you’ve replaced the head of the golf club, is it the same club? Well, if you still believe it brings you luck, it’s the same club.

John Koetsier: Exactly. Let’s go back in time. You’ve had a long career already in AI and machine learning. How did you start? Why did you start?

Neil Lawrence: I started as a mechanical engineer, I had a father who was an engineer.

I loved playing with Lego. I wanted to build and fix things. When I graduated in 1994 in the UK, I looked around at all the jobs I’d want to do, and I saw that no value was being given at all to the things I was passionate about, like how do you design cars or, you know, build plants.

The salaries for engineers in the UK in particular were very low, so I ended up saying, well, I’m gonna go do a different job. I ended up working on oil rigs doing oil exploration, what’s called wireline logging, for a company called Schlumberger, ’cause it was exciting. You know, it was an intense job, but I knew I didn’t wanna do it forever.

While I was on these oil rigs, you get a lot of downtime. I read about these things called neural networks, and this is about 1994, ’95. And so, you know, it came to a point where I realized I wanted to work on these things, because it looked like they could solve problems that, as a trained engineer, I knew I didn’t have access to solutions for.

So I left the oil rigs and went to do a PhD in machine learning, in those very early days when no one else was interested.

John Koetsier: Interesting. And the rest is history since then. Let’s go forward to the present time. AI’s had a massive resurgence, especially that term, artificial intelligence.

I know sometimes you prefer machine learning. Over the past, what shall we say, five years, seven years? Maybe more? Maybe four? Everybody’s been investing like crazy. We’ve had the emergence of LLMs. How do you characterize our present state in the world of AI and progress to AGI, artificial general intelligence?

Neil Lawrence: Well, I think the notion of AGI is kind of nonsense, because it’s a misunderstanding of the nature of intelligence. When you look at people’s definitions, they’re very simplistic: things like, oh, it’s an economically rational thing. The point of my book is to highlight that. Take a step back.

So think of Henry Ford. Apocryphally, he’s supposed to have said that if he’d asked people what they wanted out of a car, they would’ve said a faster horse. And we are at this sort of Henry Ford moment of: what do you want out of this AI? And what we’re hearing is, a more intelligent human. That’s interesting, because we can just ask: what does an artificial general horse look like?

I mean, horses aren’t just used for transport. They’re used for young kids with autism or disabilities as a way of building their confidence. They’re used as a way for people to forget about their daily lives. Horses provide all sorts of things.

Beyond transport, they’re much more complex than that. But even if we ask the question, what does an artificial general vehicle look like? Is it a bicycle? Is it a Ford F-150 truck? Is it a rocket? An SR-71? It’s an absurd question, because you have to know: what are you trying to do?

Are you trying to spy on the Soviet Union in a plane that is faster than any missile they have? Well, it’s not a bicycle. Are you trying to get to work in a way that improves your health? Then maybe it is a bicycle. And this is the challenge we have: very intelligent people who think in quite an unsophisticated way.

So ironically, we see it right there. These are, in some ways, some of the most intelligent people on the planet, and in other ways they’re some of the stupidest. But when everyone’s hearing them say things around this notion of AGI or the singularity, because humans have respect for other humans with apparent expertise, they’re falling into this trap.

The sort of Henry Ford ‘they’d have asked for a faster horse’ trap. And the idea of the book is to explain why that’s not the case, why our intelligence cannot ever be fully replaced by the machine. It can be augmented. We can work with machine intelligence. We can build a better society. Or machine intelligence, if used incorrectly, can undermine our society, can damage the things that we hold most dear to us. But which of those paths we are taking is a complex thing.

What we’re likely to see is that in some domains it’s helping us, and in other domains it’s undermining us. That’s what technology’s done in the past, and that’s what this technology’s likely to do in the future.

John Koetsier: It is pretty interesting to hear you say that. It makes perfect sense. I was just, I think three, four months ago, at an artificial general intelligence conference in Panama City, and we should have had you along. What a great keynote that would have made.

Hey everybody, this thing that you’re here for? Eh... I think what you’re saying is that it doesn’t actually exist.

Neil Lawrence: One of the things about it is, you know, let’s talk about some really interesting general principles behind intelligence. I think that’s why very intelligent people get hooked into this stuff.

Just like if we talk about vehicles, we can talk about some theory, or even some practical implementations of vehicles that are common, like the wheel, aerodynamics for aircraft, the wing, or engines. In different applications, we get different forms of motive power.

On a bicycle, it’s my legs or an electric motor; in a regular jet, it’s a turbofan; in a car, it’s an internal combustion engine, and it could be diesel or petrol or, increasingly, electric.

So even when you look at those general principles, there are variations in them, and then you start to combine them to give you something that does the job you need. Intelligence is the same. There wouldn’t be a field of machine learning research looking at these things if there weren’t general things that you can pull out.

I think the problem you find with a lot of AGI people is they’re generally people that haven’t done a lot of deployment in practice. They might have designed algorithms that are successful; those are the most practical ones. But they’re a little bit like those philosophers that don’t like to cut anything.

They’re imagining what would happen if intelligence were unidimensional. But it’s quite dangerous, because this notion of intelligence being unidimensional, and of us being able to sort of climb a ladder of intelligence: where does it come from? It comes from the eugenicists. You know, ‘general intelligence’:

the term is from Spearman, trying to support Galton in his theories that intelligence is like height, and that Britain, paranoid about the rise of Germany, needed to breed more intelligent people to economically compete with Germany. The United States followed suit and went big into eugenics. Right now, you can see this is slightly different, because we’ve got the same geopolitical considerations going on at the moment.

Fear of China: we need a better AI than China. But instead of looking down and saying, oh, the population isn’t intelligent enough, let’s change the way they’re breeding, people are looking up and saying, oh, the AI’s more intelligent than me, and we need to think about building on that in order to compete with China.

It’s the same echoes of the same type of argument. Now, you might think, well, does it matter? Well, it matters because I think it’s misleading people in the general population, and it’s misleading governments in terms of what sort of interventions we need to make the best use of this technology.

It’s undermining many of the people in society who we are reliant on to deliver the future society we want. Because instead of our teachers, medical professionals, even our lawyers, who are an intrinsic part of a modern democracy in delivering the society we want, instead of them being empowered, it’s the people who think of intelligence in this very simplistic way who are being empowered.

John Koetsier: Part of what I think you’re saying there is that the term AGI doesn’t make a ton of sense because intelligence is not one thing. It’s many different things. There’s numerical intelligence, there’s logical intelligence, which is quite connected, there’s physical intelligence.

There are many, many different things. And we see areas where machine intelligence, machine learning, does amazing, superhuman things, essentially. Yeah, and that’s really, really cool. The idea has been that if we combine enough of those superhuman components and also put some sort of superego in place, so that there’s some kind of general management of all this stuff, then Data from Star Trek will pop out.

Neil Lawrence: Yeah, it’s like the Chitty Chitty Bang Bang of artificial intelligence. The flying car does all the things that you want, and you just have to imagine we just need to put wings on that sucker and that’s gonna fly as well. This is the sort of thought experiment you can have if you don’t do stuff.

Why doesn’t that come about? In vehicles, it’s because there are tensions between things like carrying capacity, ’cause that increases the mass of your vehicle, which damages the aerodynamics and the speed. And what you have at the moment is Ferrari-like intelligence looking around

and saying everything needs to be more like a Ferrari. But to be frank, we kind of need more Sprinter vans. That type of intelligence, which isn’t celebrated and glorified in society, is the backbone of everything we’re doing. And unfortunately, the people who are often thinking about these things

don’t really recognize the qualities of that intelligence. So it’s remarkable, isn’t it, that when you look at what AI has delivered, it continues to deliver on things that highly intellectual people think are important, like playing chess or playing Go or translating things or being literate.

And how much progress has it made on, you know, I don’t know, reimplementing a Patrick Mahomes no-look pass? Well, not much, because actually the people who design it are less interested in sport. They’re less interested in soccer, they’re less interested in American football. They don’t even have a respect for that form of intelligence.

Mahomes always comes across as quite bright. These guys may not always come across as bright when they’re trying to articulate what’s going on inside them, but you put them on that pitch and they are doing things that are mind-blowing if you understand what’s going on in their bodies.

So there’s a genius to that that we find incredibly difficult to reconstruct. It’s clear that there are some tensions in this: some people are better improvisers, some people are better planners. Now, if you go into a task deciding to plan it or deciding to improvise it, which one works best?

It depends on how that task pans out. It depends on how much uncertainty there is in that task, right? Sometimes the planner will win. Sometimes the improviser will win, and this type of tension is sort of at the heart of intelligence.

That means you don’t get to have both because the improviser has gone into the problem with a certain perspective and a certain set of ideas about how they’re gonna approach it. The planner has gone in with another, and they didn’t know what the nature of the problem was gonna be in its details.

when they were setting up to do this. So how do you react? Well, guess what? Look around at humans. We have a spectrum of intelligence, a spectrum of capabilities. There is no one ring to rule them all. There’s a diversity of intelligences.

John Koetsier: It’s fascinating to hear you say that, because I’ve interviewed about five, maybe seven CEOs of robotics companies. More than that if you count general robotics, but I’m talking about humanoid robots, which are our current super-sexy tech dream right now: to have a humanoid robot that walks, maybe talks, works in the warehouse, maybe comes into my house and vacuums and all this stuff.

Of course, many people are skeptical and saying you need to design for purpose; the humanoid form isn’t suited for that specific job, it’s very general. But it’s interesting, because some are approaching it from the planner perspective, building all the pieces to arrive at this ultimate design, and some are saying, I don’t know what it’s gonna be.

I’m building to do one simple thing, and we’ll add a little thing, and we’ll add a little thing. Interestingly, the ones that have robots implemented in the workforce right now, not humanoid, are building that way. I don’t know what that says, but it says something about different ways to approach the same goal.

Typically the improvisational approach works when the uncertainty is higher, and you can imagine that in early product development, when we don’t understand markets, that ability to improvise and pivot is more useful. As a market matures, efficiency becomes key, because there are multiple market players and you’re trying to compete on price or something like that.

Neil Lawrence: So at scale you tend to see more planning coming in. Working at Amazon, I saw that corporate cultures have personalities. And it is interesting how, for example, the corporate culture of Amazon is constantly trying to keep that agility in place.

But the truth is, just the heft of the company makes that harder. People do try and work like that, you’re a company of a bunch of startups, but a startup doesn’t have to worry about the corporate reputation of the company when deploying some product that’s gonna look at personal data.

A startup doesn’t have to worry about whether you’re breaking the sort of salary structure by employing some person, whereas in Amazon you do. So you sort of find that there are things around human resources, things around legal, things around PR, that mean you don’t get to behave like a startup, no matter what anyone’s saying.

Yeah, they don’t tell you this, I just noticed it. It’s because the company’s too big for that. So the reason why these companies get disrupted is because there are small companies that aren’t constrained by those things, who can reimagine ways of doing work, just as Amazon did 25 years ago or more

and disrupted companies like Sears. And that cycle goes again and again because, over time, the intelligence of these companies, which is associated with their culture and their way of doing business, falls out of date.

Circumstances change around them.

John Koetsier: I wanna talk about LLMs a little bit, and how you view something like ChatGPT’s latest version, and what it can do. It’s interesting: if I reflect on my use of it, I Google stuff way less. I call it a curiosity machine, because I’ll be walking around... I was in Toronto a couple of months ago and noticed

falling concrete on the overpasses and exposed rebar. And so I sent ChatGPT a question about that, a picture of that, and said, why does this happen? It gave me an answer. It was pretty cool. Probably even correct, yeah. And now it gives sources, so that’s kind of nice as well. How do you view ChatGPT and LLMs in general?

Super interesting? Next wave? Dead end?

Neil Lawrence: I think they’re absolutely extraordinary, but not because of any of the things you’re reading on Twitter at the moment, which I kind of don’t agree with, because I see them in a very different way.

This isn’t some road to artificial general intelligence. This isn’t, oh, if we can just increase the planning, more parameters, more data, then we have an AGI. This is something that is a totally new technology that is utterly transformational. One of the key things I highlight in the book is that the difference between our intelligence and the machine’s is our limitations.

So you get to the atomic human by finding the thing the machine can’t have: our limitations and vulnerabilities. One of our main limitations is around our ability to communicate, because we communicate at the speed of sound, which is a million times slower than the speed of light, and yet we compute internally with things approximating the speed of light. It’s a really interesting architecture, because we’re a distributed intelligence, but one

where you’ve got extraordinarily high-powered compute nodes and little straws of bandwidth between them. We’ve never built anything like that, because as soon as you come up with electricity, you use that for communication as well. Well, this is the beauty of evolution:

there’s no hidden hand behind it. It’s just working with what it has in front of it. This architecture means there are particular things about our intelligence. If you look at it, my estimate of the difference in communication bandwidth is not a million, it’s 300 million.

So to give a sense, that’s the difference between walking speed and light speed. Right? Wow. So it’s like your intelligence is walking around while the computer is moving at the speed of light, you know? And the factor of an additional 300 is ’cause of, you know, clever coding things and how they’re communicating.
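
(A rough back-of-envelope check of the two figures quoted here, added for clarity rather than taken from the conversation: the speed-of-sound versus speed-of-light comparison gives roughly the factor of a million, and the extra factor of about 300 that Lawrence attributes to coding efficiency brings it to roughly 300 million, which is also about the ratio of walking pace to light speed.)

\[
\frac{c}{v_{\text{sound}}} \approx \frac{3\times10^{8}\ \text{m/s}}{343\ \text{m/s}} \approx 10^{6},
\qquad
10^{6} \times 300 \approx 3\times10^{8} \approx \frac{c}{v_{\text{walk}}} \approx \frac{3\times10^{8}\ \text{m/s}}{1\ \text{m/s}}.
\]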

John Koetsier: Mm-Hmm.

Neil Lawrence: So there’s an interesting follow-up question: why don’t we spend all our time telling each other stuff, when we’ve got such little bandwidth? Some people do do that. It’s called micromanagement.

What we tend to do more of is try and share with other people ideas about who we are and where we sit in a broader cultural landscape. If I know you, John, as a sort of person who I’m very familiar with, I don’t have to ask you stuff.

I can do it before you ask. I have an internal model of you. I know, he likes a sort of Starbucks cold brew latte with a thousand different syrups to destroy the...

John Koetsier: flavor of coffee. But I get your point.

Neil Lawrence: So this is the problem, right? I can get it to you without you asking. So we don’t need to use the communication bandwidth. What we have is this extraordinary intelligence, capable of second-guessing each other and collaborating without communicating, to a large extent. Now, of course, we do communicate. But when we communicate, we use a landscape of ideas that are shared, what we might refer to as our culture.

The book tries to communicate an enormous amount using analogies to get these ideas across. No one understands all the technical details of how an AI algorithm is working, but then no one understands all the technical details of how a car’s working.

But they have a sense of what it does, and you can use the car as an analogy. I can say: artificial general intelligence is like an artificial general vehicle. What a silly idea, you know. And whether you like the analogy or not, I’m communicating quite complex ideas around thinking by relating an unfamiliar concept to a familiar one.

Large amounts of our intelligence are based on that shared sense of familiarization, like the notion of Democritus’s atom. When we mention these things, we are able to communicate very sophisticated ideas that we each think independently about. Now, when we’re discussing or thinking or whatever we’re doing, reasoning, you know, that cultural cognitive landscape defines the spectrum of our ideas and how we communicate.

Up until very recently, artificial intelligence just didn’t have access to it. And the constant failed project was, oh, but we can reconstruct these ideas from first principles, so we can sort of have a first-principles definition of beauty or obscenity or intelligence, you know, and just build that from logic, like Russell tried to recreate the whole of mathematics from logic, and then have a machine that’s always right.

It doesn’t work, because these ideas are not true in the way that mathematics is true. They’re shared cultural artifacts that, by believing in them, we get to collaborate together, so they feel just as true to us as individuals. But in some cases, you know, they’re provably not true. It doesn’t mean they’re not useful.

It’s a bit like Box, you know: all models are wrong, but some are useful. Well, all cultures are wrong, but all are useful. So what’s the LLM given us? By forcing that machine to try and reconstruct the next word across such vast quantities of text, it’s had to develop, and I use the word loosely, some sense of our culture.

In the book, I refer to it as a human analog machine, because I sort of build on the way an analog computer, these machines from the 1950s, used to work, to sort of say: what it has inside the digital machine is a set of states that are analogous to this cognitive landscape. And they unleash the most extraordinary possibilities.

Forget whether this thing did the maths wrong, even with the latest GPT. Just that ability to have a communication channel going from my little walking pace, to be able to scale it up to the machine’s light speed, but communicating through our cognitive landscape: that’s transformational.

Forget everything else,

John Koetsier: everything

Neil Lawrence: and the society of the future would look unimaginable. We could just say: no more, you just have to use the existing large language models. No one’s gonna do that, but you could say that, and that’s already transformational.

It just takes time for humans to assimilate this technology. It’s not just the printing press; it’s like the simultaneous invention of writing and the printing press. What you’re doing is giving ordinary people access to the digital computer in a way that isn’t mediated by a big tech company.

That gives you an Excel spreadsheet that is mediated by your imagination.

John Koetsier: Mm-Hmm.

Neil Lawrence: Now, that’s mind-blowing. And it has extremely large potential and extremely large problems associated with it. Most of my work is sort of like, if you extend that idea and you see, you know...

Sure, people might put reasoning capabilities inside ChatGPT or inside Claude or whatever, and might improve its ability to do certain things, but actually I’m more interested in this entity as an interface. Instead of using a GUI and clicking on stuff, you have an interface where the machine is giving you an analogy of a human as a way of communicating with it.

That’s extraordinary.

John Koetsier: It is extraordinary, and the speedup is extraordinary. Because if you remember the old-fashioned way, when you needed to answer something, you Googled it and you maybe found five different web pages or sites, and you had to look and construct it. Maybe there’s a Wikipedia article, and you had to bring these things together to get a full, complete answer that you thought was good. And nowadays...

Neil Lawrence: Google seems to struggle to give anything of depth.

All it gives you is five clickbait media articles on the topic written by grad students trying to earn a buck on the side. That’s my impression of Google; user experience may vary.

John Koetsier: Yes, exactly. But now, it feels...

Neil Lawrence: to me like we are back in 1995 or ’96 with AltaVista, where the thing that you know should be on the internet you can never find, because it’s just been overwhelmed with dross. I sort of feel, in some sense, the internet we knew in 2006 has already disappeared.

John Koetsier: No question about it. You’ve said in the book, and you’ve just said as well in this conversation, that in some ways human flaws are our biggest strengths.

Why is that?

Neil Lawrence: If you start to strip down what people are talking about as intelligence, to terms that we can refer to as general, it’s decision making on the back of information. Right? And then, you know, you see that everywhere, whether that’s groups of social insects or whatever. And that’s marvelous in itself.

You watch ants. I was on vacation in southern Italy, and our house was constantly overwhelmed with ants. And however I dealt with them, they just sort of reappeared, you know? And then I realized at some point that this type of ant seemed to have multiple nests; it was different from regular ants that fight across nests.

My mother-in-law was looking at them, and she’s Italian, so that was her immediate reaction: aren’t they intelligent? It’s like, well, individually, no, look at how I can fool this ant. But as a collective intelligence? Yes. Now, the strength comes in because the key is about making decisions about ourselves.

This comes back to that Ship of Theseus point: that actually, what is important, what our objective is, is whatever we decide as a group of humans it should be. Because there is no sort of ultimate answer. You know, I have Deep Thought in there with the answer to the question of life, the universe and everything.

But you know, there isn’t a solution to life. All there is is muddling forward in the best way we know how, with individuals making decisions that hopefully improve society, those individuals hopefully brought up in the right way to care about others, to care about our futures, to bring about a slightly better world. And the hardest problems in the world,

things around our health, our education, our social security, the way we do civil administration, the way we govern ourselves, they don’t have solutions.

John Koetsier: Mm-Hmm.

Neil Lawrence: The notion that they have solutions is deeply problematic. It’s super important that humans can empathize with other humans, that they don’t lose touch with the struggles of a single mother, or what it means to lose a loved one,

or what it would mean to lose a child. Even when I say that, the hairs on the back of my neck go up. It’s unimaginable. I haven’t had it happen to me. But you want the entities making those decisions to have the hairs go up on the back of their necks, and to be subject to those same limitations themselves.

So that’s kind of how society works, that we hold people to account. You might be a judge. You might make a decision; well, you’re gonna be held to account by the judiciary, by the system of judiciary in your country, around that decision. And you’ve got a risk of loss of professional reputation if you are corrupt around that decision.

We hope, in the best-operating societies.

John Koetsier: Yeah.

Neil Lawrence: So, as a result of that, you need the weakness and limitation to be at the heart of the decision. It doesn’t mean it can’t be improved, or that it can’t work with the machine, but it needs to be coming from the human in a meaningful way.

What we can’t afford to have... an example I use in the book goes back to sort of 1200 BC, when they’re using the Code of Hammurabi. Point two in the code is, wow, if the legal case is a little bit difficult, throw them in the river and we’ll see what the river’s got to say about it. It’s an interesting principle.

You can see how the judge might like it, ’cause there’s a, well, this way I avoid making an error myself, I’m handing it off to the river gods, you know? So we are fine. And you can see that tendency in AI. AI is the river...

John Koetsier: Gods

Neil Lawrence: AI is the river god. Trial by ordeal is trial by AI.

Cities got large enough that we could no longer work under the sort of moral obligations to each other within small groups to handle individuals’ behavior. We had to have a process behind their behavior. And you can see them instinctively go, oh, when the process gets difficult, let’s hand it off.

One of the things we understood with jury trials, with judiciaries, is that you want people who understand the circumstances of the individual under trial to have a sense of responsibility. And we have mechanisms in society for how we do that. And the danger with AI is:

these highly sophisticated mechanisms, which of course don’t work well all the time, most of the time you don’t notice that they’re working beautifully. And then some idiot comes along with some machine learning algorithm and in one swoop takes us back to 1200 BC.

John Koetsier: It’s funny how history rhymes.

We see that in multiple places in society: if we only get a smart system there, if we only get, you know... When we get an AI president... I’m Canadian, but a lot of AI researchers are in the States and want an AI president or an AI CEO.

Right? Reducing our responsibility. I also love, I have to mention, that you brought up ants. I was recently in Honduras and I saw leafcutter ants, and it was so cool. I just stayed there for like 15 minutes and watched them. They’re going along with their little leaves, and they’re taking them down to their nests, and they’re gonna let them rot there, and they’re gonna eat the fungus that comes from them.

Yeah. It’s a sophisticated, amazing process that, you know, none of those ants individually knows how it works or why it works or what to do, but somehow... We are the ants, and somehow we create these cultures and countries that sort of work.

Neil Lawrence: And they’ve come down to us, they’re bequeathed to us, over 300,000 years as Homo sapiens, 2 million years before that as Homo erectus, 9 million years as primates, 350 million years as animals that emerged from the ocean and started plodding about on the land.

All of those things are within us. I don’t have to be religious to marvel at the computation that took place and the information that passed slowly down the generations. If we are going at walking speed compared to evolution, we go at faster than light speed as we converse. But the complexity of that ecology around us that created all those things is well ahead of our culture.

And us looking at those things, trying to watch that generation of ants evolve into a generation that doesn’t fight across nests, or whatever they’re gonna do next: that’s equivalent to the machine watching us talk to each other. And when we destroy our ecology, we destroy it because of our ignorance of the scale and complexity of those processes that are operating on timeframes that are unimaginably slow.

John Koetsier: Mm-Hmm.

Neil Lawrence: And when the machine undermines our culture by handing off too much decision making to it, it undermines us because it has no understanding of the complexity of our culture. So you get this situation where the speed is on the machine’s side, but the complexity is on our side.

So when we think about threats from these technologies, the right way to think about them is not some sort of superintelligence, existential techno-risk. It’s to think about them in the same way that we undermine the ecology around us by making decisions that span across 200 years. Right?

Digging out the coal, starting to burn it up: it goes all the way back to when those first animals came outta the oceans, and we’re, you know, sort of reintroducing carbon that was buried so long ago into the atmosphere. When Watt built the steam engine, he was worried about the pressure of the steam and whether the boiler would blow up and kill people.

He had a sense of safety, but he could have no conception of what it might mean to be introducing that carbon. Carbon dioxide had only just recently been discovered by his friend Joseph Priestley. He couldn’t have any sense of the processes he was interacting with.

So it’s not to blame him, not to blame any of us individually, but it’s the cultural practices that come down to us over time, that operate across those timescales, that have these learnings in them. We don’t understand the origin of these behaviors, but they somehow are embedded with learnings across time.

And I think, you know, that’s why one of the principal challenges we face when we think about our political systems is this tension between, and it’s always complex ’cause the United States uses these terms in slightly different ways, traditional small-c conservatism and traditional small-l liberalism.

Those traditions: small-c conservatism is, no, let’s keep doing it the way that we’ve been doing it in the past; there are all sorts of good reasons we don’t understand why having a king ruling the United States is a good idea, you know?

And small-l liberalism is, but we know new stuff, let’s change it up, let’s get a president in. Which is the right answer? Well, it depends. It depends on the circumstances, and you don’t know for a few hundred years. We’re only a few hundred years into these experiments.

Not that I’m advocating we return to monarchy, but fundamentally, we don’t know. On ecological timescales, we don’t know how this is gonna pan out.

John Koetsier: Which is a good segue to talk about safety and AI, and how humans can control what, in some vectors at least, will be either smarter than us or superhuman in capability. Because we’ve already seen much dumber systems, not even remotely intelligent systems, wreak havoc on us.

There’ve been systems for buying stocks that have caused stock market crashes, right? Yeah. Flash crashes. Systems for national...

Neil Lawrence: An excellent example.

John Koetsier: Different systems for national defense that have made the USSR think they were seconds away from being obliterated by nuclear missiles from the United States.

Right? How can we help try and protect ourselves from these things?

Neil Lawrence: Yeah, it’s difficult, but I think the current conversation is going in utterly the wrong direction, because this sort of idea of, oh, we need to have a group of thinkers who think about this centrally and worry about what might go wrong and intervene,

never works. You need people... you might need some central stuff going on, but it can’t be inward-looking and thinking we are going to control the future of safety. No, you need to look outwards, to the people who are gonna be needing to use these systems in sensitive roles.

You know, whether that’s medical staff, teachers, security and defense experts, you need to be working closely with them to understand what their role is at the moment, what the nuances of that role are, which takes time and understanding and listening. It doesn’t involve you sitting centrally in an office thinking, I just need a neural network to detect criminals, and then we’ll be fine, and then we’ll deploy.

Because what then happens is you go to the police stations around the country, and they have no idea about what AI is or what its limitations are, and they’ll deploy it and naively follow what it says. You need to get a deep understanding of what the roles of those key people in society are, and understand how this technology can best support them in those roles.

And unfortunately, that’s sort of the opposite of what we have with digital technologies, because the economies of scale mean that it’s better to build a one-size-fits-all solution that you can distribute uniformly across the world. Then wait for CrowdStrike to do a quick update on their software and watch a third of the world collapse.

This is clearly... you do not see this in evolution. You do not see, like... well, you do, actually. The elms in Britain, for example, spread by rhizomes, so they duplicate themselves almost identically, genetically, and they go through these 200, 250-year cycles of growing rapidly and spreading to create lots of elms, and then all being killed by a variant of Dutch elm disease.

 

John Koetsier: Almost all being killed.

Neil Lawrence: Almost all being killed. That’s the trick: you need a couple to survive. Almost all being killed, a cycle that was just occurring in the last 50 years in the UK, in terms of the degrading side. If you wanna go for the shared-genetics approach,

you are also going for the shared vulnerabilities. It allows you to scale and build quickly, but it also leaves you vulnerable to the unforeseen intervention, whatever that is, that brings things down. We also saw the same with our supply chains. We talk about flash crashes in finance; well, supply chains... I used to work in supply chain with Amazon.

They’re becoming heavily automated. And you’ve got the situation with the Ever Given, the Evergreen ship, you know, being stuck across the Suez Canal, bringing mass disruption. Well, it’s because the efficiency of supply chains is so great that they don’t leave much room for robustness.

Exactly. And there’s this tension between efficiency and robustness; that is another one of these fundamental things you can’t get away from. So these things are sort of disturbing, because we actually try and regulate against them.

We have antitrust legislation or whatever else to try and prevent companies exploiting economies of scale and actually adding great fragility to our society. But what we see in the case of digital technologies is that that’s not really working. We do see regulatory interventions, which aren’t the exciting ones people are talking about.

No, the most important bill that recently passed in the UK was the Digital Markets, Competition and Consumers Act, which was about what it looks like to dominate a digital market and how you regulate that, because there are sorts of economies of scale that are unfamiliar from the past that are allowing companies to grow and dominate market sectors in ways that aren’t really healthy for our economy or our robustness in the future.

So a lot of the serious interventions are of this form. Even if you look at the existential risk idea: the existential risk idea conflates two things, power asymmetries and automated decision making. So it’s like, well, what if you have a really powerful entity that dominates society?

And what if it’s making decisions automatically without human intervention? Well, that’s pretty bad, but it’s already bad to have massive power asymmetries in society. I don’t think you even get to that technical existential risk, because the fragilities you’re introducing with these enormous power asymmetries, like with CrowdStrike, at some point they start to overwhelm you.

So the interventions are there, but the most interesting interventions are not the ones it’s easy to write headlines about, because they’re more along the lines of: okay, we need markets where there are more players. You know, the situation we want would be a teacher who is passionate about teaching,

passionate about his or her students, passionate about building the next generation that’s gonna lead us forward, but who is also interested in AI and has a really cool idea about how they can make their students’ lives better and their own lives a little better. And they want to take that idea. And you know what, maybe that idea only applies in the province of Quebec, because it’s about French Canadian kids.

We want that to be deployable. We want that teacher to be able to succeed with that. We don’t wanna wait until Google suddenly decides that the province of Quebec is an interesting place to do business. And that’s Quebec. You know, you go into the Tigray area of Ethiopia, or you go into Uganda, or any of the countries we would love to see join us with better futures for their kids:

these companies just can’t service that. So we need an ecosystem of entities working with these technologies that are much closer to these sensitive areas of deployment that we really wanna see improve, like health and education. We’re getting there, because the regulations are coming in.

But broader awareness of these challenges and where they’re coming from isn’t really there. We need to carry people and citizens along this journey, and their understanding of where things can help. And I think the question is really mainly about the speed at which we can start bringing about that different kind of society.

John Koetsier: Super interesting stuff. I recently did a TechFirst on an entrepreneur who’s bringing an AI innovation center into Bhutan, because nothing was being done there, period. And the royal family, there still is a royal family there that still is in charge, wanted their people to be able to have some of the benefits of advanced technology, AI technology, in their own language.

Super interesting. It makes me think, in terms of the risk factors you’re talking about: is the answer antitrust, and breaking apart massive companies, which frankly control AI to a large extent? Right now we’re talking the Metas of the world, the Googles of the world, Apple to some small extent, OpenAI, which is not some massive conglomerate but has incredible technology,

Amazon. Does breaking those apart make us safer? Because we’ll have more competing models, more competing varieties, and more diversity in the market.

Neil Lawrence: Whether it’s breaking them up or regulating in a certain way, it’s about access to that capability in a fair way; I think that is the key.

And it’s hard to know what the right regulatory interventions are. And you also don’t wanna punish people for being innovative.

John Koetsier: mm-Hmm.

Neil Lawrence: we know we don’t want those single points of failure. It’s not in those company’s interests either.

In some sense, it’s not like they have to be scheming to be doing this. It’s just that their incentive structures push them in this direction. The seriously problematic position we’re in at the moment is that those companies,

if you look at their share values, they’re predicated on future earnings which imply that they’re going to own AI. You know, they’re telling you that there’s gonna be this world where all decisions are gonna be made by AI, and they’re gonna be the ones that own that. And that’s what their share prices and investors are saying.

And inside those companies there’s lots of great people, but the pressure then becomes, the incentive becomes, whether those individuals believe in AGI or not, that AGI has become a key part of the narrative of their company. The culture of their company and their ambitions are

to sort of capture that market. Some of their CEOs have even explicitly said that.

John Koetsier: It’s horrific. Marc Benioff of Salesforce literally said that last year at Dreamforce.

Neil Lawrence: Yeah. I mean, it’s utterly horrific. It really is.

I understand, you know, each of us individually can’t see the whole picture. I understand what his role is in his company and what he’s trying to do, and he’s got investors. But, you know, that’s the point. There’s a great book by Popper called The Open Society and Its Enemies that highlights the way we bring about change:

democracies assimilate changes through piecemeal social engineers, people who understand the complexity and difficulty of the systems in front of them, working with good intent to bring about, you know, better systems. And the problem we have with AI, and it’s not just AI, but with digital technologies in general, is that we are replacing an information topography.

What we’re doing is we’re taking the previous information topography, the printing press, paper, books, filing systems, and the people who understood how to wield that technology, who were, initially, the scribes from ancient Mesopotamia, the people who could read and write, who gained power, and alongside that power, responsibilities.

Because they became lawyers and accountants and civil administrators. They didn’t just have power; society gave them responsibilities too: professional institutions, ways of holding their power in check, whether it’s being done well or not. What you’re seeing now is a transition away from the written word into the digital. And the modern scribes are the software engineers.

John Koetsier: Mm-Hmm.

Neil Lawrence: And their guilds are the big tech companies. But these guilds and engineers have not yet gained the social responsibilities that we associated with these other guilds and professions.

So you’re in a position where they’re busily exploiting that power to make enormous quantities of money, and we are having to fast-forward: how do we deal with that? But this is where LLMs potentially really help. Because, you know, and this is my provocation, I’ll just upset all the software engineers listening, I probably already have, but let’s go a stage further, John.

But my provocation would be: okay, you want this technology to replace people in their jobs. How about we start by replacing software engineers? How about we start by eliminating the software engineer? Is it close? It isn’t that close, ’cause there’s so much a software engineer does. It isn’t just coding, and there’s so much knowledge you need.

True. But it’s clear that, in some sense... All the software engineers want me to think, well, it’s not just about coding, I do this, and they’ll come back with all these things. Well, isn’t that true for accountants, for doctors, for lawyers, for teachers? You know, I had someone in a talk the other day say, well, surely we can do all the early teaching with LLMs.

And I came back with this story of. Ian Wright, the former Arsenal footballer, who when he was a young kid, from a single parent family, didn’t have a father figure living in the same room with his family. Bit of a tearaway, wasn’t very academically gifted, as he says, you know, potentially on the wrong path.

There was one physical education teacher who believed in him as a person. You can see Ian Wright talking about this in a YouTube video, and he’s saying the teacher was a Spitfire pilot, he’d fought in the Second World War, and you can see what it meant to this little boy, and how that turned his life around.

And it’s the most emotional video ever because the teacher appears behind him and he hasn’t seen the teacher in 30 years.

John Koetsier: I’ve seen that video.

Neil Lawrence: That’s the role of a teacher.

John Koetsier: Mm-Hmm,

Neil Lawrence: It’s not that they understand arithmetic; that’s not the role of a teacher. Not this.

John Koetsier: Yeah.

Neil Lawrence: Believing in a little boy that they saw something in and turning that little boy’s life around and having that little boy exist in society and inspire other little boys and girls.

John Koetsier: Mm-Hmm.

Neil Lawrence: That’s the role of a teacher. That’s something that can’t be replaced by the AI, and those are the people that we need to be supporting.

But we are undermining them if we’re centralizing the decision making. Once again, I don’t wanna say these people are evil, but the systemics are wrong.

John Koetsier: We are just doing what we’re doing. We are in the box we’re in, yeah. We’re optimizing for the conditions of the box we’re in. And it’s a great box.

Neil Lawrence: It’s a key part of society as well. It brings many good things to us. It’s just gone a bit rogue in this case, and needs to be pulled back into a place where it’s doing its job better.

John Koetsier: Everything has rhythms and rhymes, and I have some faith that we will find that there as well.

I have to call this to a close. I have to assume you’ve got things to do and gotta run. I thank you for taking this time.

Neil Lawrence: It was great. Yeah, it was a lot of fun. It’s been great to talk to you, John. And I’m optimistic like you, because I believe in humans.

I believe in that teacher and I believe that people like that can work together and we can deliver the future we all want. I’m looking forward to seeing it happen.

John Koetsier: Have a great rest of your day. Thank you so much.

Neil Lawrence: And you, John.

Subscribe to my Substack