
Can We Avoid a Franken-Future with AI?

by Lynn Parramore | Oct 31, 2024

Picture this: Dr. Victor Frankenstein strides into a sleek Silicon Valley office to meet with tech moguls, dreaming of a future where he holds the reins of creation itself. He’s got a killer app to “solve death” that’s bound to be a game changer.

With his arrogant obsession with mastering nature, Mary Shelley’s fictional scientist would fit right into today’s tech boardrooms, convinced he’s on a noble mission while blinded by overconfidence and a thirst for power. We all know how this plays out: his grand idea to create a new species backfires spectacularly, producing a creature that becomes a dark reflection of Victor’s hubris, consumed by vengeance and ultimately turning murderously against both its creator and humanity.

It’s a killer app all right.

In the early nineteenth century, Shelley plunged into the heated debates on scientific progress, particularly the quest to create artificial humans through galvanism, all set against the tumultuous backdrop of the French and Industrial Revolutions. In Frankenstein, she captures the dark twist of the technological dream, showing how Victor’s ambition to play god only leads to something monstrous. The novel is a warning about the darker side of scientific progress, emphasizing the need for accountability and societal concern, themes that hit home in today’s AI debates, where developers, much like Victor, rush to roll out systems without considering the fallout.

In his latest work, Mindless: The Human Condition in the Age of Artificial Intelligence, distinguished economic historian Robert Skidelsky traverses history, intertwining literature and philosophy to reveal the high stakes of AI’s rapid emergence. Each question he poses seems to spawn another conundrum: How do we rein in harmful technology while still promoting the good? How do we even distinguish between the two? And who’s in charge of this control? Is it Big Tech, which clearly isn’t prioritizing the public interest? Or the state, increasingly captured by wealthy interests?

As we stumble through these challenges, our increasing dependence on global networked systems for food, energy, and security is amplifying risks and escalating surveillance by authorities. Have we become so “network-dependent” that we can’t distinguish between lifesaving tools and those that could spell our doom?

Skidelsky warns that as our disillusionment with our technological future grows, more of us find ourselves looking to unhinged or unscrupulous saviors. We focus on optimizing machines instead of bettering our social conditions. Our increasing interactions with AI and robots condition us to think like algorithms—less insightful and more artificial—possibly making us stupider in the process. We ignore the risks to democracy, where resentful groups and dashed hopes could easily lead to a populist dictatorship.

In the following conversation, Skidelsky tackles the dire risks of spiritual and physical extinction, probing what it means for humanity to wield Promethean powers while ignoring our own humanity—grasping the fire but lacking foresight. He stresses the urgent need for deep philosophical reflection on the human-machine relationship and its significant impact on our lives in a tech-driven world.

Lynn Parramore: What is the biggest threat of AI and emerging technology in your view? Is it making us redundant?

Robert Skidelsky: Yes, making humans redundant — and extinct. I think, of course, redundancy can lead to spiritual extinction, too. We stop being human. We become zombie-like and prisoners of a logic that is essentially alien. But physical extinction is also a threat. It’s a threat that has a technological base to it, that’s to say, obviously, the nuclear threat.

The historian Misha Glenny has talked about the “four horsemen of the modern apocalypse.” One is nuclear, another is global warming, then pandemics, and finally, our dependence on networks that may stop working at some time. If they stop working, then the human race stops functioning, and a lot of it simply starves and disappears. These particular threats worry me enormously, and I think they’re real.

LP: How does AI interact with those horsemen? Could the emergence of AI, for example, potentially amplify the threat of nuclear disasters or other kinds of human-made disasters?

RS: It can create a hubristic mindset that we can tackle all challenges rooted in science and technology just by applying improved science and tech, or by regulating to limit the downside while enhancing the upside. Now, I’m not against doing that, but I think it will require a level of statesmanship and cooperation which is simply not there at the moment. So I’m more worried about the downside.

The other aspect of the downside, which is foreshadowed in science fiction, is the idea of rogue technology. That’s to say, technology that is actually going to take over the control of our future, and we’re not going to be able to control it any longer. The AI tipping point is reached. That is a big theme in some philosophic discussions. There are institutes at various universities that are all thinking about the post-human future. So all that is slightly alarming.

LP: Throughout our lives, we’ve faced fears of catastrophes involving nuclear war, massive use of biological weapons, and widespread job displacement by robots, yet so far we seem to have held off these scenarios. What makes the potential threat of AI different?

RS: We haven’t had AI until very recently. We’ve had technology, science, of course, and we’ve always been inventing things. But we’re starting to experience the power of a superior type of technology, which we call artificial intelligence, a development of the last 30 years or so. Automation starts in the workplace, but then it gradually spreads, and now you have a kind of digital dictatorship developing. So the power of technology has increased enormously, and it’s growing all the time.

Although we’ve held things off, they were things we were much more in control of. I think that is the key point. The other point is that with the new technology, it only needs one thing to go wrong, and it has enormous effects.

If you’ve seen “Oppenheimer,” you might recall that even back then, top nuclear scientists were deeply concerned about technology’s destructive potential, and that was before thermonuclear devices like the hydrogen bomb. I’m worried about the escalating risks: we have conventional wars on one side and doom scenarios on the other, leading to a perilous game of chicken, unlike the Cold War, where nuclear conflict was taboo. Today, the lines between conventional and nuclear warfare are increasingly blurred. This makes the dangers of escalation even more pronounced.

There’s a wonderful book called The Maniac [by Benjamín Labatut] about John von Neumann and the development of thermonuclear weapons out of his own work on computing. There’s a link between the aims of controlling human life and the development of ways of destroying it.

LP: In your book, you often reference Mary Shelley’s Frankenstein. What if Victor Frankenstein had sought input from others or consulted institutions before his experiment? Would ethical discussions have changed the outcome, or would it have been better if he’d never created the creature at all?

RS: Ever since the scientific revolution, we’ve had a completely hubristic attitude to science. We’ve never accepted any limitations. We have accepted some limitations on application, but we’ve never accepted limitations on the free development of science and the free invention of anything. We want the benefits that it promises, but then we rely on some systems to control it.

You asked about ethics. The ethics we have are rather thin, I would say, in relation to the threat that AI poses. What do we all agree on? How do we start our ethical discussion? We start by saying, well, we want to equip machines or AI with ethical rules, one of which is don’t harm humans. But what about don’t harm machines? It doesn’t exclude the war between machines themselves. And then, what is harm?

LP: Right, how do we agree on what’s good for us?

RS: Yes. I think the discussion has to start from a different place, which is what is it to be human? That is a very difficult question, but an obvious question. And then, what do we need to protect our humanness? Every restriction on the development of AI has to be rooted in that.

We’ve got to protect our humanness—this applies to our work, the level of surveillance we accept, and our freedom, which is essential to our humanity. We’ve got to protect our species. We need to apply the question of what it means to be human to each of these areas where machines threaten our humanity.

LP: Currently, AI appears to be in the hands of oligopolies, raising questions about how nations can effectively regulate it. If one country imposes strict regulations, won’t others simply forge ahead without them, creating competitive imbalances or new threats? What’s your take on that dilemma?

RS: Well, this is a huge question. It’s a geopolitical question.

Once we start dividing the world into friendly and malign powers in a race for survival, you can’t stop it. One lesson from the Cold War is that both sides agreed to engage in the regulation of nuclear weapons via treaties, but that was only reached after an incredible crisis—the Cuban Missile Crisis—when they drew back just in time. After that, the Cold War was conducted according to rules, with a hotline between the Kremlin and the White House, allowing them to communicate whenever things got dangerous.

That hotline is no longer there. I don’t believe that there’s a hotline between Washington, Beijing, and Moscow at the moment. It’s very important to realize that once the Soviet Union had collapsed, the Americans really thought that history had ended.

LP: Francis Fukuyama’s famous pronouncement.

RS: Yes, Fukuyama. You could just go on to a kind of scientific utopia. The main threats were gone because there would always be rules that everyone agreed on. The rules actually would be largely laid down by the United States, the hegemon, but everyone would accept them as being for the good of all. Now, we don’t believe that any longer. I don’t know when we stopped believing it, perhaps from the time when Russia and China started flexing their muscles and saying, no, you’ve got to have a multipolar order. You can’t have this kind of Western-dominated system in which everyone accepts the rules, the rules of the WTO, the rules of the IMF, and so on.

So we’re very far from being in a position to think of how we can stop the competition in the growth of AI because once it becomes part of a war or a military competition, it can escalate to any limit possible. That makes me rather gloomy about the future.

LP: Do you see any route to democratizing the spread and development of AI?

RS: Well, you’ve raised the issue, which is, I think, one posed by Shoshana Zuboff, author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, about private control of AI in the hands of oligopolies. There are three or four platforms that really determine what happens in the AI world, partly because no one else is in a position to compete. They put lots and lots of money into it, a huge amount of money. The interesting question is, who really calls the shots? Is it the oligopolies or the state?

LP: Ordinary people don’t seem to feel like they’re calling the shots. They’re fearful about how AI will impact their daily lives and jobs, along with concerns about potential misuse by tech companies and its influence on the political landscape. You can feel this in the current U.S. election cycle.

RS: Let me go back to the Bible because, in a way, you could say it prophesied an apocalypse, which would be the prelude to a Second Coming. “Apocalypse” means “revelation” [from the Greek “apokalypsis,” meaning “revealing” or “unveiling”]. We use the word, but we can’t get our minds around the idea. To us, an apocalypse means the end of everything. The world system collapses, and then either the human race is extinguished or people are left and they have to build it again from a much lower level.

But I’ve been quite interested in Albert Hirschman and his idea of the small apocalypse, which can promote the learning process. We learn from disasters. We don’t learn from just thinking about the possibility of disaster, because we rarely believe it will actually happen. But when disaster does strike, we learn from it. That’s one of our human traits. The learning may not last forever, but it’s like a kick in the backside. The two world wars led to the creation of the European Union and the downfall of fascism. A relatively peaceful, open world started to develop out of the ruins of those wars. I would hate to say that we need another war in order to learn, because now the damage is too colossal. In the past, you were still in a position to fight conventional wars: they were extremely destructive, but they didn’t threaten the survival of humanity. Now we have atomic weapons. The escalatory ladder is a much higher one now than it was before.

Also, we can’t arrange apocalypses. It would be immoral, and it would also be impossible. We can’t — to use moral language — wish evil on the world in order that good may come of it. The fact that this has often been the historical mechanism doesn’t mean we can then use it to suit our own ideas of progress.

LP: Do you believe that technology itself is neutral, that it’s just a tool that can be used for good or bad, depending on human intentions?

RS: I don’t believe technology has ever been neutral. Behind its development has always been some purpose—often military. The role of military procurement in advancing technology and AI has been enormous. To put it starkly, I wonder if we would have seen beneficial developments in medicine without military funding, or if you and I could even have this virtual conversation without military demands. In that sense, technology has never been neutral in its aspirations.

There’s always been a hubristic element. Many scientists and mathematicians believe they can devise a way to control humanity and prevent past catastrophes, embracing a form of technological determinism: that advanced science and its applications can eliminate humanity’s errors. You abolish original sin.

LP: Sounds like something Victor Frankenstein might have agreed with before his experiment went awry.

RS: Yes. It was also there with von Neumann and those mathematicians of the early 20th century. They really believed that if you could set society on a mathematical foundation, then you were on the road to perfection. That was the way the Enlightenment dream worked its way through the development of science and into AI. It’s a dangerous dream to have because I think we are imperfect. Humanness consists of imperfection; if you aim to eliminate it, you will destroy humanity, and if you succeed, humans will become zombies.

LP: A perfect being is inhuman.

RS: Yes, a perfect being is inhuman.

LP: What are your thoughts on how fascist political elements might converge with the rise of AI?

RS: The way I’ve seen it discussed mostly is in terms of the oxygen it gives to social media and the effects of social media on politics. You give an outlet to the worst instincts of humans. All kinds of hate, intolerance, and insult fester in the body politic and eventually produce politicians who can exploit them. That is something that’s often said, and there’s a lot of truth in it.

The promise, of course, was completely different – that of democratizing public discussion. You were taking it out of the hands of the elites and making it truly democratic. Democracy was then going to be a self-sustaining route to improvement. But what we see is something very different. We see minorities empowered to spread hatred and politicians empowered through those minorities to create the politics of hate.

There’s a different view centered on conspiracy theories. Many of us once dismissed them as the irrational obsessions of cranks and fanatics rooted in ignorance. But ignorance is built into the development of AI; we don’t truly understand how these systems work. While we emphasize transparency, the reality is that the operation of our computer networks is a black box that even programmers struggle to grasp. The ideal of transparency is fundamentally flawed: things are transparent only when they’re simple. Despite our discussions about the need for greater transparency in areas like banking and politics, the lack of it means we can’t ensure accountability. If we can’t make these systems transparent, we can’t hold them accountable, and that’s already evident.

Take the case of the British postmasters [Horizon IT scandal]. Hundreds of them were wrongly convicted on the basis of a faulty computer system, which no one really knew was faulty. Once the fault was identified, there were a lot of people with a vested interest in suppressing it, including the manufacturers.

The question of accountability is key: we want to hold our rulers and our politicians accountable, but we don’t understand the systems that govern many of our activities. I think that’s hugely important. The people who recognized this aren’t so much the scientists or the people who talk about it, but rather the dystopian novelists and fiction writers. The famous ones, of course, like Orwell and Huxley, and also figures like Kafka, who anticipated the impenetrable bureaucracy that digital systems would become. You didn’t know what they wanted. You didn’t know what they were accusing you of. You didn’t know whether you were breaking the law or not breaking the law. How do we deal with that?

I’m a pessimist about our ability to cope with this, but I appreciate engaging with those who aren’t. The lack of understanding of the system is staggering. I often find the technology I use frustrating, as it imposes impossible demands while promising a delusional future of comfort. This ties back to Keynes and his utopia of freedom to choose. Why didn’t it materialize? He overlooked the issue of insatiability, as we’re bombarded with irresistible promises of improvement and comfort. One click to approve, and suddenly you’ve trapped yourself inside the machine.

LP: We’re having this virtual conversation, and it’s fantastic that we’re connected. But it’s unsettling to think someone might be listening in, recording our words, and using them for purposes we never agreed to.

RS: I’m in a parliamentary office at the moment. I don’t know whether they’ve put up any Big Brother-type system of seeing and hearing what we’re saying and doing. Someone might come in eventually and say, hey, I don’t think your conversation has been very useful for our purposes. We’re going to accuse you of something or other. It’s very unlikely in this particular case — we’re not at this kind of control envisaged by Orwell — but the road has sort of shortened.

And standing in the way is the commitment of free societies to freedom, freedom of thought, and accountability. Both of those commitments, one has to realize, were also based on the impossibility of controlling humans. Spying is a very old practice of governments. You had spies back in the ancient world. They always wanted to know what was going on. I have an example in my book (sorry, it’s not a very attractive one) from Swift’s Gulliver’s Travels, where they get evidence of subversive thoughts by looking at people’s feces.

LP: It’s not so far-fetched considering where technology is heading. We have wearable sensors that detect emotions and companies like Neuralink developing brain-computer interfaces to connect our brains to devices that interpret thoughts. We even have smart toilets tracking data that could be used for nefarious purposes!

RS: Yes, the incredible prescience of some of these fiction writers is striking. Take E.M. Forster’s The Machine Stops, written in 1909, more than a century ago. He envisions a society where everyone has been driven underground by a catastrophic event on the surface. Everything is controlled by machines. Then, one day, the machine stops working. They all die because they’re entirely dependent on it: air, food, everything relies on the machine. The imaginative writers and filmmakers have a way of discussing these things, which is beyond the reach of people who are committed to rational thought. It’s a different level of understanding.

LP: In your book, you highlight the challenges posed by capitalism’s insatiable drive for growth and profit, often sacrificing ethics, especially regarding AI. But you argue that the real opposition lies not between capitalism and socialism, but between machines and humanity. Can you explain what you mean by that?

RS: I think it’s difficult to define the current political debates or the forms politics is taking around the world using the old left-right division. We often mislabel movements as far right or far left. The real issue, in my view, is how to control technology and AI. You might argue there are leftist or rightist approaches to control, but I think those lines blur, and you can’t easily define the two poles based on their views on this. So one huge area of debate between left and right has disappeared.

But there is another area remaining, and that is relevant to what Keynes was saying, and that is the question of distribution. Neoclassical economics has increased inequality, and it’s put a huge amount of power in the hands of the platforms, essentially. Keynes thought that liberty would follow from the distribution of the fruits of the machine. He didn’t envisage that they’d be captured so much by a financial oligarchy.

So in that sense, I think the left-right divide becomes relevant. You’ve got to have a lot of redistribution. Redistribution, of course, increases contentment and reduces the power of conspiracy theories. A lot of people now think that the elites are doing something that isn’t in their interest, partly because they’re just poorer than they should be. The growth of poverty in wealthy societies has been tremendous in the last 30 or 40 years.

Ever since the Keynesian revolution was overturned, capitalism has been allowed to rampage through our society. That is where left-right is still important, but it’s no longer the basis of stable political blocs. Our Prime Minister says we aim to improve the condition of working people. Who are the working people? We’re working people. You can’t talk about class any longer because the old class blocs that Marx identified, between those who have nothing to sell except their labor power and those who own the assets in the economy, are blurred. If you consider the very, very rich and the rest, the division is still there. But you can’t recreate the old division of politics on that basis.

I’m not sure what the new political divisions will look like, but the results of this election in America are crucial. The notion that machines are taking jobs, coupled with the fact that oligarchs are often behind this technological shift, is hard to comprehend. When you present this idea, it can sound conspiratorial, leaving us tangled in various conspiracy theories.

What I long for is a level of statesmanship that is higher than what we’ve got at the moment. Maybe this is an old person’s idea that things were better in the past, but Roosevelt was a much greater statesman and politician than anyone on display in America today. This is true of a lot of European leaders of the past. They were of higher caliber. I think many of the best people are deterred from going into politics by the current state of the political process. I wish I could be more hopeful. Hopefulness is a feature of human beings. They have to have hope.

LP: People do need to have hope, and right now, the American electorate is facing anxiety and a grim view of politics with little expectation for improvement. Voters are stressed out and exhausted, wondering where that hope might lie.

RS: I would go to the economic approach here at this point. I don’t have much time for economic mathematical model building, but there are certain ideas that can be realized through better economic policy. You can get better growth. You can have job guarantees. You can have proper training programs. You can do all kinds of things that will make people feel better and therefore less prone to conspiracy thinking, less prone to hate. Just to increase the degree of contentment. It’s not going to solve the existential problems that loom ahead, but it’ll make politics more able to deal with them, I think. That’s where I think the area of hope lies.
