
A new book asks if AI can cause the kinds of nuclear disasters seen in movies

SCOTT SIMON, HOST:

There's a huge question at the heart of Edward Geist's new book: Will the matchless speed with which artificial intelligence can discover, evaluate, calculate and confirm information make the kind of nuclear catastrophes portrayed in popular films, including "WarGames," "Dr. Strangelove" and "The Terminator," more or less likely? His new book is "Deterrence Under Uncertainty: Artificial Intelligence And Nuclear Warfare." And Edward Geist, a policy researcher at the RAND Corporation, joins us. Thanks so much for being with us.

EDWARD GEIST: Well, thank you for having me.

SIMON: I'll begin with the easy stuff. You think some of these popular films have actually made good points over the years?

GEIST: I certainly think that there are aspects of these films that do reflect reality. However, the likely future of the intersection of artificial intelligence and nuclear strategy, I believe, will probably be stranger than fiction.

SIMON: The only thing I can think to say is, how so?

GEIST: So in the films "WarGames" and "The Terminator," the reason these narratives of nuclear-armed computers running amok are so ubiquitous is that they make for a great story. But it turns out that the would-be buyers of a WOPR, like in "WarGames," or Skynet in "The Terminator" - the military - are actually not so eager to replace human decision-makers with machines. That doesn't mean, however, that they are not interested. In fact, they are very interested in applications of artificial intelligence.

SIMON: You say that AI makes reasoning more possible in circumstances of uncertainty. That's a good thing, isn't it?

GEIST: Artificial intelligence researchers, for the last 60-plus years, have been searching for a way to make computers reason under uncertainty more effectively, because reasoning under uncertainty is one of the core tasks of intelligence. Unfortunately, one of the things they've discovered is that reasoning under uncertainty is difficult in the sense of being computationally intractable. What that means is that you can't solve your knowledge-quality problems - you can't make up for knowledge you don't have - just by buying a bigger computer.

SIMON: Will artificial intelligence let machines essentially plot to overthrow humanity? They have more brain power. They could be developed to have more strength. I feel my blood chilling as I phrase the question. And it might be impossible to pull the plug.

GEIST: One of the interesting implications of nuclear strategic theory, such as that articulated by Thomas Schelling back in the 1960s, is that the more rational actor is not necessarily going to prevail in strategic bargaining - that, in a sense, nuclear strategy is about the practical application of coercive bargaining strategies. And Schelling has all these wonderful examples of how you can be sort of adaptively nonrational.

You make your threat of irrational retaliation credible by actively compromising your rationality - say, taking out your hearing aid, throwing it away and making sure the other side knows you've thrown it away. Now you can't hear what they say, so they will be incentivized to capitulate, in part because they think that you are less rational than they are.

What worries me is less that the machines are more intelligent than humans; it's whether they could engage in coercive bargaining more effectively than humans do. The risk may come not from their being, quote, unquote, "more intelligent" but from their being equipped to be more ruthless.

SIMON: Because they lack morality?

GEIST: Well, unfortunately, human history suggests that humans...

SIMON: Yeah, that we do, too. Yeah.

GEIST: ...All too often lack moral scruples.

SIMON: We've already seen - and you've mentioned - how AI can make fakery appear more convincing. And those powers are only getting greater. That's alarming, isn't it?

GEIST: Oh, yes. In fact, I believe this is the key development we should be concerned about: The possibilities of AI for deception, including military deception, are becoming more apparent because of the generative AI revolution now ongoing.

SIMON: I'm wondering not about the United States, Russia and China. I'm wondering about somebody, some entity, with accomplished AI skills sitting somewhere else in the world...

GEIST: Yeah.

SIMON: ...That convinces the United States and Russia they are under attack by each other.

GEIST: Yeah.

SIMON: They start exchanging missiles that go back and forth and essentially destroy each other.

GEIST: Yeah. But fortunately, at least for the time being, doing that sort of thing remains pretty difficult. And one of the reasons for that is because of the way that we go about trying to confirm that a nuclear attack is underway.

SIMON: Here's the sentence in your book, I think, that most alarmed me.

GEIST: Yes.

SIMON: And it's a half-sentence, really. Quote - "some nuclear wars are potentially winnable." Now, I think a lot of people in your field, and certainly in the world generally, believe that nuclear war is unwinnable - to quote John F. Kennedy, "even the fruits of victory would be ashes in our mouths" - and that mutually assured destruction has kept the world from nuclear war for the past few decades.

GEIST: Right. As I continue, though, it's that some nuclear wars are potentially winnable, but that is only the case when you have an adversary that is willing to let you win. The obvious example of this is where an adversary starts a nuclear war - let's say there is a real nuclear attack - and the president decides that it's not a real nuclear attack, that it's a false alarm, and just fails to retaliate until it's too late. But if there's actually no retaliation at all, well, the other side is likely to inherit enough of a world that they could dominate it.

SIMON: You make a case for what you call tempered optimism...

GEIST: Yes.

SIMON: ...When it comes to AI. How so?

GEIST: Being unjustifiably pessimistic about potential outcomes is not doing ourselves as human beings, as Americans, any favors. Looking forward, we need to have, I think, a sensible concern for potential risks, but also not to descend into utter fatalism. We do need to think about potential threats. We need to hedge sensibly against them. And we need to be thinking about how we make policy that's robust against the threats that we aren't currently taking seriously.

SIMON: Edward Geist of the RAND Corporation - his new book, "Deterrence Under Uncertainty." Thank you so much for being with us.

GEIST: Thank you for having me.

SIMON: And I hope you have a happy holidays 'cause if you can manage to have a happy holidays, I suppose the rest of us can, too.

GEIST: (Laughter).

Transcript provided by NPR, Copyright NPR.
