Could AI Start a Nuclear War?

May 2, 2018 - 3 minute read

We’ve heard all sorts of scary things about artificial intelligence (AI). Whether it’s losing our jobs and having to rely on universal basic income or bracing for an AI robot takeover, these fears are often bolstered by statements from tech leaders like Elon Musk and Bill Gates.

Fortunately, as long as we never place AI in a position of such power that it can press the proverbial “red button,” it’s entirely possible that AI will never cause a nuclear war. But we are humans, and we do make mistakes. Now more than ever, it’s important to discuss the ethics and morality of AI and how much control we grant this technology.

How Much Should We Trust AI With?

The RAND Corporation recently hosted a conference with unnamed experts from the national security, nuclear weapons, and AI fields. The experts discussed and speculated about how AI will evolve and what that means for nuclear war. Much of the discussion revolved around hyper-intelligent computers running complex algorithms that could track intercontinental missiles and launch nuclear weapons in retaliation before the first bomb hits.

The military increasingly relies on computers to advance its domestic and overseas capabilities; they already help operate autonomous vehicles, nuclear arsenals, and detection systems. The RAND Corporation, a non-profit think tank based in Santa Monica, California, researches national security, which naturally includes how AI applications will affect our world, and it has also released a paper with more insights from these discussions.

Roll the Dice

The paper discusses how AI could advise human operators on when to launch missiles, but keeping a human between the AI and the “red button” isn’t guaranteed to last: future generations, facing their own political pressures, may not preserve that safeguard.

A strong case for keeping AI at least one degree away from the “red button” is the bugs and glitches inherent in software as it evolves. Undoubtedly, humans would need to keep refining the algorithm until it’s safe enough to put in a position of power. But should we even take that gamble? Sure, AI doesn’t have emotions driving its decisions as humans often do, but being emotionless isn’t ideal either (especially in the case of nuclear war).

Addressing the Problem Before It’s Reality

The researchers did note, however, that today’s AI excels only at narrow, well-defined tasks. An AI in charge of something as complex as nuclear weapons release would take far longer to create than we might assume, and it would then need further optimizing and refining before it worked well enough to deploy.

The thought of allowing a robot with “its own thoughts” to control the start of a nuclear war is terrifying. But, scary as it is, opening the floor for this conversation now is better than confronting the problem when it’s too late. How much would you trust AI with?
