The Dangerous Potential Exploits of AI

March 2, 2018 - 5-minute read

The possibilities with artificial intelligence (AI) seem endless. At first read, that sentence carries a positive connotation. But just as with any other tool, AI can be put to nefarious purposes. And while it’s certainly unpleasant to ponder, it’s something techies, developers, and the general public need to be aware of.

An Ugly Truth

We’re all familiar with the sci-fi trope of evil AI: machines hacking into infrastructure, killer robot armies, and invincible superhuman cyborgs make for fun film entertainment. What’s not so fun is when the fiction isn’t far off from reality. Automated hacking, fake videos engineered to sway public opinion, and machines controlling missiles are not so far-fetched, at least according to the recently released report “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

Running 100 pages, with 26 authors from 14 institutions, the paper draws on a two-day workshop held in Oxford, about an hour from London. Developers and researchers from industry and academia identified three key areas (digital, physical, and political) in which AI could cause serious disruption if put in the wrong hands. The authors also deliberately focused on potential misuses of AI that could occur within the next five years.

Miles Brundage, a research fellow at Oxford University’s Future of Humanity Institute, explains the importance of this paper: “AI will alter the landscape of risk for citizens, organizations and states — whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling, and repression — the full range of impacts on security is vast.”

Humanly Impossible Intelligence

One of the biggest concerns raised in the report stems from recent progress in reinforcement learning, in which AI systems are trained to superhuman levels of proficiency through trial and error rather than human examples. Brundage elaborates, “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labor.”
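To make “training by trial and error” concrete, here is a minimal sketch of tabular Q-learning, the textbook form of reinforcement learning, on an invented toy task (the five-state corridor, reward, and hyperparameters below are illustrative assumptions, not anything taken from the report); systems like AlphaGo replace the lookup table with deep neural networks and vastly richer environments.

```python
import random

# Illustrative toy problem: an agent on a 5-cell corridor (states 0..4)
# starts in the middle and earns a reward only upon reaching the right end.
N_STATES = 5
ACTIONS = [-1, +1]                     # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: estimated long-run value of taking each action in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 2
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Temporal-difference update: nudge the estimate toward the observed
        # reward plus the discounted value of the best next action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned policy: the best action in each non-terminal state (here, +1).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

The point is the shape of the loop, not the toy itself: the agent acts, observes a reward, and updates its value estimates, with no human-labeled examples anywhere in sight. That same loop, scaled up, is what the report’s authors worry could be pointed at hacking or target identification.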

Shahar Avin, a researcher at Cambridge University’s Centre for the Study of Existential Risk, covers some of the possibilities: advanced systems like AlphaGo could be leveraged by hackers to discover new security exploits; AI could streamline manipulation and impersonation by perfecting the creation of fake video and audio; a drone could even, in theory, be trained to target specific individuals. The list, unfortunately, goes on.

A Call to Action for AI Developers

Dr. Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk and one of the paper’s authors, explains how much responsibility comes with AI: “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call to action for governments, institutions and individuals across the globe.”

The report outlines an array of actions we can take now to mitigate potential misuses: making AI systems harder to exploit, for example, and backing that up with enforceable laws and regulations. But looking across the numerous approaches, it becomes clear that making AI safe comes down to two main factors: teamwork and a proactive attitude.

For such a dismal topic, it’s heartening to see the report allow for a positive future with AI, provided everyone involved works together. It’s important for all stakeholders to engage with one another and understand the different perspectives on this matter. For a conundrum as complex as intelligent machines, the solution is surprisingly simple in concept. Now all we have to do is follow through.
