Is Cybersecurity Ready for AI and Machine Learning?

August 23, 2018 - 7-minute read

At this year’s Black Hat cybersecurity conference in Las Vegas, developers talked enthusiastically about machine learning and artificial intelligence (AI) applications in cybersecurity.

And because cybersecurity is a field that grows a bit larger every time a new piece of software or a new operating system version is released, anything that cybersecurity experts can employ to save time and effort should help everyone, right?

It turns out some experts are worried that cybersecurity firms aren’t doing enough to assess the risks that come with using AI in cybersecurity.

AI for Cybersecurity Without Proper Security for AI

The number of connected devices grows every day. So does the number of enterprises being hacked. And cybersecurity firms are facing a major talent shortage. But Raffael Marty, Vice President of Corporate Strategy at Forcepoint, doesn’t think these conditions warrant the use of AI in cybersecurity, especially if AI isn’t secure itself. He alludes to a more sinister possibility: “What’s happening is a little concerning, and in some cases even dangerous.”

Marty’s statements are pointed and sobering. Indeed, if AI doesn’t have adequate cybersecurity protections of its own, how can we even think about employing it in cybersecurity setups? And would we give AI access to cybersecurity credentials, the way we would give them to the office’s IT administrators?

Should every enterprise’s AI cybersecurity protocols be regulated and capped at a certain level? Or is it up to each enterprise to decide how much power it gives AI over its cybersecurity policies?

Humans have been interested in AI since we started programming; the concept dates back as far as the 14th century. Over the centuries, we’ve fleshed out the four types of AI and its subfields. But which, if any, would be the right one to employ in this scenario?

Back to Basics with AI

Type I AI is a highly specialized algorithm that is more reactive than proactive. Type II is a bit more complex; it stores memories of past experiences to use in later decision-making. Self-driving cars are an example of Type II AI. With Type III AI, the machine understands thoughts and emotions, interacts socially, and recognizes motives and intentions without being explicitly told them.

And Type IV AI is still a bit vague because we haven’t reached it yet; we do know it will be a self-aware being. Researchers particularly fear the implications and consequences of Type IV AI technology.

When reading about AI, you often hear about machine learning (ML) and deep learning (DL) as well. Both are subsets of AI: DL is a field within ML, and ML is a field within AI. The three are connected but still distinct enough to warrant separate categories.

Following the Hype Train Could Be Fatal

One reason many cybersecurity firms are adding AI to their solutions is that they know customers are riding the AI hype train right now. To most customers who lack technical knowledge, anything that uses AI, whether for the customer experience or behind the scenes, must be better than similar software without AI capabilities.

But following this hype can create a false sense of protection and superiority. Firms rolling out AI-enabled cybersecurity work off datasets that they manually label for malware. The algorithm gets trained on these datasets and is, in theory, designed to help catch malicious software before it hits the enterprise.
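To make that concrete, here is a minimal sketch of the kind of supervised pipeline being described: hand-labeled samples feeding a classifier that then judges new files. The features, labels, and library choice are illustrative assumptions, not any vendor’s actual setup.

```python
# Minimal sketch: training a malware classifier on manually labeled samples.
# The feature set and data here are hypothetical placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row is a file represented by hand-chosen features
# (e.g., file size, entropy, number of imported APIs); labels are analyst-assigned.
X = [
    [120_000, 7.9, 3],    # small, high entropy, few imports
    [450_000, 5.1, 210],  # typical desktop app
    [80_000, 7.7, 1],
    [300_000, 4.8, 150],
]
y = [1, 0, 1, 0]  # 1 = malware, 0 = benign (manual labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The trained model can now score new files before they reach the enterprise network.
print(clf.predict([[95_000, 7.8, 2]]))
```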

But Marty fears that these algorithms aren’t being trained with enough of the right type of data, the type that data scientists working in conjunction with AI researchers would feed their algorithms. Basically, many of these datasets are cookie-cutter, idealized representations: not the kind of data AI would encounter in real scenarios, and certainly not the kind that produces robust, versatile systems.

And if a hacker were to get access to those datasets, the consequences would be disastrous. A compromise could mean that training data is maliciously manipulated for months before anyone detects it.
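Here is a toy sketch, using hypothetical data, of why that kind of access is so dangerous: if an attacker quietly flips the labels on a few malicious samples, the retrained model learns to wave similar files through.

```python
# Toy illustration of training-data poisoning via label flipping.
# Data and features are hypothetical.
from sklearn.tree import DecisionTreeClassifier

X = [[7.9, 2], [7.8, 3], [5.0, 180], [4.7, 220]]  # [entropy, imported APIs]
y_clean = [1, 1, 0, 0]      # analyst labels: 1 = malware, 0 = benign
y_poisoned = [0, 0, 0, 0]   # attacker silently relabels the malware as benign

clean_model = DecisionTreeClassifier().fit(X, y_clean)
poisoned_model = DecisionTreeClassifier().fit(X, y_poisoned)

suspicious_file = [[7.85, 2]]
print(clean_model.predict(suspicious_file))     # flagged as malware
print(poisoned_model.predict(suspicious_file))  # slips past undetected
```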

Too Many Possibilities to Take On

Two of Microsoft’s cybersecurity experts, Holly Stewart and Jugal Parikh, spoke at Black Hat about the dangers of over-relying on a single algorithm. If that master algorithm is compromised, it takes the integrity of the entire cybersecurity setup down with it.

Microsoft’s Windows Defender has a solution for this specific problem: it uses a set of algorithms that are each trained on a different dataset. If one becomes compromised, the others should notice the change and compensate.
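Microsoft hasn’t published the exact mechanics, but the general idea resembles the following sketch: several models, each trained on its own dataset, vote on a verdict so that no single compromised model decides alone. The data, models, and voting rule here are stand-ins for illustration.

```python
# Illustrative sketch of an ensemble of independently trained models
# voting on a verdict (not Windows Defender's actual implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_dataset(seed):
    """Hypothetical stand-in for an independently collected, labeled dataset."""
    r = np.random.default_rng(seed)
    X = r.normal(size=(200, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labeling rule
    return X, y

# Train one model per dataset.
models = []
for seed in (1, 2, 3):
    X, y = make_dataset(seed)
    models.append(LogisticRegression().fit(X, y))

def verdict(sample):
    """Majority vote across the ensemble: 1 = malicious, 0 = benign."""
    votes = [m.predict(sample.reshape(1, -1))[0] for m in models]
    return int(round(sum(votes) / len(votes)))

print(verdict(np.array([1.2, 0.7, -0.3])))
```

The design choice is simple redundancy: an attacker who poisons or steals one training set still has to get past the models trained on the others.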

But beyond these one-off solutions, Marty points out that complex algorithms can often give results that make no sense. And since most algorithms don’t spit out an explanation of the logical steps they took to reach a conclusion, it doesn’t seem like a great time for AI and cybersecurity to get together.

While a simple AI algorithm can work wonders for enterprise-level cybersecurity protocols, even simple algorithms must be monitored closely to lower the potential for disaster.

Where do we draw the line on how much we can rely on AI for situations like these? Can cybersecurity maintenance ever become an autonomous task with tools like machine learning and deep learning? Would you trust AI to protect your enterprise system? Let us know in the comments!
