How AI Can Help or Hurt IoT Security

January 30, 2020 - 6-minute read

If there’s one thing almost all emerging technologies lack, it’s strong cybersecurity standards and protocols. We’ve been developing innovative applications freely and creatively, but if we don’t keep an eye on hacking trends and enterprise-level security standards, we’ll see customers, profits, and business value drop quickly.

In fact, the lack of cybersecurity in IoT systems is hindering the field from reaching its full potential. To make matters worse, security features for IoT applications vary from developer to developer. Unfortunately, it isn’t uncommon for security to take a backseat during development until the very end, when there’s no budget or time left to implement a proper solution.

But this approach isn’t cutting it. In IoT applications, it’s not enough to create a set-it-and-forget-it security layer; IoT isn’t like a PC or any technology we’ve encountered before. Because an IoT system spans many sensors, devices, and a cloud component, it takes careful thought and planning to make it robust and secure.

Could AI help solve this conundrum before it’s too late? The short answer is yes and no. The long answer? Keep reading.

The Bright Side: Helpful AI

Right now, AI within IoT applications is limited to data analysis, predictive analytics, and generating notifications for a human to take a closer look. It does very well in this area, and there’s still a lot to learn and apply. But even this application produces a lot of false positives for humans to sift through manually, so applying AI to IoT cybersecurity carelessly could cause more issues than it solves.
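To make the false-positive problem concrete, here’s a minimal sketch of the “flag anomalies for a human” pattern described above, using scikit-learn’s IsolationForest on simulated device telemetry. The features, numbers, and threshold are illustrative assumptions, not any particular product’s pipeline:

```python
# A minimal sketch of anomaly flagging on simulated device telemetry;
# every feature name and number here is an illustrative assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" telemetry from one device: [packets/min, avg payload bytes]
normal_traffic = rng.normal(loc=[100.0, 512.0], scale=[10.0, 50.0], size=(1000, 2))

# Train on normal behavior only; `contamination` is a guess at the anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New readings: mostly normal, plus one obvious outlier (a sudden traffic spike).
new_readings = np.vstack([
    rng.normal(loc=[100.0, 512.0], scale=[10.0, 50.0], size=(50, 2)),
    [[900.0, 4096.0]],
])

# predict() returns -1 for anomalies; each flag becomes a notification that a
# human still has to triage, which is where the false-positive burden comes in.
for reading, flag in zip(new_readings, detector.predict(new_readings)):
    if flag == -1:
        print(f"Notify analyst: suspicious reading {reading}")
```

Even in this toy setup, the detector can flag perfectly normal readings; at IoT scale, those flags pile up into the manual sifting described above.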

Is it possible for AI to train on known patterns of security attacks and breaches? Yes, but we would have to run the IoT system through those attacks multiple times so the AI can learn each nuance properly. And when hackers change up their methods and patterns, we’ll have to retrain our AI-enhanced cybersecurity defenses on those changes immediately. Otherwise, this type of AI application can quickly become obsolete and useless.
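As a rough illustration of what “training on known attack patterns” and retraining might look like, here’s a sketch of a supervised classifier over simulated, labeled network-flow features. The features, labels, and attack profiles are assumptions invented for the example:

```python
# A rough sketch of training on known attack patterns, then retraining when a
# new pattern appears. The flow features (duration, bytes, packets) and attack
# profiles are assumptions made up for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_flows(center, n, label):
    """Simulate labeled network flows around a typical [duration, bytes, packets]."""
    X = rng.normal(loc=center, scale=[1.0, 200.0, 5.0], size=(n, 3))
    return X, np.full(n, label)

# Known-benign traffic and one known attack pattern (e.g., a flood).
X_benign, y_benign = make_flows([5.0, 1500.0, 20.0], 500, 0)
X_flood, y_flood = make_flows([1.0, 6000.0, 300.0], 500, 1)

model = RandomForestClassifier(random_state=0)
model.fit(np.vstack([X_benign, X_flood]), np.concatenate([y_benign, y_flood]))

# A hacker changes up their methods: a new slow-and-low attack pattern.
X_slow, y_slow = make_flows([30.0, 400.0, 5.0], 500, 1)

# Until we retrain on the new pattern, the model only knows the old one and
# will likely wave the new attack through as benign.
model.fit(
    np.vstack([X_benign, X_flood, X_slow]),
    np.concatenate([y_benign, y_flood, y_slow]),
)
print(model.predict([[30.0, 400.0, 5.0]]))  # now classified as an attack (1)
```

The retraining step is the maintenance burden in miniature: until the model sees a new pattern, it only knows the old ones.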

Even a company with full-time, dedicated cybersecurity and IT teams would also need to employ a full-time team of ethical hackers to constantly come up with new ways to breach its own security protocols. And even then, there’s no 100% guarantee that an IoT system is fully secured against every type of hacking attempt.

One major hurdle is the lack of training data available for these breaches; companies that have been breached in the past are unlikely to openly share details of how and why their security systems failed. Given the nature of the Internet, anyone could use that information maliciously against other companies, or against the same company again.

And more importantly, releasing information about a breach exposes personal and sensitive data, which could upset customers.

The Dark Side: Malicious AI

AI is what you make it. If you’re developing AI with malicious intent, it can certainly be used to bring down a business’s operations for a few days or leak private data into the open waters of the Internet.

And as hackers get smarter and more creative with the growing number of tools at their disposal, AI will be used to help breaches succeed rather than to prevent them. Experts have lovingly dubbed this type of AI “enemy-AI”.

Enemy-AI is arguably easier to develop, train, and apply to attacks than defensive AI: it will take any training data it can get, because almost any information can help it facilitate a security breach.

Remember how companies won’t release breach information to the public? Enemy-AI has no such ethical constraint. We must assume that hackers follow no morals or ethics; if a piece of information can yield losses for a company, we must assume that hackers find it valuable.

Which Will Take the Cake?

The cynic in us believes malicious AI can easily win over defensive AI. But there is hope yet, because it’s likely that neither “side” will ever truly win. There is a ton of value in using AI for IoT security; what that value is used for depends on who gets to it first and who implements it best.

Ultimately, developers must prioritize security as a core part of IoT systems development. At our Los Angeles-based mobile app development studio, we plan for security layers in our applications from Day 1; we believe cybersecurity is so important to connected devices and systems that our own CTO is an ethical hacker!

Although AI has limitless potential for every industry today, we mustn’t expect it to do all of the heavy lifting, especially in the ever-evolving world of cybersecurity. AI is a tool that imperfectly does what we tell it to do, and we mustn’t expect anything more from it — at least for now.