4 Problems We Must Solve for AI to Advance

February 22, 2021 - 8 minutes read

In 2020, the pandemic pushed emerging technologies like the Internet of Things, artificial intelligence (AI), and machine learning (ML) to advance faster than ever before. At NeurIPS, an international AI conference, researchers and enthusiasts explored how AI intersects with our biology and how we can improve AI applications by combining multiple AI and ML algorithms, an approach called ensemble learning. The conference also made time for discussions of AI's big-picture problems that developers will need to contend with for the foreseeable future, like robustness, bias, and generalization.

In this post, we’ll cover the four areas of concern in AI that we’re watching this year. All four of these problems exist already and stand to cause more issues as AI applications scale over the next decade.

1. Deep Learning’s Greed

Deep learning is a subfield of ML, but classical machine learning applications haven't achieved the "humanness" that deep learning algorithms have. Unfortunately, deep learning algorithms are incredibly resource-intensive. They require thousands of examples of whatever they need to learn, and they waste a great deal of energy and time while learning.

This is probably the least human part of deep learning: the act of learning takes far more effort than a human usually needs to learn something new. And once a deep learning algorithm is done training, it can usually identify only exactly what it was trained on. A young child, by contrast, might know they're looking at a dog even if they don't know the exact breed. The child could also infer what a unicorn looks like without ever seeing a photo of one, if someone told them that unicorns are like horses with the horn of a narwhal.

But a deep learning algorithm couldn't form that mental picture of a unicorn, nor could it identify an animal as a dog if no dog appeared in its training data. In AI terms, achieving "less than one-shot" learning, wherein the algorithm can learn to recognize more concepts than it has direct training examples and infer the rest, would revolutionize how we apply AI to our world.
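To make the idea concrete, here's a toy sketch of the soft-label approach behind "less than one-shot" learning: two training points carry probability mass over three classes, so a simple distance-weighted classifier can recognize a class it never saw a dedicated example of. This is an illustration of the concept, not any published method's exact implementation, and every name and number in it is invented.

```python
import numpy as np

# Toy "less than one-shot" learning: two training points carry soft
# labels over THREE classes, so the classifier can recognize a class
# it never saw a dedicated example of (class 2 sits "between" them).
prototypes = np.array([[0.0, 0.0],    # mostly class 0, partly class 2
                       [4.0, 0.0]])   # mostly class 1, partly class 2
soft_labels = np.array([[0.6, 0.0, 0.4],
                        [0.0, 0.6, 0.4]])

def predict(query):
    """Blend the prototypes' soft labels, weighted by inverse distance."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    weights = 1.0 / (dists + 1e-9)         # nearer prototypes count more
    probs = weights @ soft_labels / weights.sum()
    return probs.argmax(), probs

print(predict(np.array([0.5, 0.0])))  # class 0 wins near the left point
print(predict(np.array([2.0, 0.0])))  # class 2 wins midway, never "seen"
```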

2. Creating an Ensemble with Deep Learning

Deep learning algorithms have great potential to solve narrowly scoped problems, but they take a lot of time and effort to train. AI experts have been floating the idea of pairing deep learning with another AI method, for example one that handles search more efficiently, to create a more productive ensemble of learning algorithms. London-based DeepMind, a subsidiary of Alphabet, has been experimenting with combining multiple AI approaches in its most recent game-playing system, MuZero.

MuZero has shown a high level of success at playing games with a human-like approach. Instead of being handed the rules or trained on prior knowledge, it learns by observing the game's environment and its own gameplay. After playing millions of games, the algorithm develops a general idea of how to play and how to win. To build it, the researchers combined tree search, specifically Monte Carlo tree search, with a learned model of the game.
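MuZero's actual architecture is far more involved, but the core idea, planning by searching over futures predicted by learned models rather than hand-coded rules, can be sketched in miniature. The snippet below uses plain depth-limited search in place of Monte Carlo tree search, and the two `learned_*` functions are hand-written stand-ins for neural networks; everything here is illustrative.

```python
import math

# Miniature of MuZero's core idea: choose actions by searching futures
# predicted by *learned* models instead of hand-coded game rules. The
# real system uses Monte Carlo tree search and deep networks; here,
# plain depth-limited search and two hand-written stand-in functions.

def learned_value(state):
    # Stand-in for a value network: rates a state, peaking at the goal 10.
    return 10 - abs(state - 10)

def learned_dynamics(state, action):
    # Stand-in for a dynamics network: predicts the next state.
    return state + action

ACTIONS = (-1, 0, 1)

def plan(state, depth=3, discount=0.9):
    """Search the model's predicted futures; return (value, best action)."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = -math.inf, None
    for action in ACTIONS:
        nxt = learned_dynamics(state, action)
        future, _ = plan(nxt, depth - 1, discount)
        value = learned_value(nxt) + discount * future
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action

state = 0
for _ in range(12):
    _, action = plan(state)
    state = learned_dynamics(state, action)
print(state)  # the agent reaches and holds the goal state, 10
```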

3. AI’s Inability to Truly Learn

How can we know whether something has truly learned? It can entertain unspoken ideas (inference), and it can accept or reject new concepts based on what it already knows (common sense). As an example, OpenAI's GPT-3 algorithm has shown it can produce human-like natural language that is difficult to distinguish from text written by a person.

But GPT-3's fluent output cloaks its true language proficiency. It can model the relationships between words and when to use them, yet it still doesn't understand what the language is trying to convey. Improving GPT-3 would likely require combining its unsupervised training with another AI technology, such as computer vision. Allowing algorithms to "see" could quickly upskill the AI to understand its environment and relate language to the physical objects around it. But while an algorithm like GPT-3 is unsupervised, meaning it doesn't need human-labeled data to learn, computer vision is incredibly tedious because it typically requires manual labeling of images.
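As a hands-on illustration of that fluent-but-shallow generation, here's a minimal sketch that samples text from a large language model. GPT-3 itself sits behind OpenAI's API, so this uses the freely downloadable GPT-2 via the Hugging Face transformers library (assumed installed); the point about fluency without understanding applies to both.

```python
# Sketch: sampling fluent text from a large language model. GPT-3 is
# API-only, so the freely available GPT-2 stands in for it here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "A unicorn is like a horse with",
    max_length=40,            # cap the total length of the output
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The continuation will usually read smoothly, but the model has no grounded sense of what a horse or a horn actually is; it only knows which words tend to follow which.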

4. AI’s Fragility

Like many emerging technologies, AI is fragile. Its output can be commendable and interesting, but a single cybersecurity attack can stop an entire computation or algorithm in its tracks. Worse yet, if someone muddles the training data by injecting incorrect examples, a technique known as data poisoning, the AI has no way of knowing that the new inputs are wildly off compared to past data. It will train on the poisoned data, and that will certainly affect its final output.
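One simple precaution, sketched below, is to screen incoming training data against the statistics of a trusted historical set and quarantine anything that deviates wildly. A real defense against data poisoning needs far more than a z-score check; this is only a minimal sketch on synthetic data.

```python
import numpy as np

# Minimal sketch of one precaution against poisoned training data:
# compare each new sample to the statistics of trusted historical data
# and quarantine anything that deviates wildly. A z-score check alone
# is not a real defense; it only illustrates the idea.

rng = np.random.default_rng(0)
trusted = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # clean history
mean, std = trusted.mean(axis=0), trusted.std(axis=0)

def screen(batch, z_threshold=4.0):
    """Split an incoming batch into accepted and quarantined samples."""
    z = np.abs((batch - mean) / std)
    suspicious = (z > z_threshold).any(axis=1)
    return batch[~suspicious], batch[suspicious]

incoming = np.vstack([rng.normal(size=(98, 4)),
                      np.full((2, 4), 25.0)])   # two poisoned rows
accepted, quarantined = screen(incoming)
print(len(accepted), "accepted,", len(quarantined), "quarantined")
```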

This fragility of AI is called "brittleness," and it could mean the difference between life and death in the future. For example, because Tesla's Autopilot is trained on road markings, a bad actor could place misleading stickers on the road and cause accidents and fatalities. Another example is an AI associating a color with cancer simply because it was trained on medical images that carried colored annotations from physicians.
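The road-sticker scenario belongs to a family of attacks built on "adversarial examples," and the simplest of these, the fast gradient sign method (FGSM), is easy to sketch: nudge every input value slightly in the direction that increases the model's loss. The toy linear model below is a stand-in for a real image classifier, and only the loss increase, not a full prediction flip, is guaranteed in this miniature setup.

```python
import torch

# Toy FGSM demo: perturb an input in the direction that increases the
# model's loss. The tiny linear "classifier" stands in for a real
# image model; epsilon caps how far each input value may move.
torch.manual_seed(0)
model = torch.nn.Linear(16, 2)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.rand(1, 16, requires_grad=True)   # stand-in "image"
label = torch.tensor([0])                   # its true class

loss = loss_fn(model(x), label)
loss.backward()                             # gradient of loss w.r.t. x

epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()  # FGSM perturbation

adv_loss = loss_fn(model(x_adv), label)
print(f"loss on clean input: {loss.item():.3f}")
print(f"loss after FGSM:     {adv_loss.item():.3f}")  # always higher
```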

This weakness requires us to program a level of flexibility into the algorithms we build. AI also needs to start thinking more critically. Imagine if you believed everything you heard; that's exactly the problem AI has when it comes across malicious training data. Unfortunately, AI doesn't have the common sense to think twice, and this could be its downfall later on. When AI applications use sensitive and private data, it's imperative that we build multiple precautions against attacks into our algorithms.


2021: The Year of AI?

After AI's rapid run-up to 2021, this is the year we'll see AI research continue to expand in interesting and novel directions, innovations using ensemble learning, and new applications that improve our lives. But unless we start tackling these four major issues, we'll spend the last years of the decade trying to band-aid these problems across millions of AI applications.
