Is the Era of Deep Learning Coming to an End?

February 4, 2019 - 9-minute read

Artificial intelligence (AI) development is filled with turbulence. Today’s dominant techniques and schools of thought can become tomorrow’s outdated methodologies with little warning.

To see where AI is going next, MIT Technology Review analyzed the past few decades of research in the field. The result? We may be approaching the decline of deep learning.

The Death of Deep Learning?

By now, you’ve probably heard of at least one technological wonder made possible through the marvel of AI. Maybe you’ve heard about how AI fuels Facebook’s news feed or powers Google’s search engine. Or perhaps you appreciate how it refines Netflix’s recommendation system so you can binge-watch only the best content worthy of your time.

Either way, nearly all the AI accomplishments you hear about are possible due to deep learning, a subset of machine learning that relies on deep neural network architectures — networks with many stacked layers — to make decisions or solve problems. And as crazy as it sounds, deep learning may be on its way out.
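To make “deep” concrete, here’s a minimal sketch of a deep neural network using scikit-learn on a toy dataset. It’s purely illustrative — the dataset, layer sizes, and library choice are our own, not how Facebook, Google, or Netflix actually build their systems:

```python
# A toy "deep" network: several stacked hidden layers instead of one.
# Illustrative only -- dataset and layer sizes are made up for this sketch.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for real-world data (images, clicks, viewing history).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The three hidden layers are what make this network "deep" rather than shallow.
model = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```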

Don’t worry, you’re not alone in thinking this is absurd. Pedro Domingos, a computer science professor at the University of Washington, knows how preposterous it all sounds: “If somebody had written in 2011 that this was going to be on the front page of newspapers and magazines in a few years, we would’ve been like, ‘Wow, you’re smoking something really strong.’”

The Evolution of AI

While deep learning has been revolutionary for AI, it’s actually only been at the forefront for less than 10 years. And when you examine the entire field’s history, one fact becomes readily apparent: Change is the only constant in AI.

Every decade sees the rise and fall of a variety of techniques and ideas. The turnover can be abrupt, heated, and seemingly random, but it’s as consistent as clockwork. MIT Technology Review wanted to visualize these ebbs and flows more precisely, so it downloaded the abstracts of all 16,625 AI papers available on the scientific-paper database arXiv and analyzed their word usage to see how the field has evolved over the past 25 years.
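MIT Technology Review hasn’t published its analysis code, but the approach it describes amounts to counting term frequencies in abstracts over time. Here’s a hedged sketch of that idea, with invented stand-in abstracts in place of the real 16,625:

```python
# A sketch of the word-frequency analysis described above. The real study
# used 16,625 arXiv abstracts; these three entries are invented stand-ins.
from collections import Counter, defaultdict

abstracts = [
    (1998, "a rule and constraint based system for logic programming"),
    (2012, "deep neural network improves performance on image data"),
    (2018, "reinforcement learning agent trained with reward signals"),
]

# A few of the terms the MIT Technology Review piece tracked.
terms = {"rule", "logic", "constraint", "data", "performance", "network",
         "reinforcement"}
counts_by_year = defaultdict(Counter)

for year, text in abstracts:
    for word in text.split():
        if word in terms:
            counts_by_year[year][word] += 1

for year in sorted(counts_by_year):
    print(year, dict(counts_by_year[year]))
```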

It’s important to note that the term “artificial intelligence” goes all the way back to the 1950s, while arXiv’s AI archives only reach back to 1993. MIT Technology Review also emphasized that these papers were still only a fraction of the actual work being done in the field.

Still, the Boston-based publication did manage to identify three major trends: the growth of machine learning in the late 1990s and early 2000s, the rise of neural networks during the early 2010s, and a shift toward reinforcement learning in recent years.

Rise of the Machines

Before the early 2000s, knowledge-based systems were the status quo. These are computer programs built on the idea that all human knowledge can be encoded as rules. In the ’80s, such systems shot up in popularity amid the excitement around (extremely) ambitious initiatives that tried to recreate human common sense in machines.

Unfortunately, researchers working on these projects ran into a major roadblock: it was far too arduous and time-consuming to encode all the rules needed to make an even slightly useful system.

Enter machine learning. This emerging technology allowed machines to automatically extract the rules they needed from a mountain of data. Researchers could say goodbye to hours of encoding thousands of rules for little payoff. It didn’t take long after this for the entire knowledge-based systems field to pivot toward machine learning.
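To see the contrast in miniature, here’s a toy spam filter done both ways: first with hand-coded rules, then with a model that extracts its own patterns from labeled examples. The keywords, messages, and labels are all invented for illustration:

```python
# A toy contrast between the two paradigms, using an invented spam filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Knowledge-based approach: a human hand-encodes every rule.
def spam_by_rules(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Machine-learning approach: the model extracts its own patterns from data.
messages = ["free money now", "winner claim your prize",
            "lunch at noon?", "meeting notes attached"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (tiny invented dataset)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(spam_by_rules("claim your free money"))    # True, because a rule matches
print(model.predict(["claim your free money"]))  # learned from data, not hand-coded
```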

As a result, published research experienced a shift in its vernacular. Words related to knowledge-based systems, like ‘logic,’ ‘rule,’ and ‘constraint,’ all experienced a drastic decline in usage. Inversely, terms associated with machine learning, like ‘data,’ ‘performance,’ and ‘network,’ all became much more common.

A Clear Perspective With Neural Networks

While machine learning became the obvious avenue for AI innovation, it would still take a number of years before neural networks, the main vehicle of deep learning, would rise in prominence.

As MIT Technology Review’s analysis shows, the 1990s and 2000s were a time of competition among a variety of AI techniques. Support vector machines, evolutionary algorithms, and Bayesian networks were all being tested, researched, and implemented. Though the approaches differed, each focused on finding patterns in data.
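As a taste of that pre-deep-learning toolkit, here’s a minimal support vector machine classifying scikit-learn’s built-in iris dataset. It’s a sketch of the era’s general pattern-finding recipe, not any specific system from the period:

```python
# A minimal example of one pre-deep-learning pattern-finder: a support
# vector machine classifying scikit-learn's built-in iris dataset.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The SVM looks for a maximum-margin boundary between classes -- pattern
# finding in data, just like its contemporaries, by a different route.
clf = svm.SVC(kernel="rbf")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```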

In 2012, a breakthrough at the ImageNet competition changed AI forever. The annual contest is designed to push progress in computer vision research, and that year it certainly did: a research group from the University of Toronto achieved the best image-recognition accuracy by a wide margin, beating the runner-up by more than 10 percentage points.

Deep learning, the technique they used to accomplish this, spread like wildfire through the computer vision community. Shortly after, its popularity (and that of neural networks) began spreading to other fields.

The Shift to Reinforcement Learning

In the years since deep learning’s explosive rise in popularity, MIT Technology Review’s analysis shows a third paradigm shift underway: the rise of reinforcement learning.

Beyond the many different techniques in machine learning, there are also three types of learning to consider: supervised, unsupervised, and reinforcement learning. In supervised learning, a machine is fed labeled data so it can learn to map inputs to known outputs. It’s by far the most commonly used type.

The process of reinforcement learning mimics the act of training animals through a punishment-reward system. It’s nothing new, but for many years, it just wasn’t viable. “The supervised-learning people would make fun of the reinforcement-learning people,” says Domingos.
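To give the punishment-reward idea some substance, here’s a tiny tabular Q-learning sketch on a hypothetical five-cell corridor, where the agent earns a reward for reaching the far end and pays a small penalty for every step. It has nothing to do with DeepMind’s actual training setups:

```python
# A tiny tabular Q-learning sketch of the punishment-reward idea, on a
# hypothetical five-cell corridor. Reaching the far end earns a reward;
# every step costs a small penalty. Nothing like AlphaGo's scale.
import random

n_states, actions = 5, [-1, +1]  # cells 0..4; move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01  # reward vs. punishment
        best_next = max(Q[(next_state, a)] for a in actions)
        # Standard Q-learning update rule.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy marches right toward the reward.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states)})
```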

But in recent years, reinforcement learning has experienced a resurgence in research-paper abstracts. And just like deep learning, it all boils down to one moment: in October 2015, DeepMind’s AlphaGo defeated European Go champion Fan Hui, the first time a machine had beaten a professional Go player, and in March 2016 it went on to defeat world champion Lee Sedol. Guess what it was trained with? You got it: reinforcement learning. Just as with deep learning, it didn’t take long for the AI research community to give the technique a second look.

What’s Next for AI?

MIT Technology Review’s analysis of thousands of abstracts illustrates the ongoing competition of ideas in the field of AI. Unfortunately, it also reveals one other key insight: “The key thing to realize is that nobody knows how to solve this problem,” explains Domingos.

Most of the techniques highlighted in the analysis date back to the 1950s. As each met triumphs and setbacks in implementation, its popularity rose or fell accordingly.

If the past is any indicator of the future, the next decade should look much the same, which means the era of deep learning may indeed be coming to an end. Whether that end will come from a new idea or from an old technique regaining favor with the AI community is anyone’s guess.

Where do you think AI will go from here? Will deep learning still be in use 10 years from now? Let us know your thoughts in the comments!
