AI in 2019: Advancements in Machine Learning, Natural Language Processing, and More

July 31, 2019 - 7 minute read

Artificial intelligence (AI) is advancing at a breakneck pace, which makes it hard to keep track of and evaluate every important AI development. To solve this problem, angel investor Ian Hogarth and RAAIS and Air Street Capital founder Nathan Benaich have put together the State of AI Report 2019.

In this recently published 136-slide piece, Hogarth and Benaich dissect all things AI, from recent breakthroughs in the technology and areas of application to politics and financial impact. Both authors have extensive AI experience spanning research and startups, and they consulted several prominent AI figures, such as Google AI researcher François Chollet and Facebook AI researcher Sebastian Riedel, for additional insight.

The result? A rich resource for getting an overview of the current AI landscape and where it’s headed — something everyone should be familiar with, since AI will undoubtedly impact all of our lives. As Hogarth and Benaich explain, “We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.”

So without further ado, let’s jump in.

Machine Learning & Gaming

As we discussed in our AI 101 session, reinforcement learning (RL) is a subfield of machine learning in which an agent learns how to act through trial and error. Basically, the agent discovers which actions yield the greatest rewards and adjusts its behavior accordingly. RL is often applied in robotics, navigation, and gaming.
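To make that trial-and-error loop concrete, here’s a minimal tabular Q-learning sketch (our own illustration; the report contains no code). The states and actions are hypothetical placeholders:

```python
# Minimal tabular Q-learning sketch: the agent tries actions, observes
# rewards, and shifts its behavior toward whatever has paid off best so far.

ALPHA = 0.1    # learning rate: how strongly new experience updates old estimates
GAMMA = 0.9    # discount factor: how much future rewards matter
ACTIONS = ["up", "down", "left", "right"]

q_table = {}   # maps (state, action) -> estimated long-term reward

def q(state, action):
    """Current estimate of long-term reward for taking `action` in `state`."""
    return q_table.get((state, action), 0.0)

def update(state, action, reward, next_state):
    # Q-learning update: nudge the estimate toward the observed reward
    # plus the best value currently attainable from the next state.
    best_next = max(q(next_state, a) for a in ACTIONS)
    q_table[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action)
    )
```

Over many episodes, the table converges toward the actions that earn the most reward, which is the “adjusts its behavior accordingly” part in miniature.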

RL has received plenty of attention from researchers over the past few years, with a substantial portion of its progress being made by training AI to play video games. As a result, we now have AI that can equal or surpass human performance in popular games like Quake III Arena and StarCraft II.

But beyond these capabilities, one question keeps popping up: Can AI’s abilities be expanded through playing games? Playtime is one of the best ways that children learn. Not only does it offer kids a chance to learn and practice different strategies in a low-risk environment, but it also allows them to explore based on curiosity.

OpenAI researchers sought to explore this concept by training a robot, largely in simulation, to quickly shuffle physical objects. Using computer vision, the robotic system predicts the objects’ pose from three camera images and then applies RL to choose its next move.
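The perceive-then-act loop described there looks roughly like the sketch below. Note that `pose_model`, `policy`, `get_camera_images`, and `send_to_robot` are hypothetical placeholder names of our own, not OpenAI’s actual API:

```python
# Hypothetical sketch of the perceive-then-act loop: vision estimates the
# object's pose from three camera views, then an RL policy picks the action.

def control_step(pose_model, policy, get_camera_images, send_to_robot):
    images = get_camera_images(n=3)    # three camera views of the scene
    pose = pose_model.predict(images)  # vision: estimate the object's pose
    action = policy.act(pose)          # RL policy: choose the next move
    send_to_robot(action)              # execute on the physical robot
```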

Because RL relies on trial and error, an RL agent must balance exploration (experimenting with new strategies and behaviors) with exploitation (repeating what works best). Unfortunately, rewards can be difficult to encode in real-world scenarios. Can gaming supply the training sandbox that RL needs to move through the real world gracefully?
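The simplest way to strike that balance is an epsilon-greedy rule: explore at random a small fraction of the time, and exploit the best-known action otherwise. A minimal sketch of our own, continuing the hypothetical Q-table above:

```python
import random

EPSILON = 0.1  # fraction of steps spent exploring

def choose_action(state, q, actions):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)               # explore: try something new
    return max(actions, key=lambda a: q(state, a))  # exploit: repeat what works best
```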

In an interview with ZDNet, Benaich notes that games can certainly help, but they aren’t a complete solution. “Data that is generated in a virtual environment is often less expensive and more widely available, which is great for experimentation.” He continues, “What’s more, game environments can be made more or less complex depending on the goals of the experiment in model development. However, the majority of games do not accurately mimic the real world and its plentiful nuances. This means that they’re a great place to start, but not an end in themselves.”

Natural Language Processing & Reasoning

2019 has been a monumental year for natural language processing (NLP). With Google AI’s BERT, OpenAI’s GPT-2, Microsoft’s MT-DNN, and many other endeavors, it’s become clear that pre-trained language models can radically improve performance on a wide range of NLP tasks.

Pre-trained models have already brought vast improvements to computer vision, where networks learn both low- and high-level visual features (thanks in large part to ImageNet). Now, language models are receiving a similar boost in low- and high-level language feature capabilities. Typically trained on immense amounts of unlabeled text from the Internet, these pre-trained language models could eventually be scaled up and unlock new commercial uses.
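As an illustration of that pretrain-then-fine-tune pattern, here’s a sketch using the open-source Hugging Face transformers library (our choice of tooling; the report doesn’t prescribe any library, and the model name and task are just examples):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a model pre-trained on huge amounts of unlabeled text, with a fresh
# classification head for a small labeled task (e.g. sentiment).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# The heavy lifting (learning general language features) already happened
# during pre-training; fine-tuning only adapts the model to the new task.
inputs = tokenizer("The report is a great read.", return_tensors="pt")
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
loss.backward()  # one fine-tuning gradient step (optimizer omitted for brevity)
```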

In their report, Hogarth and Benaich focus on the General Language Understanding Evaluation (GLUE) benchmark, which evaluates NLP systems across a suite of tasks, to show the fast pace of progress: in just 13 months, modern NLP systems have raised their GLUE scores from 69 to 88, surpassing the human baseline of 87.

Because language is intimately related to human cognition, these advancements could also improve AI’s common-sense reasoning. Researchers from New York University have shown that neural models can acquire simple common sense, and even reason about previously unseen events, by training on a dataset’s inferential knowledge.

Deep Learning & Domain Knowledge

Since an NLP model’s common sense can improve with a dataset’s inferential knowledge, does this mean that combining deep learning and domain knowledge could lead to even more fruitful outcomes? Benaich certainly thinks it’s an avenue worth exploring:

“Domain knowledge can effectively help a deep learning system bootstrap its knowledge of the problem by encoding primitives instead of forcing the model to learn these from scratch using (potentially expensive and scarce) data.”
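To make “encoding primitives” concrete, here’s an illustrative sketch of our own (neither the report nor the interview includes code). The domain, feature names, and thresholds are hypothetical:

```python
# Hypothetical example: hazard screening for chemical samples. A chemist
# already knows these thresholds; encoding them as input features spares the
# model from learning them from scratch using scarce labeled data.

def domain_features(sample):
    return [
        1.0 if sample["ph"] < 2.0 or sample["ph"] > 12.5 else 0.0,  # corrosive (known pH thresholds)
        1.0 if sample["flash_point_c"] < 60.0 else 0.0,             # flammable (known flash-point cutoff)
        sample["ph"] / 14.0,                                        # raw pH, scaled to [0, 1]
    ]

# These primitives would be concatenated with learned representations and fed
# into the downstream deep model, rather than forcing the network to rediscover
# "corrosive" and "flammable" on its own.
features = domain_features({"ph": 1.5, "flash_point_c": 40.0})
print(features)  # [1.0, 1.0, 0.107...]
```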

However, Benaich was also quick to note that advancing AI’s common-sense reasoning further will take more than text alone.

If you’d like to learn more about the current state of AI and its near future, you can see the full State of AI Report 2019 here. Stay tuned for the second part of our coverage of this report, where we’ll dive into hardware, automation, and the complicated politics surrounding AI.
