Can AI Learn Common Sense?

May 14, 2020 - 9-minute read

Thanks to a method known as supervised learning, artificial intelligence (AI) development has advanced by leaps and bounds in the last few years. Numerous AI applications have become possible as a result of these advancements, causing the technology to feel like it’s everywhere.

But behind the scenes, innovators are already exploring what’s next for AI. And many of them have set their sights on one of the most elusive goals in the field: giving AI its own sort of common sense.

The Need to Go Beyond Supervised Learning

Look at a few pieces of modern technology, and you’ll likely find AI embedded somewhere between the nuts ‘n’ bolts. Dig deeper, and you’ll realize that what you’re actually observing is supervised learning. This machine learning technique lets AI systems map specific inputs to outputs based on examples. In much the same way we teach children to read, supervised learning teaches computers to see patterns.

But top AI researchers believe that the future of AI depends on going beyond this approach. To attain human-level intelligence, AI will need to be able to learn on its own. After all, humans do it all the time. Yes, we can teach children how to read and write. But teaching babies to stand or walk? That takes trial and error on their part.

The ability to derive insights from seemingly disparate sources is what many people consider common sense. And current supervised learning systems lack it. According to David Cox, IBM Director of the MIT-IBM Watson AI Lab, even if an AI system read all the books in existence, it would still not have human-level intelligence, because a lot of humanity’s knowledge isn’t written down. To acquire that knowledge, a system has to infer it.

Supervised learning depends on annotated data to work. Workers painstakingly label images, audio, or text, then feed this information to computer algorithms. After digesting a mountain’s worth of this data, the algorithms become proficient at recognizing what they’ve been conditioned to see.
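To make that concrete, here’s a minimal sketch of the idea in Python. The “images” are just made-up feature vectors and the model is a toy perceptron, but the workflow is the same: hand-labeled examples go in, and the system adjusts itself until it reproduces those labels.

```python
# A toy perceptron trained on labeled examples, invented for illustration.
# Label 1 means "cat", 0 means "not cat"; the "images" are just 2-D features.

training_data = [
    ([0.9, 0.8], 1),
    ([0.8, 0.9], 1),
    ([0.2, 0.1], 0),
    ([0.1, 0.3], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Training: nudge the weights whenever a prediction disagrees with its label.
for epoch in range(25):
    for features, label in training_data:
        error = label - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print(predict([0.85, 0.85]))  # 1 -- resembles the labeled "cat" examples
print(predict([0.15, 0.20]))  # 0 -- resembles the labeled "not cat" examples
```

Real systems swap the toy perceptron for deep networks and the four examples for millions, but the dependence on human-provided labels stays the same.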

As amazing as this technique is, supervised learning is constrained in where it can be applied. “There is a limit to what you can apply supervised learning to today due to the fact that you need a lot of labeled data,” says Dr. Yann LeCun, an NYU professor, Chief AI Scientist at Facebook, and one of the recipients of the 2018 Turing Award.

Other Methods to Teach AI

With the success of applications like self-driving cars and language translation, supervised learning has eclipsed many other AI teaching methods. Several of them do not heavily rely on human supervision. And as supervised learning’s limitations become more apparent, these methods are seeing a resurgence in popularity.

“There’s self-supervised and other related ideas, like reconstructing the input after forcing the model to a compact representation, predicting the future of a video or masking part of the input and trying to reconstruct it,” explains Dr. Samy Bengio, a Google research scientist and brother of Dr. Yoshua Bengio, one of the other recipients of the 2018 Turing Award.
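To get a feel for the masking idea Bengio mentions, here’s a toy sketch in Python. The data and the model are invented for illustration; the point is that the training signal comes from the data itself, not from human labels.

```python
# A toy version of "mask part of the input and reconstruct it": the last half
# of each vector is hidden, and a linear model learns to predict it from the
# visible half. Data and model are invented for illustration (numpy only).

import numpy as np

rng = np.random.default_rng(0)

# Unlabeled inputs whose two halves are correlated, so the visible half
# genuinely carries information about the masked half.
latent = rng.normal(size=(1000, 4))
data = np.hstack([latent, latent + 0.1 * rng.normal(size=(1000, 4))])

visible, masked = data[:, :4], data[:, 4:]   # "mask" the last four dimensions

W = np.zeros((4, 4))          # a linear reconstruction model
learning_rate = 0.05
for step in range(500):
    prediction = visible @ W
    gradient = visible.T @ (prediction - masked) / len(data)
    W -= learning_rate * gradient

# No labels were used anywhere: the training signal came from the data itself.
print("reconstruction error:", np.mean((visible @ W - masked) ** 2))
```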

Reinforcement learning, which doesn’t rely on labeled training data and needs only limited supervision, is also a prime contender. Pioneered by Dr. Richard Sutton of the University of Alberta, this technique is based on the reward-driven learning that our own brains employ.

Reinforcement learning is akin to teaching a mouse to pull a lever in order to get a pellet of food. This simple yet sophisticated strategy teaches computers to take action. Basically, a reinforcement learning system will work towards an objective through trial and error until it consistently achieves that goal.
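Here’s a deliberately tiny version of that loop in Python, with two “levers” standing in for the mouse’s lever and a made-up payoff for each. Nothing about it reflects a real system; it just shows reward-driven trial and error in its simplest form.

```python
# A toy reward-driven learner: two "levers", one of which pays off far more
# often. The agent learns which one to pull purely from trial and error.
# All numbers here are invented for illustration.

import random

def pull(lever):
    # Lever 1 dispenses a pellet (reward 1) 80% of the time, lever 0 only 20%.
    return 1 if random.random() < (0.8 if lever == 1 else 0.2) else 0

value_estimates = [0.0, 0.0]   # the agent's running estimate of each lever's payoff
pull_counts = [0, 0]
epsilon = 0.1                  # how often the agent explores at random

for step in range(1000):
    if random.random() < epsilon:
        lever = random.randrange(2)                            # explore
    else:
        lever = value_estimates.index(max(value_estimates))    # exploit
    reward = pull(lever)
    pull_counts[lever] += 1
    # Nudge the estimate for this lever toward the reward just received.
    value_estimates[lever] += (reward - value_estimates[lever]) / pull_counts[lever]

print("estimated payoffs:", value_estimates)   # roughly [0.2, 0.8]
print("pulls per lever:", pull_counts)         # the better lever gets most of the pulls
```

Scaled up, the same trial-and-error principle drives systems that learn to play games and control robots.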

Different Paths to Predictive Learning

To encapsulate AI’s ideal future, Sutton prefers the term “predictive learning.” Essentially, this means that AI systems will not only be able to recognize patterns but also predict outcomes and even choose an appropriate course of action. Pretty much everyone agrees that predictive learning is a necessary next step for AI’s advancement. But how we’ll get there is still up for debate.

“Some people think we get there with extensions of supervised learning ideas; others think we get there with extensions of reinforcement learning ideas,” says Dr. Sutton.

Pieter Abbeel is Director of the Berkeley Robot Learning Lab, about half an hour’s drive from San Francisco. He’s pitting reinforcement learning systems against one another in a tactic known as self-play. He explains the logic behind this process: “By playing against your own level or against yourself, you can see what variations help and gradually build up skill.”
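As a rough illustration of that logic (and not Abbeel’s actual setup), here’s a toy sketch in Python in which two copies of the same simple learner play rock-paper-scissors against each other, each adapting to the other’s habits.

```python
# A toy illustration of self-play: two copies of the same simple learner play
# rock-paper-scissors and keep adapting to each other's habits. The game and
# the learning rule are chosen purely for illustration.

import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def adaptive_move(opponent_history):
    # Play whatever beats the opponent's most common move so far.
    if not opponent_history:
        return random.choice(MOVES)
    favourite = Counter(opponent_history).most_common(1)[0][0]
    return next(move for move in MOVES if BEATS[move] == favourite)

history_a, history_b = [], []
for game in range(3000):
    move_a = adaptive_move(history_b)   # A adapts to B's past play
    move_b = adaptive_move(history_a)   # B adapts to A's past play
    history_a.append(move_a)
    history_b.append(move_b)

# Neither copy can settle into a fixed habit: any habit gets punished by the
# other copy, so each player's strategy keeps being forced to change.
print("A's move counts:", Counter(history_a))
print("B's move counts:", Counter(history_b))
```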

While reinforcement learning undoubtedly has vast potential, Dr. LeCun is betting on another method: self-supervised learning. In this process, AI systems ingest gargantuan amounts of unlabeled data and try to make sense of it without any external supervision or reward. Currently, LeCun is working on models that learn from observation. By accumulating enough information, they could form a type of common sense.

To understand how this would work, LeCun asks you to imagine giving a machine the task of predicting what happens next in a video clip. Accomplishing such a feat would require the system to form a representation of the data it’s ingesting. It would also need to differentiate between inanimate and animate objects; the former have predictable trajectories, while the latter don’t.
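Here’s a drastically simplified sketch of that thought experiment in Python. The “video” is reduced to a sequence of positions for an object moving at constant speed, and the numbers are made up; the point is that the system’s only teacher is the next frame itself.

```python
# A drastically simplified take on "predict what happens next": the "video"
# is just a sequence of 1-D positions for an object moving at constant speed,
# and a model learns to predict the next position from the two before it.
# Everything here is invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Observations: x(t) = 3.0 + 0.5 * t, plus a little noise.
t = np.arange(200)
positions = 3.0 + 0.5 * t + 0.05 * rng.normal(size=t.size)

# Self-supervision: the "label" for each step is simply the next observation.
inputs = np.stack([positions[:-2], positions[1:-1]], axis=1)   # two past positions
targets = positions[2:]                                        # the position that follows

# Fit a linear predictor by least squares: next ~ a * x(t-1) + b * x(t) + c.
design = np.hstack([inputs, np.ones((len(inputs), 1))])
coefficients, *_ = np.linalg.lstsq(design, targets, rcond=None)

last_two = np.array([positions[-2], positions[-1], 1.0])
print("predicted next position:", last_two @ coefficients)   # close to 3.0 + 0.5 * 200
```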

Once you make a self-supervised AI watch millions of YouTube videos, for example, it could then draw on the representation it has built from this data; in effect, it teaches itself. Dr. Cox is working on a similar endeavor but is incorporating more traditional components of AI. He calls this concoction “neuro-symbolic AI” and hopes it can acquire common sense comparable to a human’s.

Is General Intelligence Only a Matter of Time?

At the moment, AI-powered robotics is confined to narrowly defined environments with little to no variation. By building more robust, general algorithms, we could perhaps give robots the ability to venture out into the real world and do real things. That, in simplified form, is how Dr. Sergey Levine sees AI’s future.

Levine is an assistant professor at Berkeley and runs its Robotic AI & Learning Lab. He’s using self-supervised learning to let robots explore their environments and naturally build up base knowledge that they can use in new settings. “They just play with their environment and learn,” Levine explains. “The robot essentially imagines something that might happen and then tries to figure out how to make that happen.”

Dr. Abbeel believes that advancements in AI are likely to be made by combining these different methodologies. Will this eventually lead to human-level intelligence in these machines? We’re not sure. But Dr. LeCun remains optimistic: “Of course; there’s no question. It’s a matter of time.”
