AI Can Finally Explain Its Thought Process (Sort Of)

September 20, 2018 - 3-minute read

Artificial intelligence (AI) that can comprehend and explain its own thought process is a “holy grail” in the field, and one that the U.S. Defense Advanced Research Projects Agency (DARPA) is dedicating $2 billion to over the next five years.

Now, a team of researchers at the Massachusetts Institute of Technology (MIT) says it has accomplished this ambitious feat.

AI Has Some Explaining to Do

Neural networks are a powerful and versatile form of AI; they can be trained to write, craft art, compose music, and even drive a car. But they have one big problem: they’re complex and opaque, which makes it hard for AI developers to understand how and why a given network makes the decisions it does.

The team of Boston-based MIT researchers has created a neural network that can explain its solution to a problem, step by step. Such an advance would not only accelerate the creation of more sophisticated AI but also enable safer applications in riskier settings, such as autonomous cars navigating traffic.

Transparency in Vision

The approach rests on a new architecture called the Transparency by Design network (TbD-net). Essentially, TbD-net breaks image recognition, a common AI application, down into a chain of subtasks. At each subtask, the network highlights the parts of the image that were instrumental in its decision-making.

For example, suppose you want an AI that can identify large metal spheres in a room full of objects. With TbD-net, the AI would first highlight all the large objects. Then it would narrow its attention to large metal objects. Finally, it would zero in on large metal spheres.
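To make that step-by-step filtering concrete, here is a minimal sketch of the idea in Python. It is not the actual TbD-net implementation, which learns attention heat maps over image pixels with neural modules; the toy scene, the attribute names, and the filter_by helper below are invented purely for illustration.

```python
# A toy sketch of TbD-net-style stepwise filtering (illustrative only;
# the real TbD-net operates on learned image attention maps, not on
# symbolic objects like these).

from typing import Callable

# A toy "scene": each object is a dict of attributes.
scene = [
    {"id": 1, "size": "large", "material": "metal",  "shape": "sphere"},
    {"id": 2, "size": "small", "material": "metal",  "shape": "cube"},
    {"id": 3, "size": "large", "material": "rubber", "shape": "sphere"},
    {"id": 4, "size": "large", "material": "metal",  "shape": "cylinder"},
]

def filter_by(attribute: str, value: str) -> Callable[[list], list]:
    """Return a module that keeps only objects matching one attribute."""
    def module(attended: list) -> list:
        return [obj for obj in attended if obj[attribute] == value]
    return module

# The query "large metal spheres" becomes a chain of three modules.
program = [
    ("filter large",  filter_by("size", "large")),
    ("filter metal",  filter_by("material", "metal")),
    ("filter sphere", filter_by("shape", "sphere")),
]

# Run the chain, showing which objects remain "attended" after each step.
attended = scene
for name, module in program:
    attended = module(attended)
    print(f"{name}: objects {[obj['id'] for obj in attended]}")
```

Running the sketch prints the IDs of the objects that survive each filtering step, a symbolic stand-in for the attention heat maps TbD-net produces at each stage.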

In this way, researchers get a play-by-play breakdown of decision-making that usually happens under the hood.

Opening the Black Box

Imagine if TbD-net could be expanded to applications beyond image recognition. The results would be profound and reassuring. As MIT professor Tommi Jaakkola explains, “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

Automating tasks with AI will bring an unparalleled level of efficiency. But giving AI the ability to explain itself will open up new perspectives and insights, accelerating further advances on both a qualitative and a quantitative level.
