Is AI Really So Sure of Itself?

January 17, 2018 - 3 minutes read

Artificial intelligence (AI) isn’t refined enough yet to become a seamless part of our society. The human brain is incredibly complex — couple it with emotions, hormones, and external influences, and creating a human-like AI becomes a very difficult task.

Researchers at Google and Uber are working on adding a signature, essential human trait to AI: a sense of uncertainty. The approach, made possible through deep learning, can produce more realistic probabilities for different outcomes. In turn, this should allow an AI to measure and explain how confident it is in a prediction or decision.

Reflecting an Uncertain World

This newest update to AI could be a major benefit for self-driving car technology. “You would like a system that gives you a measure of how certain it is,” says Dustin Tran, a Google employee who works on solving this problem. He continues, “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

Adding uncertainty into AI programs could make them smarter and less prone to blunders, according to prominent AI researcher Zoubin Ghahramani, a professor at the University of Cambridge and Uber’s chief scientist. Uber has developed its own tool for this, a programming language called Pyro, that merges deep learning with probabilistic programming.
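To see the core idea in miniature, consider a model that reports a distribution of answers rather than a single answer. The sketch below is plain Python, not Pyro — the function names and numbers are invented for illustration — but it shows the same principle: by sampling the model’s uncertain weight many times, the spread of the predictions becomes a measure of confidence.

```python
import random
import statistics

# Illustrative sketch only (plain Python, not Pyro): a toy classifier
# whose weight is uncertain (drawn from a normal distribution). Running
# the prediction many times with sampled weights yields a predictive
# mean plus a spread -- the spread is the model's self-reported
# uncertainty.

def noisy_predict(x, weight_mean=0.8, weight_sd=0.3, n_samples=1000, seed=0):
    """Return (mean prediction, std dev) across sampled weights."""
    rng = random.Random(seed)
    preds = [1.0 if rng.gauss(weight_mean, weight_sd) * x > 0.5 else 0.0
             for _ in range(n_samples)]
    return statistics.fmean(preds), statistics.pstdev(preds)

# Far from the decision threshold: confident (mean near 1, small spread).
confident = noisy_predict(2.0)
# Near the threshold: uncertain (mean near 0.5, large spread).
uncertain = noisy_predict(0.65)
```

In a probabilistic programming language like Pyro, this pattern — declare random variables, then infer their distributions — is expressed directly instead of being hand-rolled.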

This Is a Collaboration, Not a Competition

Although Tran works at Google, he’s contributed to Uber’s software too. Right now, it’s not about competing; the AI development ecosystem can only grow through open-source, collaborative contributions. San Francisco AI development companies aren’t stopping their employees from advancing the field by giving a few ideas away.

An important, very useful feature of Pyro is that you can build a system that’s pre-programmed with knowledge. Another probabilistic programming language gaining attention is Edward, which was developed at Columbia University with DARPA funding. Even though incorporating uncertainty into AI has yet to blossom, interest across the tech industry is running high.
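“Pre-programmed knowledge” in this context means a prior: a belief the system holds before seeing any data, which observations then refine. The toy sketch below — again plain Python rather than Pyro or Edward, with invented names — encodes a prior belief that a coin is fair and updates it as flips come in.

```python
# Toy sketch (plain Python, not Pyro/Edward): prior knowledge encoded as
# a Beta(alpha, beta) belief about a coin's probability of heads,
# refined by observed flips via conjugate Bayesian updating.

def update_belief(alpha, beta, flips):
    """Update a Beta(alpha, beta) prior with flips (1 = heads, 0 = tails)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Expected probability of heads under the current belief."""
    return alpha / (alpha + beta)

# Prior: the coin is probably fair -- centered on 0.5, reasonably firm.
prior = (10, 10)
data = [1, 1, 1, 0, 1, 1]  # mostly heads
post = update_belief(*prior, data)
```

Because the prior is firm, six mostly-heads flips nudge the belief only slightly above 0.5 — the system weighs its built-in knowledge against the evidence instead of overreacting to a small sample.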

The Holy Grail of AI

Tran’s advisor, David Blei, a Columbia University professor of statistics and computer science, thinks the combination of probabilistic programming and deep learning is promising. But, he says, “there are many, many technical challenges.”

Indeed, developing AI is already challenging. Building and testing a system is a barrier in itself, and it can take years to develop an AI, let alone one that resembles a human-like digital brain. It’s no wonder this strikes many in the AI industry as an outlandish proposition.

But the benefits of doing so are what keep AI developers going. Whoever achieves this feat could ultimately become the top player in AI. Of course, with the immense uncertainty, there’s no telling when it will happen or who will accomplish it. If only there were an AI to predict that…
