What Are Google’s AI and Machine Learning Research Priorities?

January 21, 2019 - 7 minutes read

Interested in seeing what AI can do for your business? Read our Free Machine Learning Whitepaper right now!

2018 was a huge year for artificial intelligence (AI) development. Innovations in AI and its subsets like machine learning (ML) and deep learning grew by leaps and bounds. For anyone involved in tech, the impact was simply too big to ignore.

Jeff Dean, leader of Google’s AI division, recently released a blog post detailing the tech titan’s AI efforts this past year. It runs the gamut from ethics to quantum computing to computational photography.

In this article, we’ll take a look at some of our favorite topics mentioned in Dean’s post.

Evolving Machine Learning Automation

One of the most exciting areas of AI research is AutoML, the concept of automating parts of machine learning with machine learning itself! Google has been active in this space for several years. As Dean mentions, the company’s long-term objective is the development of machine learning systems that can automatically solve a new problem by drawing on insights and capabilities from other, previously solved problems.

A substantial portion of Google’s efforts in AutoML has been focused on reinforcement learning, specifically in relation to neural network architecture search. In 2018, the company expanded on reinforcement learning’s abilities in this arena to show that it could also be used to improve the accuracy of a plethora of different image models and optimize other aspects of training.
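At its core, architecture search is a search over candidate network configurations. Here’s a toy random-search sketch in plain Python to make the idea concrete — the search space, the scoring function, and all names here are hypothetical; Google’s actual systems use a reinforcement-learning controller rather than random sampling, and the “score” comes from training each candidate network:

```python
import random

# Toy neural-architecture-search sketch: sample candidate architectures
# from a small search space and keep the best-scoring one.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    # One candidate = one choice from each dimension of the search space.
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def score(arch):
    # Hypothetical proxy score; in a real system this would be the
    # validation accuracy obtained by actually training the candidate.
    return arch["num_layers"] * arch["width"] / 100.0

def search(trials, seed=0):
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)), key=score)

best = search(trials=30)
```

A reinforcement-learning controller improves on this by learning *which* choices tend to produce high scores, so later samples concentrate on promising regions of the space instead of wandering at random.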

Besides reinforcement learning, Google also used 2018 to focus on its use of evolutionary algorithms to automatically find new neural network architectures that could be applied to a variety of visual tasks. All of this work in both fields came to a head in late October when Google unveiled AdaNet, a lightweight TensorFlow-based framework that utilizes ensemble learning (the practice of combining different ML model predictions) to optimize architectures for learning guarantees.
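The ensemble idea behind AdaNet is easy to illustrate. Here’s a minimal sketch in plain Python — the three stand-in “models” are hypothetical and this is not AdaNet’s actual API; AdaNet learns which subnetworks to combine and how to weight them, with formal learning guarantees, whereas this sketch simply averages fixed predictors:

```python
# Minimal ensemble-learning sketch: combine predictions from several
# (hypothetical) trained models by uniform averaging.

def model_a(x):
    return 0.2 * x          # stand-in for one trained model

def model_b(x):
    return 0.3 * x + 1.0    # stand-in for another

def model_c(x):
    return 0.25 * x + 0.5

def ensemble_predict(x, models):
    # Uniform average of the individual predictions; AdaNet instead
    # learns a weighted combination of candidate subnetworks.
    return sum(m(x) for m in models) / len(models)

models = [model_a, model_b, model_c]
prediction = ensemble_predict(10.0, models)  # averages 2.0, 4.0, and 3.0
```

Even this naive version captures why ensembles help: errors made by individual models tend to cancel out in the combination.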

Automating the act of finding “computationally efficient” neural network architectures that can operate in tightly-constrained environments like mobile phones or self-driving vehicles was another big priority for Google this past year. So it also explored compressing ML models so they require fewer parameters and less compute while preserving their accuracy.
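One common compression technique is magnitude pruning: zero out the smallest weights so the model has fewer effective parameters. The NumPy sketch below illustrates the idea only — it is not Google’s specific method, and the example weights are made up:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A toy stand-in for model compression: fewer effective parameters,
    with most of the model's behavior preserved because the largest
    weights (which dominate the output) are kept.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.01, -0.8], [0.5, -0.02]])
pruned = prune_by_magnitude(w, sparsity=0.5)
# The two smallest-magnitude weights (0.01 and -0.02) are zeroed;
# -0.8 and 0.5 survive.
```

In practice, pruned models are usually fine-tuned afterwards to recover any lost accuracy, and the resulting sparsity can be exploited for smaller, faster deployments on constrained hardware.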

Giving Robots a Novel Perspective

For a while now, one of Google’s main goals has been to understand how machine learning can help robots comprehend and act in the world. Here’s how the company itself describes this challenging concept: “Designing robots that can observe their surroundings and decide the best course of action, while reacting to unexpected outcomes, is exceptionally difficult.”

In 2018, Google made some serious progress toward this objective. Using deep learning, reinforcement learning, seven robotic arms, 1000 items of all shapes and sizes, and 800 running hours, the company was able to teach robots how to grasp novel objects. Further testing revealed an 86% success rate in grabbing new objects.

Google’s engineers then built off this work to advance how robots can learn about objects without manual supervision or huge amounts of training data. And by combining ML with sampling-based methodologies, the AI-first organization also advanced how robots learn about motion, in turn accelerating their planning capabilities.

Essentially, how autonomous robots perceive the world and their environment got a drastic revamp in 2018, thanks to Google. This will undoubtedly play an integral role in robotic research for years to come.

Quantum Computing = the Next Frontier for Neural Networks?

Quantum computing is a computing paradigm that harnesses quantum-mechanical phenomena like superposition and entanglement to process information. This promising approach could let us solve certain problems that no classical computer can handle in any feasible amount of time. Demonstrating such a computation is known as achieving quantum supremacy. It has not been achieved yet, but Google researchers believe the entire quantum computing field is on the verge of making it happen soon.

This past year, Google unveiled Bristlecone, a new 72-qubit quantum processor that the tech giant has been using to test applications in quantum machine learning. The San Francisco Bay Area-based company also released Cirq, an open-source quantum computer programming framework, which it then used to investigate how quantum computers could be applied to neural networks.

The company believes quantum computers acting as a computational substrate for neural networks could lead to unprecedented benefits. But we’ll have to wait until later in 2019 to hear more about this.

The Ethics of AI

There’s no doubt that AI can have a profound positive impact on the world. Unfortunately for Google, 2018 was quite a controversial year for its involvement with the technology. More than 3,000 of the company’s own workers signed a letter urging it to end its participation in a Department of Defense drone surveillance program known as “Project Maven.”

So it makes sense that AI ethics was a topic the company couldn’t avoid. As an indirect response, Google published the Google AI Principles and AI Practices. Together, these two documents form the framework by which the company evaluates its own AI developments.

The company hopes that other organizations will find these documents helpful but is quick to note that the principles and practices will continue to evolve along with the technology.

What’s in Store for 2019?

You can learn more about Google’s AI efforts in 2018 here. What do you think the tech titan has planned for 2019? What AI endeavors are you most excited about for this year? Let us know in the comments!
