Why Machine Learning Isn’t Mainstream Yet

January 20, 2020 - 8 minutes read


Thanks to advances in computational power, new algorithms, and better-labeled data, machine learning applications have flourished in recent years. From customer service chatbots to content recommendations, it now feels like this technology is everywhere.

Unfortunately, much of machine learning’s potential is still left on the table. Technical constraints and complicated barriers stand in the way of this artificial intelligence (AI) subset becoming mainstream. Once these issues are resolved, more organizations and consumers than ever before will be able to practically leverage machine learning for their own benefit.

The Costs Still Outweigh the Benefits for Many

While recent research has undeniably enabled broader application of machine learning, costly computational requirements still keep this technology from entering the mainstream. Many experts have pinned their hopes of solving this problem on emerging algorithms that increase the efficiency of neural networks.

Inspired by and loosely based on how the human brain works, neural networks have been a focal point of computer science research for the last decade. But outside of highly specific use cases, human brains still hold their own and often outperform machine learning applications.


Currently, the human brain has one big advantage over today’s AI systems: It makes much more efficient use of its limited computational power than machine learning systems do. Computers excel at information storage and processing; they can perform thousands or even millions of calculations in a single second, something no human could match. But humans are much more energy-efficient than computers at processing information; in fact, they’re 100,000 times better.
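To make that comparison concrete, the quantity being compared is energy per operation: power draw divided by throughput. The figures in the sketch below are rough placeholders rather than measurements, and the 100,000x ratio is the claim above rather than something derived here.

```python
# Toy illustration of the metric behind the efficiency claim: energy per operation,
# i.e. power draw (joules per second) divided by operations per second.
# The accelerator figures below are rough, assumed placeholders, not measurements.

def energy_per_op(power_watts: float, ops_per_second: float) -> float:
    """Joules spent per operation."""
    return power_watts / ops_per_second

# Hypothetical accelerator: roughly 300 W of board power, ~1e14 operations per second.
accelerator_j_per_op = energy_per_op(power_watts=300, ops_per_second=1e14)

# Under the claim above, the brain (running on roughly 20 W) spends about
# 100,000 times less energy per equivalent unit of useful work.
brain_j_per_op_claimed = accelerator_j_per_op / 100_000

print(f"accelerator: {accelerator_j_per_op:.1e} J/op")
print(f"brain (per the claim above): {brain_j_per_op_claimed:.1e} J/op")
```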

Since they aren’t constrained by physical space, computers obviously dwarf humans in terms of sheer processing power. And they’re only getting better with time. But even though computing power costs are decreasing, machine learning remains an expensive endeavor.

Most individuals, businesses, and researchers need the help of third-party services to implement or experiment with machine learning. Even chatbots, which now seem ubiquitous, can cost anywhere from $2,000 to $10,000, depending on complexity. To overcome this, researchers have been investigating techniques to decrease the cost and time required to get a machine learning or deep learning application up and running.

Striving to Democratize AI

Both hardware and software play integral roles in machine learning. Consequently, researchers are working on boosting algorithm efficiency and designing better hardware. The latter is not only labor-intensive but also time-consuming. To address both problems, researchers have turned to design automation solutions.


These days, Neural Architecture Search (NAS) is often employed to automate the design of artificial neural networks (ANNs). With NAS, machine learning developers can produce networks that match or even outperform hand-designed counterparts. It’s considered a step toward automating machine learning, but it remains computationally costly.
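In its simplest form, NAS is a search loop: sample a candidate architecture from a predefined search space, estimate how accurate and how fast it would be, and keep the best-scoring candidate. The sketch below is a deliberately tiny, dependency-free illustration of that loop, using plain random search, made-up accuracy and latency estimators, and an assumed latency budget; it is not any particular published NAS method.

```python
import random

# A minimal sketch of a NAS loop with a hardware-latency penalty. The "estimators"
# below are stand-in toy functions, not real training or on-device profiling.

SEARCH_SPACE = {
    "num_layers": [4, 8, 12],
    "channels": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture(rng: random.Random) -> dict:
    """Pick one option per design choice to form a candidate network."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def estimated_accuracy(arch: dict) -> float:
    """Stand-in for training and validating the candidate (bigger nets score higher here)."""
    return 0.6 + 0.02 * arch["num_layers"] + 0.001 * arch["channels"]

def estimated_latency_ms(arch: dict) -> float:
    """Stand-in for measuring inference latency on the target device (e.g. a phone)."""
    return 0.5 * arch["num_layers"] * arch["channels"] * arch["kernel_size"] / 100

def score(arch: dict, latency_budget_ms: float = 10.0, penalty: float = 0.05) -> float:
    """Reward accuracy, but penalize candidates that exceed the latency budget."""
    over_budget = max(0.0, estimated_latency_ms(arch) - latency_budget_ms)
    return estimated_accuracy(arch) - penalty * over_budget

def random_search(num_trials: int = 200, seed: int = 0) -> dict:
    """Sample candidates at random and return the best-scoring one."""
    rng = random.Random(seed)
    candidates = (sample_architecture(rng) for _ in range(num_trials))
    return max(candidates, key=score)

if __name__ == "__main__":
    best = random_search()
    print("best architecture found:", best)
    print("estimated latency (ms):", estimated_latency_ms(best))
```

Real NAS systems replace the toy estimators with actual training runs and hardware measurements, which is exactly where the computational cost comes from.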

To address this, researchers at MIT have created a much more efficient NAS algorithm that can learn convolutional neural networks (CNNs) for particular hardware platforms. The research group accomplished this by “deleting unnecessary neural network design components” and focusing on specific hardware such as mobile devices. So far, test results show that these neural networks are nearly twice as fast as traditional counterparts.

Song Han, an assistant professor affiliated with MIT’s Microsystems Technology Laboratories and co-author of the paper describing this research, says that democratizing AI is the main goal. “The aim is to offload the repetitive and tedious work that comes with designing and refining neural network architectures,” he explains. “We want to enable both AI experts and nonexperts to efficiently design neural network architectures with a push-button solution that runs fast on specific hardware.”


On the hardware side, University of British Columbia researchers have demonstrated that Field-Programmable Gate Arrays (FPGAs), which are configurable integrated circuits, offer a way to implement faster, more energy-efficient machine learning applications. As a result, FPGAs can make machine learning less time-consuming, more affordable, and more accessible.

When used in tandem with high-level synthesis (HLS), an automated design process for creating digital hardware that emulates algorithmic behavior, FPGAs enable a form of automatic hardware design, allowing for faster machine learning implementation.
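As an illustrative aside (the research above doesn’t spell this step out), one reason FPGAs can run networks so efficiently is that trained models are commonly quantized from floating-point to low-bit fixed-point values before being mapped to hardware, because fixed-point arithmetic is cheap to implement on an FPGA. A minimal sketch of that conversion:

```python
# Illustrative aside: quantize floating-point weights to low-bit fixed-point values,
# which map naturally onto the integer arithmetic FPGAs implement cheaply.

def quantize_to_fixed_point(weights, total_bits=8, frac_bits=6):
    """Round each weight to the nearest representable fixed-point value and clamp
    it to the range allowed by the given bit widths."""
    scale = 1 << frac_bits                      # e.g. 2**6 = 64 steps per unit
    q_min = -(1 << (total_bits - 1))            # most negative integer code
    q_max = (1 << (total_bits - 1)) - 1         # most positive integer code
    quantized = []
    for w in weights:
        code = max(q_min, min(q_max, round(w * scale)))
        quantized.append(code / scale)          # back to the real value the hardware represents
    return quantized

weights = [0.7312, -0.0419, 1.25, -2.6]
print(quantize_to_fixed_point(weights))         # [0.734375, -0.046875, 1.25, -2.0] (last value clamped)
```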

Making Machine Learning More Human

All of these potential avenues to streamline machine learning applications will ultimately lead to a shift in how the technology is used. For the most part, the automation tools we have today are isolated. For instance, website chatbots don’t usually interact with customer service employees unless certain conditions are satisfied; they rigidly adhere to their programming until someone changes it.

In the future, machine learning expert Robert Aschenbrenner predicts things will be quite different. “Rather than determining a process that we want to automate, a machine learning agent will observe the way we work, collecting and mining historical data to determine where opportunities for automation lie,” Aschenbrenner explains. “The AI tool will then hypothesize a solution in the form of an automated process change and simulate how those changes will improve productivity or lead to better business outcomes.”


But to get to the point where algorithms learn like humans or animals do, there’s still plenty of work to be done. Aschenbrenner says that humans still beat machines in five key areas: rapid learning, reasoning and memory, vision, explainable models, and unsupervised/reinforcement learning. As for why? Well, humans are naturally good at connecting seemingly disparate dots of information.

Better AI Is Inevitable

AI’s practical use cases are certainly proliferating. But these horizons won’t be broadened much more until we solve the hardware and software issues at hand and teach machines how to “connect the dots.”

The good news is that researchers and experts in the field have quite a number of promising avenues to achieve these feats. So it’s safe to say that it’s not really a question of if, but when AI will become more readily available.


When that day comes, nearly every consumer and major industry will stand to benefit.
