Can We Avoid Flaws in Machine Learning?

July 12, 2018 - 4 minute read

As artificial intelligence (AI) continues to evolve, it will inevitably need tweaks and patches along the way. Unfortunately, as several tech titans recently discovered, their AI systems have some serious problems revolving around gender and race.

Bias Is Moving Us Backwards

Research shows that language-processing AI can exhibit sexism, and facial recognition is notoriously bad at identifying people of color. This has the likes of Mozilla, Accenture, Microsoft, IBM, Google, and even the U.S. government scrambling for a solution. These organizations are determined to fix these issues before they balloon into bigger problems.

Recently, Congress said AI bias must be a top priority for tech companies that are developing machine learning (ML) and AI algorithms. The House Committee on Science, Space, and Technology even explicitly asked Google and OpenAI whether the government should regulate AI. “We need to grapple with issues regarding the data that are being used to educate machines. Biased data will lead to biased results from seemingly objective machines,” Dan Lipinski, a Democratic representative from Illinois, accurately points out.
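Lipinski’s point is easy to demonstrate in code. Below is a minimal sketch (the data is synthetic and purely illustrative, not drawn from any real system) of how a model trained on historically biased labels quietly learns to penalize a demographic group:

```python
# Purely illustrative: a toy model absorbing bias from its training data.
# All numbers are synthetic; no real dataset or product is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)             # the feature that *should* matter
group = rng.integers(0, 2, size=n)     # a 0/1 demographic flag
# Historically biased labels: group 1 was hired less often at equal skill.
hired = skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))  # nonzero: bias was learned
```

The model is never told anything about demographics, yet the second weight comes out strongly negative: exactly the “biased results from seemingly objective machines” Lipinski describes.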

Taking Initiative to Rectify the Situation

Google just published its ethics principles, which emphasize double-checking AI for bias. The company also built a website that ML developers can reference to reduce bias in their code.

Mozilla believes in showing people how a bad AI could affect them; the company has dedicated $225,000 for artists to showcase how dangerous AI could become if bias is programmed into it. “We really want to take this abstract looming sense of fear, and help people get their heads around them. Unless you can imagine what [the danger] is, people can’t be asked to take action. Artists often play a very critical role that’s surprising,” says Mozilla’s executive director, Mark Surman.

Redmond-based Microsoft says its facial recognition technology now works “up to 20 times” better on people of color and women. IBM also released a statement about improved facial recognition accuracy for people of color; the company used more than 36,000 images from Flickr Creative Commons to optimize its algorithm and open-sourced the dataset so developers can contribute to its ongoing improvement. Both Microsoft and IBM were implicated in the same study on poor facial recognition accuracy.
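That study compared error rates across demographic subgroups, and this style of audit is simple to sketch. The subgroups and results below are fabricated for illustration; they are not the study’s actual numbers:

```python
# Illustrative only: measuring accuracy separately per demographic subgroup,
# the style of audit that exposed the gaps Microsoft and IBM later closed.
from collections import defaultdict

results = [  # (prediction_correct, subgroup) -- fabricated sample data
    (True, "lighter-skinned men"), (True, "lighter-skinned men"),
    (True, "darker-skinned women"), (False, "darker-skinned women"),
]

totals, correct = defaultdict(int), defaultdict(int)
for ok, subgroup in results:
    totals[subgroup] += 1
    correct[subgroup] += ok

for subgroup, seen in totals.items():
    print(f"{subgroup}: {correct[subgroup] / seen:.0%} accurate")
```

An aggregate accuracy number hides these gaps; only a per-subgroup breakdown makes them visible.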

Accenture rolled out a service that helps eliminate bias in the datasets used for machine learning. The tool identifies relationships between age, gender, race, and other demographic attributes within a dataset. Rumman Chowdhury, Accenture’s global head of responsible AI, says the product will become important as AI grows more regulated.
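Accenture hasn’t published the tool’s internals, but one common check of this kind compares positive-outcome rates across groups. Here’s a minimal sketch, assuming a simple pandas DataFrame; the column names and the 0.8 cutoff (the conventional “four-fifths rule”) are illustrative assumptions, not details of Accenture’s product:

```python
# A toy disparate-impact check: are positive outcomes spread evenly across groups?
# Data and column names are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "M"],
    "approved": [0, 1, 0, 1, 1, 1, 1, 0],
})

rates = df.groupby("gender")["approved"].mean()  # approval rate per group
ratio = rates.min() / rates.max()
print(rates.to_string())
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("warning: outcomes differ sharply across groups")
```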

Can We Ever Get It Right?

Given facial recognition’s glaring accuracy problems, the CEO of one facial recognition startup says law enforcement can’t rely on the technology quite yet. But considering that new AI conundrums keep being discovered, when will it actually be okay for law enforcement (or governments) to use AI-powered facial recognition? And where do we draw the line on the accuracy of these algorithms?

Are the tech giants getting ahead of themselves with advanced AI when we still haven’t settled questions of regulation or enforcement? Yes, AI and ML will never be truly “perfect.” But how close can we come, and what level of “almost perfect” will we settle for?
