Is It Time to Regulate AI-Fueled Facial Recognition?

September 6, 2018 - 10 minute read

Facial recognition is booming; it’s already used in airports and police stations, on China’s streets, and as training data for new artificial intelligence (AI) algorithms. As AI developers continue applying the technology to facial recognition, it’s becoming readily apparent that the potential for abuse is growing as well.

So why isn’t there any regulation in place? Bureaucracy is infamous for being slow. But governments can no longer afford to take their time with emerging and disruptive technologies.

Official Regulations Overdue

Microsoft is one of the most prominent tech companies in the world, and the Redmond-based developer recently called on Congress to immediately begin regulating how and where facial recognition can be used. Adding AI to facial recognition only makes the software more capable and more accurate. And if regulations aren’t put in place soon, there will be some serious backtracking to do.

Alvaro Bedoya is the executive director of Georgetown Law’s Center on Privacy and Technology. The Center has been hard at work drafting and maintaining a model bill for facial recognition regulation, focused on preventing law enforcement from abusing access to driver’s license photos and mugshots.

A Call for Immediate Regulation

Naturally, Bedoya is in favor of introducing regulation immediately. He says, “In Illinois and Texas, the rule is to get permission. So there’s precedent for it both in how we regulated commercial privacy in the past and how we already regulate face recognition today. And I think the simple rule of getting people’s permission makes a ton of sense.”

Brian Brackeen, the CEO of Kairos, a facial recognition company, agrees wholeheartedly. He argues that the government needs to just start. “What we need is the [National Telecommunications and Information Administration] process with an anvil over our head. We need to say, ‘We’re doing regulation in the fourth quarter. This is the first quarter, so you have a year to give us best practices.’ So let’s get people in a room working through these issues with that requirement over our heads. I do think there is a will for it from both sides, Republicans and Democrats.”

Normalizing Facial Recognition Carries Risks

Evan Selinger, a Rochester Institute of Technology philosophy professor, believes there’s danger in normalizing facial recognition technology. In other words, getting consumers used to a certain technology makes it easier for tech companies to abuse consumer data or facial recognition algorithms in the future. Consumers, already used to the technology, will barely bat an eye when the news hits them. This already happened, in a way, with Facebook and its data leak scandal earlier in 2018.

Selinger is fearful that companies will “engineer the desire” for facial recognition in consumers. Then, he argues, companies will “create habits that lead people to believe they can’t live without facial recognition tech in their lives. This is what the consumer side of facial recognition technology is doing: making it seem banal and unworthy of concern. By getting people to see facial recognition technology as nothing extraordinary, an argument about value and risk is being made.”

The Other Side of the Argument

Benji Hutchinson, VP of federal operations at NEC America, doesn’t think that the government should get too involved in regulating facial recognition. He explains, “We do not believe in a complete moratorium on the technology, and we do not believe that there is a burning need for over-legislation.

“… This is a wildly successful technology that’s been used to stop terror attacks. It’s been used to take criminals off the street. It lets us have paperless, frictionless travel when we’re going through airports. It decreases lines and wait times. It makes people’s lives better. And I think those benefits get lost in all the negativity.”

Should Law Enforcement Use Facial Recognition?

Kade Crockford is the director of the Technology for Liberty Program at the ACLU of Massachusetts. She and the ACLU believe that governments shouldn’t use facial recognition at all. Crockford argues, “Our core concern is that policing in the United States today functions without effective oversight or accountability. There’s a real deficit of trust.

“And in that ecosystem, it’s really hard to see how any legal requirement could be applied in a way that would truly protect people. … We just don’t have the civil society or governmental infrastructure to ensure that law enforcement would not abuse that.”

Could Facial Recognition Bolster Human Biases?

Brackeen thinks law enforcement and the government already have a major advantage over other users of facial recognition: they’ve got massive databases of driver’s license photos and mugshots. Unfortunately, the police have already proven themselves to be biased through their actions. Giving officers even more power could quickly lead to all sorts of data abuse.

Crockford agrees, arguing that police have already trained themselves to disproportionately look for troublemakers among people of color. “We see disproportionate arrests of black and brown people in almost every category of minor offense… The bias in marijuana arrests is just astonishing. Even today, 90% of the people arrested for marijuana offenses in New York are black or brown.

“And that’s not because white people don’t smoke pot in public in New York City. It’s just because white people are almost never arrested for that crime. So that is a bias, and by using that database and pretending it’s a neutral technology, it codifies that bias.”

Who’s to Blame for Inherent Biases?

Bedoya is interested in the hard data surrounding bias and accuracy rates. Minors under 18 should be protected, he insists, and private places like hospitals, churches, and schools should be off-limits too. Bedoya also points out facial recognition’s biggest failure: correctly identifying black and brown people.

Brackeen adds that this is “a data problem, not a problem with the technology. We’re updating our algorithms ourselves, and we can remove bias, at least up to a point. I think the current outrage is: we’re a small company, but we’re doing the work necessary to be better. A company like Amazon has always had the resources to do better. But, in fact, they’re not, and they’re selling to the government.”

Correcting AI’s Perspective

Hutchinson’s day-to-day work involves building diversity into NEC America’s algorithms. He explains, “We spend millions of dollars a year looking for error rates that occur with different types of faces. … We don’t publish a lot of the results, but it is absolutely in our best interest to ensure that it is a low-error algorithm.

“The fact is, the math is not biased; it’s not racist. If some companies have lower-end algorithms and they haven’t put the R&D into it and they do have higher error rates with certain ethnic groups, that may just be an issue of a poor algorithm.”
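The kind of per-group error analysis Hutchinson describes can be illustrated with a minimal sketch. The data and function names below are hypothetical, not NEC America’s actual evaluation pipeline; the idea is simply that, given match decisions labeled by demographic group, you can tabulate each group’s false match rate (FMR) and false non-match rate (FNMR) and look for gaps between groups.

```python
from collections import defaultdict

def per_group_error_rates(records):
    """Compute false match rate (FMR) and false non-match rate (FNMR)
    per demographic group.

    Each record is (group, is_genuine_pair, predicted_match):
      - is_genuine_pair: True if the two photos show the same person
      - predicted_match: True if the algorithm declared a match
    """
    counts = defaultdict(lambda: {"impostor": 0, "false_match": 0,
                                  "genuine": 0, "false_non_match": 0})
    for group, is_genuine, predicted in records:
        c = counts[group]
        if is_genuine:
            c["genuine"] += 1
            if not predicted:          # same person, but no match declared
                c["false_non_match"] += 1
        else:
            c["impostor"] += 1
            if predicted:              # different people, but match declared
                c["false_match"] += 1
    return {
        group: {
            "FMR": c["false_match"] / c["impostor"] if c["impostor"] else 0.0,
            "FNMR": c["false_non_match"] / c["genuine"] if c["genuine"] else 0.0,
        }
        for group, c in counts.items()
    }

# Hypothetical evaluation results for two groups.
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True),
    ("group_b", False, True), ("group_b", False, False),
]
rates = per_group_error_rates(records)
# group_a has FMR and FNMR of 0.0; group_b has FMR and FNMR of 0.5,
# the kind of disparity an audit like Hutchinson describes would flag.
```

A real audit would use thousands of image pairs per group and a score threshold rather than binary decisions, but the accounting is the same: errors are only meaningful when broken out per group, not averaged across the whole test set.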

Can facial recognition algorithms ever become unbiased when humans are programming them? Should law enforcement get dibs on the latest facial recognition technology? Who should be in charge of regulating facial recognition technology? And what are the key aspects of the technology that will be regulated?

These are all difficult questions to answer, but it’s imperative we try to do so now.
