We recently wrote about Amazon’s Rekognition being sold to local law enforcement. This isn’t a one-off case; security teams around the world will implement some type of facial recognition within the next five to ten years. But the technology is still far from perfect.
U.S. Customs and Border Protection (CBP) is one of the most recent organizations to begin employing a face recognition system. They’re using the technology at the U.S.-Mexico border to check the identities of drivers and passengers.
The All-Seeing AI
The U.S. CBP plans to roll out the new system at the Anzalduas border crossing in Texas this August. Dubbed the “Vehicle Face System” (VFS), the technology will use a combination of several cameras and software powered by artificial intelligence (AI) to check the identities of passersby. After taking high-quality DSLR photos of everyone in the car, the software will compare them against a federal database of facial images compiled from passports, visas, and other travel documents.
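CBP hasn’t published how VFS matches faces, but systems like this typically convert each face photo into a numeric “embedding” and compare it against database embeddings with a similarity score and a threshold. Here’s a minimal, hypothetical sketch of that matching step — the function names, threshold value, and toy vectors are our own illustration, not anything from CBP:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings: 1.0 = identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, database, threshold=0.6):
    """Compare a probe embedding against a database of known embeddings.

    Returns (best_id, score) if the best match clears the threshold,
    otherwise (None, score). The threshold is an illustrative value;
    real systems tune it to trade off false matches vs. missed matches.
    """
    best_id, best_score = None, -1.0
    for face_id, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = face_id, score
    if best_score >= threshold:
        return best_id, best_score
    return None, best_score
```

The threshold is the crux: set it low and the system flags innocent travelers as matches; set it high and it misses the people it’s supposed to find.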
Many critics view the entire concept as invasive to the thousands of innocent travelers who pass through the border every week. It may save some time in checking and verifying travel documents, but it can also be seen as a targeted form of surveillance.
Malkia Cyril is the executive director of the Center for Media Justice, located in Oakland, California, right outside of San Francisco. She emphasizes the inequality this will bring: “This is an example of the growing trend of authoritarian use of technology to track and stalk immigrant communities. It’s absolutely a violation of our democratic rights, and we are definitely going to fight back.”
Is an Objective Perspective Possible?
As we continue developing AI, it’s important to recognize two things. The first is that we dictate where AI is implemented and where it provides value. The second is that the technology is still far from perfect.
CBP tested the software in 2016, capturing over 1,400 images, but never publicly released the results of the pilot. Research has shown that facial recognition, even when powered by sophisticated AI, can be highly inaccurate. One study, conducted in Wales, even found a 90% misidentification rate. In the U.S. in particular, facial recognition has been found to be routinely less accurate on non-white faces than on white ones.
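Numbers like the Wales figure are less surprising than they sound once you account for base rates: when the people a system is looking for are a tiny fraction of everyone scanned, even a small per-scan error rate means most alerts are false matches. The sketch below illustrates that arithmetic with made-up rates of our own choosing — they are not the actual parameters of the Welsh study or of VFS:

```python
def false_match_share(false_positive_rate, true_match_rate, sensitivity=1.0):
    """Fraction of all alerts that are false matches (the base-rate effect).

    false_positive_rate: chance an innocent person triggers an alert
    true_match_rate: fraction of scanned people actually on the watchlist
    sensitivity: chance a watchlisted person is correctly flagged
    """
    false_alerts = false_positive_rate * (1 - true_match_rate)
    true_alerts = sensitivity * true_match_rate
    return false_alerts / (false_alerts + true_alerts)

# Illustrative: a 1% false-positive rate, with 1 in 1,000 people
# actually watchlisted, already makes ~91% of alerts false matches.
share = false_match_share(0.01, 0.001)
```

In other words, a system can be "99% accurate" per scan and still be wrong nine times out of ten when it raises an alert.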
CBP didn’t comment on how accurate VFS is. But the federal agency did say it’s working on an internal “privacy impact assessment.” What good that will bring, we may never know, as the agency is not in the habit of releasing results for public analysis. Do you think such results should always be made public? Is it possible to make objective, unbiased AI? Let us know your thoughts in the comments!