Can the U.S. Military Stop Deepfakes and Other AI Tricks?

May 31, 2018 - 8 minutes read

If you’ve seen the deepfake video of Barack Obama that went viral earlier this year, it was probably quite a shock to hear some of the statements coming out of his mouth. Let’s face it, AI development has reached a point where it can get downright scary.

Consider this technology in our current political climate. It wasn’t so long ago that President Donald Trump and North Korea’s Supreme Leader Kim Jong-un were trading nuclear threats over social media. Fast forward a few weeks, and the two leaders were approaching peaceful disarmament talks.

Fickle politics and advanced AI can be a disastrous combination. The Department of Defense (DoD) is trying to get the latter under control in order to stymie this threat. But is it already too late?

Fake It ‘Til You Make It

Imagine if a fake video of a world leader like Trump or Kim declaring war went viral. Not such a fun thought, right? The Defense Advanced Research Projects Agency (DARPA) doesn’t think so, either. DARPA is the DoD’s research arm, and it’s growing deeply concerned about the capabilities AI is giving us. Instead of helping to end fake news, AI may be making it much more viable.


That’s why DARPA is funding a contest to help determine how far along deepfakes and other AI tricks have come. Specifically, the agency wants to know if it will soon be impossible to distinguish fake content from real material, even with the help of AI. Unfortunately, many of DARPA’s own technologists think this possibility is not only inevitable but close to reality as well.

The AI fakery contest will take place this summer. From New York City to San Diego, the foremost experts in digital forensics will gather for a friendly competition to see who can generate the most convincing AI content in terms of video, audio, and imagery. On the flip side, they’ll also compete to create the best tool for automatically catching counterfeits.

A New, Troubling Technique

Many of DARPA’s technologists are worried about a new AI technique’s potential to make fake content impossible to catch automatically. Using a class of AI algorithms known as generative adversarial networks (GANs), this technique can produce strikingly realistic false imagery.

It is due to these powerful capabilities that the technique has rapidly grown in popularity among the global machine learning community. Counterfeit celebrity images or videos, changing night to day, and turning a frown upside down all become much easier with GANs.


A GAN consists of two components: the actor and the critic (more commonly called the generator and the discriminator). It’s the actor’s job to take a data set, like a series of videos or images, and learn the statistical patterns within it. It then generates fake data that mimics those patterns. The critic’s job is to tell the real examples apart from the fakes the actor creates. This feedback loop pushes the actor to ultimately produce realistic but synthetic examples.
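The actor/critic loop can be sketched in miniature. The following is an illustrative toy, not anything from DARPA's contest: the "actor" is a linear map of noise, the "critic" is logistic regression on scalars, and the data distribution, hyperparameters, and function names are all assumptions chosen to keep the example small.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_toy_gan(steps=2000, batch=64, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0   # actor params: fake sample = a*z + b
    w, c = 0.1, 0.0   # critic params: p(real) = sigmoid(w*x + c)
    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)   # "real" data: N(4, 1)
        z = rng.normal(0.0, 1.0, batch)      # noise fed to the actor
        fake = a * z + b
        d_r, d_f = sigmoid(w * real + c), sigmoid(w * fake + c)
        # Critic step: push p(real) up and p(fake) down.
        w += lr * np.mean((1 - d_r) * real - d_f * fake)
        c += lr * np.mean((1 - d_r) - d_f)
        # Actor step: move fakes toward where the critic says "real".
        d_f = sigmoid(w * (a * z + b) + c)
        a += lr * np.mean((1 - d_f) * w * z)
        b += lr * np.mean((1 - d_f) * w)
    return a, b, w, c

a, b, w, c = train_toy_gan()
samples = a * np.random.default_rng(1).normal(size=500) + b
print(round(samples.mean(), 2))  # drifts toward the real mean of 4
```

Real deepfake GANs replace these two linear models with deep convolutional networks, but the adversarial feedback loop is the same: each side's gradient update is driven by the other's current weaknesses.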

The fact that GANs were built to outsmart AI from the get-go is good reason to fear them. But in most cases, it’s our fear of the unknown that’s really driving this contest. “Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” explains David Gunning. He’s the DARPA program manager helming this contest. “We don’t know if there’s a limit. It’s unclear.”

Deep Learning’s Duality

Deep learning (DL) is a subset of machine learning that makes remarkably accurate face recognition possible. Ironically, it also makes it easier to maliciously manipulate images and videos to create deepfakes. By feeding mountains of data into a “deep” neural network, DL can automatically and seamlessly integrate new variables, such as a face or voice tone, into existing content, like a photo or speech.


“These technologies can be used in wonderful ways for entertainment, and also lots of very terrifying ways,” says Aviv Ovadya, Chief Technologist at the University of Michigan’s Center for Social Media Responsibility. “You already have modified images being used to cause real violence across the developing world. That’s a real and present danger.” Ovadya is among a plethora of experts concerned that ongoing AI developments will be used in a nefarious manner.

Current standards for detecting digital forgery generally follow three main steps. First, the digital file in question is checked for any signs of splicing in the images or videos involved. Second, physical properties such as lighting are examined for any unnatural qualities.

Third, the file is inspected for logical discrepancies, like incorrect weather for the recorded date or a background that doesn’t match the stated location. This third step is usually the most difficult to automate; it’s also the hardest for a deepfake to pass.
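The three-step screening described above could be structured as a simple pipeline of checks. The sketch below is purely illustrative: the media representation and threshold heuristics are hypothetical placeholders, since real forensic tools analyze actual pixels, compression traces, and metadata.

```python
def check_splicing(media):
    # Step 1: splice evidence, e.g. mismatched compression levels across
    # regions (modeled here as a precomputed per-region quality list).
    qualities = media["region_compression"]
    return max(qualities) - min(qualities) < 10  # passes if uniform

def check_physics(media):
    # Step 2: physical plausibility, e.g. all faces lit from roughly
    # the same direction.
    angles = media["light_angles_deg"]
    return max(angles) - min(angles) < 25  # passes if lighting agrees

def check_logic(media):
    # Step 3: logical consistency, e.g. the weather shown vs. the
    # historical record for the recorded date (stubbed as equality).
    return media["weather_shown"] == media["weather_on_date"]

def screen(media):
    # A file is cleared only if it passes every check.
    return all(check(media) for check in (check_splicing,
                                          check_physics,
                                          check_logic))

suspect = {
    "region_compression": [90, 90, 62],   # one re-saved region: splice hint
    "light_angles_deg": [30, 31, 33],
    "weather_shown": "sunny",
    "weather_on_date": "sunny",
}
print(screen(suspect))  # False: the splicing check flags it
```

The all-checks-must-pass design mirrors how forensic screening works in practice: a single strong inconsistency is enough to flag a file, even when the other tests look clean.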

An AI Arms Race

But the introduction of GANs changes everything. Detecting digital forgery was difficult enough as it was. Now, it may be impossible. The AI that detects false content and the AI that makes deepfakes are locked in a strange race; both sides depend on and benefit from the same general technological advances. What will determine the winner is who innovates faster with the technology at hand.

“We are definitely in an arms race,” says Walter Scheirer, a digital forensics expert from the University of Notre Dame. Scheirer has been involved with DARPA’s AI initiatives for a couple of years now and is taken aback by the rapid progress AI has made.


Currently, it seems that detection has some catching up to do with fake-content creation. Popular deepfake-making tools are now easy enough that someone with modest technical knowledge can produce high-quality results. To make matters worse, Google just announced Duplex, an AI tool that can make realistic-sounding phone calls on your behalf. Google’s audio tool even goes as far as adding “umm”s and “uhh”s to the robot’s speech.

If the tech to catch misinformation never catches up, many experts posit that we’ll have to rely on the law to police content. However, current legal efforts have shown that this is a gray area that’s difficult to tackle. How is fake news defined? Would simple editing like cropping an image count as falsification? It’s crucial that we draw these lines now, because advances in AI are blurring them more every day.
