How Researchers Are Preparing for the AI-Fueled Deepfake Deluge

November 30, 2020 - 7 minutes read

Artificial intelligence (AI) is getting better; after all, it continuously trains itself to meet and exceed expectations. But could it be getting too good? AI applications are now being used to detect AI-generated videos called "deepfakes" and to warn officials and journalists about them. These videos show a prominent political leader or celebrity saying and doing things they never actually said or did. To most of our eyes, the video looks real, and it can cause outrage, protests, and violence.

But researchers are developing deepfake detection tools that journalists can use to verify a video's trustworthiness before publishing an article about it. Sure, a fake video would make headlines and generate quite a lot of money for online publications, but the publisher could also be sued for libel.

What Are Deepfakes?

Deepfakes are made using highly sophisticated AI and deep learning applications. Deep learning is a subset of machine learning, and it excels at producing results in a very narrow niche. It does one small thing really well, so when it's given the task of swapping one face and its expressions for another, deep learning returns deepfake videos that look incredibly real. With just a few thousand dollars' worth of computer equipment and some machine learning and deep learning training, an amateur can create a realistic deepfake video. Compare that to Hollywood, which spends hundreds of millions of dollars on computer-generated graphics and video.


Sometimes deepfakes are used for fun, like putting Arnold Schwarzenegger's face on Bill Hader's body, but they are truly dangerous when used for nefarious purposes. Deepfakes have been used to create pornography with someone else's face and videos of President Trump falsely saying outrageous things, and one day, when they're indistinguishable from real video, they could be used to start a war. Deepfakes create victims (so far, usually women), ruin lives irreparably, and have the potential to cost a massive number of lives.

What's worse, deepfakes sow seeds of doubt in every viewer about the trustworthiness of all the videos they watch, raising suspicion even where it's unwarranted. This erosion of trust can create generations' worth of distrust in other people and in government.

How to Find the Fakes?

When deepfakes first hit the mainstream, researchers joined the battle to spot the fakes. Deepfake detection began over three years ago. Back then, it was easier to spot a fake video because the subject wouldn't blink, or their skin tone would shift unnaturally. But these days, both people and detection software have immense trouble distinguishing real from fake.

Deepfake detection research falls into two major categories. The first approach centers on the behavior of the people in the video. Researchers can use AI to study a person's mannerisms, hand gestures, and other patterns in speech and facial expression. Using this information, the AI can carefully watch videos and say with a certain degree of confidence whether the video is real or fake. This method works well even when the video quality is nearly perfect.
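To make the behavioral idea concrete, here is a minimal sketch in Python. The feature names and profile numbers are hypothetical; a real system would learn a far richer behavioral model from hours of authentic footage, but the core comparison — how far observed behavior drifts from a known profile — looks something like this:

```python
import math

def behavioral_confidence(profile, observed):
    """Compare observed behavioral features (e.g. blink rate,
    gesture frequency) against a known profile of the person.
    Returns a score in (0, 1]; higher means more likely genuine."""
    # Accumulate squared z-score distance from the person's profile
    dist = 0.0
    for name, (mean, std) in profile.items():
        z = (observed[name] - mean) / std
        dist += z * z
    # Map distance to a confidence score that decays toward 0
    return math.exp(-dist / (2 * len(profile)))

# Hypothetical behavioral profile learned from authentic footage:
# feature -> (mean, standard deviation)
profile = {
    "blinks_per_min": (17.0, 4.0),
    "head_turns_per_min": (6.0, 2.0),
}

genuine = {"blinks_per_min": 16.0, "head_turns_per_min": 7.0}
suspect = {"blinks_per_min": 2.0, "head_turns_per_min": 14.0}

print(behavioral_confidence(profile, genuine))  # close to 1: plausible
print(behavioral_confidence(profile, suspect))  # close to 0: suspicious
```

Note that this approach requires a per-person profile, which is why it works best for world leaders and celebrities with plenty of verified footage.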


The second approach focuses on telltale differences that deepfakes characteristically have compared to real videos. Because deepfakes are built by merging individually generated frames into a video, researchers can examine individual frames and track faces across the video. Any inconsistencies they find are evidence that the video might be a deepfake. This method works for a video of anyone, not just a famous person or world leader.
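The frame-tracking idea can be sketched in a few lines of Python. The descriptors below are toy two-number feature vectors standing in for real per-frame face embeddings, and the threshold is an assumption; the point is just the mechanism of flagging frames whose face representation jumps abruptly from the previous one:

```python
def frame_inconsistency(face_descriptors, threshold=0.3):
    """Flag frames whose face descriptor jumps sharply from the
    previous frame -- a crude proxy for frame-blending artifacts.
    face_descriptors: list of equal-length feature vectors, one per frame."""
    flagged = []
    for i in range(1, len(face_descriptors)):
        prev, cur = face_descriptors[i - 1], face_descriptors[i]
        # Mean absolute change between consecutive frames
        delta = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if delta > threshold:
            flagged.append(i)
    return flagged

# Toy descriptors: smooth drift, then an abrupt jump at frame 3
frames = [
    [0.50, 0.40], [0.52, 0.41], [0.53, 0.43],
    [0.90, 0.05],  # inconsistent frame
    [0.54, 0.44],
]
print(frame_inconsistency(frames))  # -> [3, 4]: the jump in and out
```

Because this check needs no prior knowledge of the person in the video, it generalizes to footage of anyone, which is its main advantage over the behavioral approach.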

Ultimately, we may need a combination of both methods to reliably suss out deepfakes. Until then, even the best models struggle to estimate the probability that an online video is a deepfake. It's imperative that we make these approaches and tools more useful and robust.

Who Gets Access to Deepfake Detectors?

In an ideal world, deepfake detection tools would be available for everyone to use. But the technology is still new and needs more improvement before it is secure, accurate, and robust enough to release widely. Right now, only some journalists have access to deepfake detectors.

It could take a few more years before journalists all over the world get access. But there is hope that content platforms like Facebook will build detection into their content-checking algorithms so that users can know whether a video in their feed is likely to be a deepfake. Although that wouldn't stop every deepfake from being seen as real, it's a step in the right direction in the war against disinformation.


On the other hand, anyone can set out to make a deepfake using tools that are publicly shared on the Internet. For some researchers, working closely with journalists has been a good strategy, because journalists are the ones most likely to accidentally spread misinformation at scale. Journalists still need to check with their sources, find corroboration, and do their due diligence before publishing any potentially libelous information.

Who is Fighting the Good Fight?

The good news is that teams at large companies like Facebook and Redmond-headquartered Microsoft are investing in technology to detect and understand deepfakes. These enterprises can pour almost unlimited money into improving detection software and deepfake research, and the hope is that they'll share these tools for all to use.

One thing is certain: deepfakes aren't going anywhere anytime soon. It will be an uphill battle to manage public perception and misinformation as deepfakes keep improving, but we still have time to figure it out before the problem becomes intractable.

Have you seen any deepfake videos? Did you know the video was fake — why or why not? Let us know in the comments below!
