Can We Stop AI-Generated Deepfakes from Rewriting History? — Part 2

November 16, 2020 - 7 minutes read

Welcome to the second and final chapter in our series that examines one of the bleakest possibilities for the future of artificial intelligence (AI) development: Could deepfakes rewrite human history?

In our first post, we explored how synthetic media could cause us to spiral into a digital dark age if left unchecked. Hypothetical scenarios included deepfakes depicting famous astronauts Neil Armstrong and Buzz Aldrin discrediting their own moon landing, and renowned astrophysicist Neil deGrasse Tyson declaring that Earth is flat. In case you missed this article, you can check it out here.

In this entry, we’ll delve into some of the measures that society can take to safeguard its past. Some solutions are pragmatic, while others may fall more on the preposterous side — but with such high stakes on the line, all options must be considered.

1. Train AI Algorithms to Detect Deepfakes

Who says fighting fire with fire doesn’t work? In the near future, we may employ AI and machine learning at scale to spot deepfakes across the internet. It’s already viable to detect imperfect deepfakes through heuristic analysis of telltale artifacts, and technology is only improving these efforts.

Microsoft recently showcased a new way to spot synthetic media “hiccups,” and the Defense Advanced Research Projects Agency (DARPA) is developing SemaFor. Short for “Semantic Forensics,” this program strives to recognize semantic deficiencies in artificial media, such as an image of a person with anatomically incorrect features, or someone wearing apparel or accessories that are culturally out of place.

While this measure holds immense promise, it forces the tech industry into a long-winded cat-and-mouse game to stay one step ahead of deepfake creators, and that may not always be possible. After all, deepfakes are generated by AI that is always learning and improving; in this case, it would be focused on devising new ways to beat conventional detection technology.
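To make the idea of heuristic artifact analysis concrete, here is a minimal toy sketch in Python. Real detectors use trained neural networks over many signals; this hypothetical version measures only one telltale cue, the implausibly smooth texture that early deepfakes often exhibited, and the threshold value is an illustrative assumption.

```python
# Toy heuristic detector: flag grayscale frames (lists of pixel rows, 0-255)
# whose local texture is unnaturally smooth, one artifact among the many
# that real deepfake detectors learn to recognize.

def smoothness_score(frame):
    """Average absolute difference between horizontally adjacent pixels.
    Unnaturally low values can indicate over-smoothed, generated regions."""
    diffs = []
    for row in frame:
        for a, b in zip(row, row[1:]):
            diffs.append(abs(a - b))
    return sum(diffs) / len(diffs)

def looks_suspicious(frame, threshold=2.0):
    """Flag frames whose texture is implausibly smooth (threshold is illustrative)."""
    return smoothness_score(frame) < threshold

# Example: a flat, over-smoothed patch vs. a noisier, natural-looking one.
smooth_patch = [[128, 128, 129, 128]] * 4
noisy_patch = [[120, 140, 110, 150]] * 4
```

The cat-and-mouse dynamic described above is visible even here: a generator that simply adds a little noise to its output would defeat this check, which is why any single hand-crafted heuristic eventually fails.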

2. Improve Internet Moderation and Historical Archive Management

The true impact of deepfakes on our society will depend on how they are published and propagated. For example, social media platforms could decide to kick suspicious content from untrusted sources to the curb. But that’s easier said than done. What counts as suspicious? What’s trusted? And what community guidelines could be imposed on platforms that span thousands of different cultures?

San Francisco Bay Area-based Facebook has already attempted to implement a ban on deepfakes, but this could become difficult to enforce as synthetic media grows ever more hyper-realistic. The future of deepfake moderation will be a delicate balancing act for social media companies between maintaining security and not infringing on the activity of their users.

With all that said, it’s vital that we dedicate more resources to trustworthy historical archives. These will be the main tools that historians use to verify information, so it’s paramount that we take initiative on this. Financial support for reliable, distributed repositories such as the Internet Archive must be increased.

3. Authenticate Legitimate Content

The Content Authenticity Initiative (CAI) is a joint effort from Twitter, the BBC, The New York Times, Adobe, and many other organizations to counter deepfakes. It recently proposed a cryptographically secured system for attaching metadata tags to digital media so that creatorship and provenance can be easily verified.

It would work like this: Do you know which source created the content? Do you trust that source? Then you’re more likely to trust that the content is legitimate. The tags would also show you whether the content has been modified in any way.
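The tag-and-verify idea can be sketched in a few lines of Python. To be clear, this is not the CAI’s actual design, which uses public-key signatures and chained edit records; this simplified sketch uses a single shared key and HMAC, and the key and field names are illustrative assumptions.

```python
# Sketch of provenance tagging: sign a hash of the content plus its
# metadata, then verify both the signature and the content hash later.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical; real systems use key pairs

def tag_content(content: bytes, creator: str) -> dict:
    """Produce a metadata tag binding the content to its creator."""
    metadata = {"creator": creator, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_content(content: bytes, metadata: dict) -> bool:
    """Check that the content is unmodified and the tag is authentic."""
    claimed = {k: v for k, v in metadata.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False  # content was modified after tagging
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata["signature"])

tag = tag_content(b"original photo bytes", creator="BBC")
```

Any change to either the content or the metadata invalidates the tag, which is exactly the property that lets a viewer detect post-publication modification.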

While promising, this methodology is not without weaknesses. CAI’s approach focuses on content protection and copyright, but as AI tools play an increasingly important role in media creation, individual attribution of new content may become less important. There’s also a risk in embedding personal information into every single file. Currently, doing so is optional; while that helps preserve a person’s private data, it also limits the potential of this concept.

4. Restrict Deepfake Tool Access

As deepfake creation tools proliferate and we see what they’re capable of, it’s not out of the question that politicians will call to make them illegal. In fact, it’s probable. But this could be problematic for society.

Negatives aside, AI-powered tools will undoubtedly empower humanity’s creative potential, and suppressing that should not be done without careful consideration. Doing so would be akin to outlawing printing presses because the content they generate doesn’t align with your own historical beliefs.

It’s also worth mentioning that even if synthetic media creation tools were outlawed, they would still be leveraged by rogue hands. Legal remedies may therefore only serve to hamper creative professionals while pushing deepfake tools further into illicit channels.

Is Our Future Fraught With False Information?

We hope you’ve enjoyed this brief overview of potential solutions to counter the proliferation of deepfakes. No matter what measure we employ, it’s imperative that we start preparing for the flood of synthetic media that’s sure to come over the next few years.

This list of possible remedies really only scratches the surface. Other options include using blockchain to validate content and even constructing a cryptographic “ark” that preserves the most essential information in case the digital dark ages become a reality. It’s likely that we won’t rely on just one methodology; we’ll probably employ several simultaneously. How they truly affect society remains to be seen. But hopefully, we find a way to stop disinformation from destroying our understanding of the past.
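The blockchain idea mentioned above boils down to one property: each archived record commits to the hash of the record before it, so tampering with any item breaks the chain from that point forward. Here is a minimal sketch of that mechanism, with purely illustrative record contents and field names (a real system would add consensus, replication, and signatures).

```python
# Minimal hash-chain sketch: each record stores the previous record's hash,
# so rewriting any archived item invalidates every record after it.
import hashlib

GENESIS_HASH = "0" * 64  # placeholder hash for the first record

def add_record(chain, content: str):
    """Append a record whose hash covers both its content and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    record_hash = hashlib.sha256((prev_hash + content).encode()).hexdigest()
    chain.append({"content": content, "prev": prev_hash, "hash": record_hash})

def chain_is_valid(chain) -> bool:
    """Recompute every hash and confirm the links are intact."""
    for i, rec in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else GENESIS_HASH
        expected = hashlib.sha256((prev_hash + rec["content"]).encode()).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != expected:
            return False
    return True

archive = []
add_record(archive, "moon landing footage, 1969")
add_record(archive, "astrophysics lecture, 2017")
```

Editing any earlier record after the fact causes `chain_is_valid` to return False, which is what makes such a structure attractive for tamper-evident historical archives.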

How do you think we can stop the spread of deepfakes efficiently and effectively? Do you even think that’s possible? As always, let us know your thoughts in the comments below!
