Deepfakes: A New Frontier or Threat?

October 13, 2021 - 7 minutes read

There’s no denying it: deepfake technology has arrived, and it is a viable and troubling cyber security threat. In recent years, this technology has become prominent on a worldwide scale, yet the damage caused by deepfakes can target a single individual or influence thousands. Deepfake technology has wormed its way into business, government, and the entertainment industry. High-tech cybercriminals and nefarious actors are employing it to sow misinformation and deception, and in some cases to commit outright extortion and slander.

So, what does this mean for organizations in tech?

We believe there is tremendous value in staying ahead of this ever-evolving technology. How exactly can this be done? By innovating in the deepfakes space, and in technology that can effectively identify deepfakes, companies can provide a tremendous amount of value to individuals and organizations alike. As a California-based app development firm, we too have seen a substantial upswing in the volume of requests around this growing cyber security niche.

It’s essential to understand what a deepfake is, how and where it is used, and the technology behind it. In the simplest of explanations, deepfake technology is something like Photoshop on steroids.

What Are Deepfakes?

The term deepfake caught on after a Reddit user named ‘deepfakes’ used Google’s open-source, machine-learning technology to post adult videos featuring face-swapped celebrities with alarming realism.

In reality, deepfake is a portmanteau of ‘deep learning’ and ‘fake.’ Deepfakes use machine learning and artificial intelligence to create fake audio or video versions of people, real or not. The technology has advanced to the point where only a few seconds of audio are needed for machine learning and AI to replicate someone’s voice authentically. Video and image manipulation technology is just as advanced, producing photo-realistic and convincing fakes.
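
To make the “few seconds of audio” point concrete, here is a minimal Python sketch using the open-source Resemblyzer library. It performs only the first, analysis step of voice cloning, distilling a short clip into a numerical “voiceprint” that a speech synthesizer can then be conditioned on; the file name sample.wav is a hypothetical placeholder.

# Minimal sketch of the analysis step behind voice cloning: compress a few
# seconds of speech into a speaker embedding ("voiceprint").
# Requires `pip install resemblyzer`; "sample.wav" is a hypothetical file.
from resemblyzer import VoiceEncoder, preprocess_wav

wav = preprocess_wav("sample.wav")         # load and normalize the recording
encoder = VoiceEncoder()                   # pretrained speaker-encoder network
voiceprint = encoder.embed_utterance(wav)  # fixed-length embedding of the voice

print(voiceprint.shape)  # (256,) -- the signature a synthesizer would mimic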

Voice impersonations are nothing new; people have been doing them for generations. Similarly, actors transform into characters, or caricatures, before our eyes in movies and television. The main difference is that when we watch or hear such a performance, we know it is not real. That person may sound like Elvis Presley (and may even resemble him if you’re in Las Vegas), but we know it is an impersonation. Deepfake technology takes these concepts and blurs the line, making us believe that what we see and hear is authentic when it might not be.

Sticking with the Photoshop analogy, the aptly titled site thispersondoesnotexist.com generates a non-existent person with every browser refresh using Generative Adversarial Network (GAN) technology. The machine learning and AI behind the application were trained on thousands of photos of real faces, learning the patterns of human features well enough to generate a convincing fake headshot on demand. The technology does this so well that it is nearly impossible to tell that these images depict people who do not exist.
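
For the curious, the sketch below shows what “adversarial” means in practice. It is a toy PyTorch training loop with made-up dimensions and random stand-in data rather than real photos; production face generators are vastly larger, but the same two-network tug-of-war is at their core.

# Toy sketch of the GAN idea: a generator learns to turn random noise into
# images while a discriminator learns to tell those fakes from real photos.
# All sizes and the "real" data below are placeholders for illustration.
import torch
import torch.nn as nn

LATENT_DIM = 64        # random seed vector the generator expands into an image
IMG_PIXELS = 28 * 28   # tiny stand-in for a full-resolution face photo

generator = nn.Sequential(            # noise -> fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)
discriminator = nn.Sequential(        # image -> probability it is real
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
real_images = torch.rand(32, IMG_PIXELS) * 2 - 1  # placeholder "real" photos

for step in range(200):
    # 1) Teach the discriminator to separate real photos from current fakes.
    fakes = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fakes), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Teach the generator to produce fakes the discriminator calls real.
    fakes = generator(torch.randn(32, LATENT_DIM))
    g_loss = loss_fn(discriminator(fakes), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()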

Societal Implications of Deepfakes

Beyond leaving us unable to determine whether the celebrity in a YouTube or TikTok video is real, or whether an audio clip is genuinely an authentic voice, deepfake technology has the potential to become a severe threat to audio and video authentication. This possibility of compromised security access applies to both individual tech users and large corporations. Personally identifiable information and data are commonly secured behind voice or facial recognition software on the devices we carry and use daily. The potential for deepfake technology to defeat these security features, possibly on a large scale, is real.

Deepfake technology has been employed with considerable success in information manipulation and propaganda, particularly in the realm of politics. Some of this (mis)information is so convincing that we collectively can no longer take what we see and hear across the many forms of media at face value. Perhaps the most notorious example of this power to deceive is a deepfake video of former President Obama. The seemingly ordinary clip took audio of a younger Obama and made it appear to be spoken by an older version of the former President. Celebrity or average citizen, deepfake technology can impact us all.

A few laws recently passed at the state level combat deepfake technology and provide some measure of protection and recourse for people impacted by such acts. At the federal level, the National Defense Authorization Act of 2020 contains language designed to fight deepfake technology. Additionally, some cyber security companies, major tech companies like Google, and educational institutions are developing applications and algorithms that use the same underlying technology, machine learning, to identify deepfake audio and video.
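
How might such a detector work? The sketch below is a generic illustration, not Google’s or any vendor’s actual system: it fine-tunes an off-the-shelf image classifier to label face crops as real or fake. The random tensors stand in for frames that would, in practice, be extracted from labeled real and manipulated videos.

# Generic sketch of a deepfake detector: fine-tune a standard image
# classifier (ResNet-18 here) on face crops labeled real vs. fake.
# The random tensors below are placeholders for actual video frames.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)    # two classes: real / fake

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 224, 224)             # stand-in face crops
labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = real, 1 = fake

model.train()
loss = loss_fn(model(frames), labels)            # one training step
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss after one batch: {loss.item():.3f}")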

Deepfake technology is here to stay, and it will only continue to grow in prominence and threat level. Audio and video fakes will become harder to spot as they grow more realistic, lifelike, and convincing. Deepfake technology isn’t yet a mainstream cyber security threat, but the potential is there. For now, perhaps the most prudent course of action is to verify that what you see and hear on the internet is indeed what your eyes and ears tell you.

Moving Forward

Dogtown Media is a mobile media development company headquartered in Venice Beach, California with a presence in San Francisco, New York City, and London. Since 2011, our team of hardened techies has launched over 200 apps, and counting. Our team is experienced in creating solutions for a wide range of platforms, including iOS, Android, HTML5 Web Apps and more. We offer specialized packages to satisfy a range of needs and budgets, from rapid prototyping for startups to full-featured interfaces for international enterprises.

If you’re interested in learning how Dogtown Media can help introduce your next app to the market, contact us. We’d love to help!
