Facebook’s recent embroilment with Cambridge Analytica has brought to light the importance of data privacy and transparency from tech giants like Google, Twitter, and Apple. No company is safe from consumer scrutiny, and that pressure has been pushing tech companies to make changes to their operations, data storage, privacy policies, and other practices surrounding user data.
One unique result of this controversy is Facebook’s utilization of facial recognition to delete fake accounts.
It’s Facebook, not Fakebook
The company plans to use the technology to catch users who are impersonating other profiles with stolen photos, but the facial recognition is limited in some substantial ways: it doesn’t scan new photos against all of the photos uploaded to the site, even though that would be the most thorough way to accomplish this tedious task.
The San Francisco-based company says it deleted 583 million fake accounts in Q1 of 2018. In Q4 of 2017, it reported deleting 694 million accounts.
A Safe Space for Everyone
Facebook is also tackling spam, hate speech, and other reported content. To help create more transparency for its users, the company also recently published its first Community Standards Enforcement Report. The post detailed the actions Facebook took on reported content in the areas of nudity, sexual activity, graphic violence, terrorist propaganda, and more.
In Q1 of 2018, Facebook deleted 21 million posts containing adult nudity and sexual activity; 96% of those posts were flagged automatically by the company’s computer vision tools. The tools also caught 86% of 3.5 million posts containing graphic violence, but only 38% of 2.5 million posts containing hate speech.
Some Tough Problems to Tackle
Despite these initiatives, experts still worry that Facebook over-relies on user-reported posts. CEO Mark Zuckerberg has defended his company in the past, saying it’s difficult to develop artificial intelligence (AI) that can recognize hate speech, whereas detecting a graphic image is easier. In fact, Guy Rosen, Facebook’s VP of product management, says AI is still years away from being an effective and reliable detector of hate speech and other bad content.
Fighting fake news is another big priority for the company; fake news on the platform was found to be a big factor in helping Trump win the 2016 presidential election. Hate speech on Facebook has also been found to promote violence against Muslims in Myanmar.
The company plans to continue releasing updated numbers on reported content and fake account deletion every six months. Whether this is an effort to appease users or to get ahead of the next possible scandal, Facebook is undeniably trying to be more transparent after its shortcomings were recently revealed. In this case, does the end justify the means? What do you think?