This AI Rates Attractiveness… For Science?

December 14, 2020 - 9 minutes read

A new artificial intelligence (AI) application from a team of researchers in the European Union (EU), called “How Normal Am I?”, is shaking up a lot of people’s self-confidence today. Sure, not everyone is as photogenic as a professional model, but we’re all beautiful in our own unique way, right? This AI attractiveness test claims to tell you how attractive you are, what age you look like, your BMI, your gender, and even your life expectancy.

Because the EU has some of the strictest privacy laws in the world, the web app promises not to use any cookies or collect any personal data, so the tool itself isn’t out to do anything malicious. Still, the results might ruin your day. Facial recognition is riddled with biases, errors, and ethical concerns, so maybe you shouldn’t take these beauty test results too seriously, at least until the AI gets a lot better at its job. Read on to learn more about this technology and why you shouldn’t worry about a low score on its attractiveness scale.

How Attractive Am I? AI Is Measuring and Using Attractiveness Levels

The team behind “How Normal Am I?” is SHERPA, an EU-funded project that explores how AI impacts human rights and ethics. When you go to the site, you are greeted by Tijmen Schep, SHERPA’s artist-in-residence, who walks you through every step of the attractiveness assessment and facial analysis, injecting some fun into the interface and keeping you relaxed throughout the process. The estimated life expectancy is calculated from your BMI score and your estimated age, while the beauty score is calculated using state-of-the-art computer vision and deep-learning algorithms.

According to Schep’s interview with San Francisco-based Hacker News, the main algorithm used on the site comes from face-api.js, a JavaScript API that detects and recognizes faces. Algorithms like this are typically trained on thousands of publicly available photos of faces that humans have tagged with an attractiveness level. A neural network locates the face in the picture, crops it, and applies further calculations to evaluate it against the training dataset. But human beauty is subjective, and beauty standards differ between countries and cultures, which makes an algorithm like this very difficult to trust due to bias and subjectivity.
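In rough terms, the pipeline described above (detect a face, crop it, score the crop against a trained model) can be sketched as follows. Everything here is a stand-in: the image is a tiny 2D grayscale array, the detector’s bounding box is hard-coded, and the “scorer” is a toy weighted average rather than the site’s trained network.

```python
# Hedged sketch of the detect-crop-score pipeline. The box coordinates and the
# scoring function are assumptions for illustration, not face-api.js internals.

def crop(image, box):
    """Crop a region (top, left, height, width) out of a 2D grayscale image."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]

def score_attractiveness(face, weight):
    """Toy stand-in for a trained regressor: weighted mean pixel brightness.
    A real system would run the crop through a deep network trained on
    human-labeled photos."""
    total = sum(p * weight for row in face for p in row)
    n = sum(len(row) for row in face)
    return total / n

image = [[0, 10, 10, 0],
         [0, 20, 20, 0],
         [0, 30, 30, 0],
         [0,  0,  0, 0]]
face = crop(image, (0, 1, 3, 2))        # pretend the detector returned this box
print(score_attractiveness(face, 0.1))  # 2.0
```

The subjectivity problem lives entirely in the training labels: swap in a dataset tagged by people with different beauty standards and the same pipeline produces different scores.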

Apparently, origin and skin color can influence the result. Schep says that in his own experience, “[it] doesn’t vary on our attractiveness scale as wildly if you’re a white male. With other ethnicities, the predictability levels can drop, especially if you’re Asian.” Moving closer to the camera can boost your overall attractiveness score, which says more about the imprecision of the calculation than about your face. If you got a low attractiveness score, says Schep, it could be because “the judgment of these algorithms is so dependent on how they were trained,” but if you happened to get a really high score, “that’s just because you are incredibly beautiful.”

Schep says dating apps like Tinder use algorithms similar to “How Normal Am I?” to match people they rate as similarly attractive. Social media platforms like TikTok have used these types of algorithms to promote content created by attractive people.

Analyzing Other Facial Features

The algorithm Schep uses, face-api.js, is built on top of a number of other machine learning tools and APIs that detect facial features and calculate metrics like estimated BMI, estimated age, estimated life expectancy, gender, and expression. Some companies use these tools to figure out whether someone is lying about their age on a dating site, or simply to learn more about users. But, Schep says, you can deceive the algorithm: shaking your head can lower your estimated age by nearly a decade.

Next, the web app estimates your gender, a result that can also vary depending on where and how well the algorithm was developed. After that, it estimates your BMI from your facial proportions, such as the space above your eyes. Schep mentions in his walkthrough that raising your eyebrows can yield a lower BMI, another way to trick the algorithm into a result that’s kinder to your ego. Schep trained the BMI algorithm himself on a mix of photos: half mugshots from American arrest records and half photos of Chinese celebrities. To get accurate results, you should not upload a photo or video in which you are wearing sunglasses, heavy makeup, or anything else that covers your face.
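The idea of mapping a facial proportion to a BMI estimate could be sketched like this. The feature (a width-to-height ratio) and the linear coefficients are invented for illustration; the article only tells us the site uses proportions like the space above your eyes, and the actual model is a trained network, not a hand-written formula.

```python
# Purely illustrative mapping from a facial proportion to a BMI guess.
# Feature choice, slope, and intercept are all assumptions for this sketch.

def width_to_height_ratio(face_width: float, face_height: float) -> float:
    return face_width / face_height

def estimate_bmi(ratio: float) -> float:
    # Hypothetical linear rule: fuller faces (higher ratio) -> higher BMI.
    return 10.0 + 20.0 * ratio

ratio = width_to_height_ratio(14.0, 20.0)  # 0.7
print(round(estimate_bmi(ratio), 1))       # 24.0
```

Note how raising your eyebrows would increase the apparent face height, shrink the ratio, and lower the estimated BMI, which is exactly the trick Schep describes.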

The web app’s next estimate is life expectancy, which is driven largely by the estimated BMI. The algorithm doesn’t ask about important factors such as your lifestyle, exercise, and diet habits, so this estimate is hard to take at face value. However, Schep says in the walkthrough that insurance companies use similar models to price policies for customers because “it’s better than nothing”.
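A minimal sketch of how an estimated age and estimated BMI might be combined into a life-expectancy guess, in the actuarial spirit the article describes. The baseline and penalty numbers here are entirely made up; the point is that the estimate depends on only these two inputs, which is why ignoring lifestyle and diet makes it so rough.

```python
# Hedged sketch: life expectancy from estimated age and BMI only.
# The constants (80-year baseline, "healthy" BMI of 22, 0.5 penalty per
# BMI point) are invented for illustration.

def life_expectancy(age: float, bmi: float) -> float:
    baseline = 80.0
    # Penalize deviation from a "healthy" BMI, and never predict an
    # expectancy below the person's current estimated age.
    penalty = abs(bmi - 22.0) * 0.5
    return max(age, baseline - penalty)

print(life_expectancy(30.0, 26.0))  # 78.0
```

Any error in the BMI estimate (say, from raised eyebrows) flows straight into this number, which is one reason to treat the result as entertainment rather than prognosis.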

Lastly, the attractiveness web app shows you your unique “faceprint”, which Schep likens to a fingerprint. Schep says that programs like Clearview AI use this type of faceprint, a grid of grey, black, and white blocks, to match your photo to publicly available pictures and even find your social media profiles.
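Faceprint matching of the kind attributed to tools like Clearview AI typically works by comparing numeric face embeddings with a distance threshold. The sketch below assumes tiny three-number “prints” and a 0.6 threshold (a value commonly cited for 128-dimensional face embeddings, for example in face-api.js); real systems use learned, high-dimensional vectors.

```python
# Hedged sketch of faceprint matching: two prints belong to the same person
# if their embeddings are close enough. Vectors and threshold are assumptions.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(print_a, print_b, threshold=0.6):
    return euclidean(print_a, print_b) < threshold

probe   = [0.10, 0.80, 0.30]                 # faceprint from your photo
gallery = {"alice": [0.12, 0.79, 0.31],      # prints scraped from public photos
           "bob":   [0.90, 0.10, 0.50]}
matches = [name for name, fp in gallery.items() if same_person(probe, fp)]
print(matches)  # ['alice']
```

Scaled up to billions of scraped photos, this nearest-neighbor lookup is what lets a single snapshot lead back to a name and a social media profile.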


To wrap up the experience, Schep revealed that the algorithm had been watching us while we looked at the puppies shown before the assessment, deducing our emotions toward the dog and the assessment itself. That felt a little violating, because I, as a user, didn’t know I was already being watched. Schep also revealed that the site had been tracking my cursor movement and engagement throughout the assessment, and showed me a heat map of my movements.

The Crux of It All

But can we let artificial intelligence decide how we look or how we behave? AI can feel incredibly invasive when used without explicit permission, or when used against you in ways you didn’t know were possible. And, unfortunately, that describes a lot of the AI applications developed today. Schep adds that as “face recognition technology moves into our daily lives, it can create this subtle but pervasive feeling of being watched and judged all the time.” It can change human behavior for the worse, as you “might feel more pressure to behave ‘normally’, which for an algorithm means being more average.”

Because of these ethical and moral dilemmas, we need to protect our privacy, our unique faceprints, and our identities from being reduced to a measurement by an AI. The way Schep sees it: “You could say that privacy is a right to be imperfect.” Using AI for political gain or for law enforcement has long-lasting effects, and once people begin distrusting technology like facial recognition in larger numbers, that sentiment will be very difficult to reverse.
