
Black man uses passport photo as evidence AI is ‘racist’ in viral TikTok

‘Automation is never diversity-friendly,’ video’s caption reads

Chelsea Ritschel
New York
Wednesday 28 April 2021 08:58 BST

A TikTok user who educates people about the pervasiveness of racial bias has demonstrated how prejudice appears to be built into artificial intelligence.

Joris Lechêne, a model and artist from London, recently uploaded a TikTok explaining that he had tried to upload a passport photo, only for it to be rejected because the software could not accurately recognise him.

“Don’t you love it when you train people to spot racist biases for a living and then it happens to you?” Lechêne, who goes by the username @joris_explains, began the video. “In the process of applying for a British passport, I had to upload a photo, so I followed every guideline to a T and submitted this melanated hotness.”

As he spoke, Lechêne, who is Black, showed viewers the photo he had submitted, which sees him wearing a black shirt and standing in front of a grey wall.

“Lo and behold, that photo was rejected because the artificial intelligence software wasn’t designed with people of my phenotype in mind,” Lechêne continued. “It tends to get very confused by hairlines that don’t fall along the face and somehow mistakes it for the background, and it has a habit of thinking that people like me keep our mouths open.”

According to the screenshot of the rejection shared on the screen, Lechêne’s photo “doesn’t meet all the rules and is unlikely to be suitable for a new passport,” with the government website suggesting that his mouth “may be open” and that it was “difficult” to tell the image and the backdrop apart.

In the video, which has been viewed more than 156,000 times, Lechêne explained that he knew about the racial bias before being subjected to it because he uses similar examples in the prejudice training that he delivers.

According to Lechêne, his own experience is a reminder that current software is not without prejudice, with the model stating: “This is just a reminder that, if you believe that automation and artificial intelligence can help us build a society without biases, you are terribly mistaken.”

Rather, a more equitable society is only achievable through “political actions at every level of society,” Lechêne continued, adding: “because robots are just as racist as society is.”

This is not the first time racism in AI has been raised; the topic has come up frequently as more of the world becomes digitised and further examples have come to light.

Previously, the Google Photos app was found to be labelling Black people as gorillas, according to The New York Times, while an Amazon facial recognition service had trouble “identifying the sex of female and darker-skinned faces”.

“The service mistook women for men 19 per cent of the time and misidentified darker-skinned women for men 31 per cent of the time. For lighter-skinned males, the error rate was zero,” The Times states.

Such failures stem from societal biases that become ingrained in algorithms and artificial intelligence systems, often through a lack of diversity in the data used to train them.

“Lack of diversity in the data you work with, that’s exactly what we’re talking about,” Lechêne explained in a follow-up video. “Society is heavily skewed towards whiteness and that creates an unequal system. And that unequal system is carried through the algorithm.”
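As a loose illustration of that point, here is a minimal, hypothetical Python sketch. Every number in it is invented for the example and is not drawn from any real passport system: a toy photo checker fits a single acceptance threshold to training data dominated by one group, and the under-represented group ends up rejected far more often.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "photo quality" scores for two groups. Group A dominates
# the training data; group B's scores sit lower only because the scoring
# was tuned to group A's features (a stand-in for hairline and background
# contrast). All distributions here are invented.
train_a = rng.normal(loc=1.0, scale=0.5, size=950)  # 95% of training data
train_b = rng.normal(loc=0.2, scale=0.5, size=50)   # 5% of training data

# The checker picks the threshold that accepts 95 per cent of its
# (A-dominated) training data: the 5th percentile of the pooled scores.
threshold = np.percentile(np.concatenate([train_a, train_b]), 5)

# At test time, both groups submit equally valid photos.
test_a = rng.normal(loc=1.0, scale=0.5, size=10_000)
test_b = rng.normal(loc=0.2, scale=0.5, size=10_000)

print(f"rejection rate, group A: {(test_a < threshold).mean():.1%}")
print(f"rejection rate, group B: {(test_b < threshold).mean():.1%}")

Under these made-up numbers the checker rejects roughly 3 per cent of group A's photos but around 40 per cent of group B's, even though both sets of photos are equally valid; the disparity comes entirely from the skew in the training data.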


Lechêne’s video prompted numerous dismayed comments in support of him, with one person writing: “We need more POC in STEM so they can write algorithms that aren’t biased, especially as our society automates processes like these more.”
