Twitter will study ‘unintentional harms’ of its algorithm

Initiative aims to ensure “equity and fairness” of algorithm outcomes, says company

Vishwam Sankaran
Thursday 15 April 2021 13:39 BST
Twitter logo displayed on laptop screen (AFP via Getty Images)

Twitter has introduced a new company-wide effort called the “Responsible Machine Learning Initiative” to study whether its algorithms cause unintentional harm.

According to the microblogging site, the initiative seeks to ensure “equity and fairness of outcomes” when the platform uses machine learning to make its decisions, a move that comes as social media platforms continue to face criticism over racial and gender bias amplified by their algorithms.

The company said it also seeks to enable better transparency about the platform’s decisions and how it arrives at them, while providing better agency and choice of algorithms to its users.

Twitter noted that its machine learning algorithms can impact hundreds of millions of Tweets per day, adding that “sometimes, the way a system was designed to help could start to behave differently than was intended.”

It said the aim of the new initiative is to study these subtle changes and use the knowledge to build a better platform.

In the upcoming months, the company’s ML Ethics, Transparency and Accountability (META) team plans to study the gender and racial bias in its image cropping algorithm.

This comes after several users pointed out last year that photos automatically cropped in people’s timelines appeared to favour the faces of white people over those of people with darker skin tones.

The team is also slated to conduct an analysis of content recommendations for users from different political ideologies across seven countries.

Twitter said its researchers would also perform a fairness analysis of the Home timeline recommendations across racial subgroups.

“The META team works to study how our systems work and uses those findings to improve the experience people have on Twitter,” the company noted.

It added that its researchers are also building explainable ML solutions that can help users better understand the platform’s algorithms, what informs them, and how they impact the Twitter feed.

According to the microblogging platform, the findings from these studies may lead to changes at Twitter, such as removing problematic algorithms or building new standards into its design policies when an algorithm has an outsized impact on particular communities.
