
Twitter experiments with asking users to ‘revise reply’ if they use bad language

App has often been criticised for not taking a stronger approach against sexist and racist users

Adam Smith
Wednesday 06 May 2020 10:06 BST
The Twitter application is seen on a phone screen August 3, 2017 (REUTERS/Thomas White)

Twitter is testing a new feature designed to limit "harmful" language on the platform by asking users to reword their replies before posting.

In a tweet, the site said: "When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful."

Twitter gave no indication of what language it considers ‘harmful’, so users are left to speculate over what words will or will not be acceptable – whether that’s simply foul language such as swearing or hateful speech such as sexist or racial slurs.

A Reuters report suggests that such language will be compared with other posts that have been reported.

Twitter has a hate speech policy which reprimands users who “promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”

However, the company has repeatedly been criticised for not taking enough action to protect its users, having been described as a “toxic place” especially for women and people of colour.

For the moment, Twitter’s “experiment” will only happen on the iOS version of its app. It is unclear how many users Twitter is testing the functionality on, or whether we can expect to see this change expanded to all 330 million of the social media site’s monthly active users. Twitter declined to comment.

“We're trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter's global head of site policy for trust and safety, told Reuters.

While users could potentially probe the feature to work out which words are or are not flagged – and thereby give malicious users an insight into what language Twitter finds offensive – Saligram said the rollout was targeted at people who break the rules occasionally rather than repeat offenders.

The test reportedly started on Tuesday and will run globally for "a few weeks", but only on tweets written in English.

This is not the only change that Twitter has been testing recently. The company has demonstrated a new way of showing quote-tweets alongside likes and retweets, and a new way to read threads on iOS and in its web app.
