Twitter releases new rules on banned content and behaviour as part of plan to get tougher on abuse

The new rules come as the company is criticised for not doing enough to tackle the spread of extremists including Isis on the platform

Twitter has clarified what it defines as abusive behaviour, in response to criticism that it is not doing enough to stop the threat of extremism on its site.

The network has released new Twitter Rules that are intended to stop “abusive behaviour and hateful conduct”. The new rules appear pointed at extremist groups like Isis but also at abuse by individuals on the service.

“The updated language emphasizes that Twitter will not tolerate behavior intended to harass, intimidate, or use fear to silence another user’s voice,” the company’s director of trust and safety wrote in a blog post announcing the changes. “As always, we embrace and encourage diverse opinions and beliefs – but we will continue to take action on accounts that cross the line into abuse.”

The new rules state that users “may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease”.

Previously, Twitter had only a vaguer rule against threatening or promoting violence against others. As a result, the site often declined to remove reported posts that contained no direct threat of real violence, even when they appeared to be a targeted attack on a person.

The new rules also include ways of responding to posts that suggest a person may be looking to hurt themselves. Twitter will now be able to contact the person and put them in touch with mental health practitioners.

Alongside the new rules, the company said it is taking extra steps to ban people who break them, though it did not change the banning or reporting process itself. Those steps include previously introduced measures such as verifying accounts by email and phone number, and locking down accounts that break the rules.

“These measures curb abusive behaviour by helping the community understand what is acceptable on our platform,” the company claimed in its blog.