Instagram’s toothless updates won’t stop cyberbullying – tech giants are shirking their social responsibilities

Vulnerable teens who have an unhealthy relationship with social media cannot be expected to police themselves and each other

Biba Kang
Tuesday 09 July 2019 17:52

How do you monitor social media when it is so popular, complex and fast-mutating? It’s a serious and complicated question – and the tragic case of 14-year-old Molly Russell, who took her life, reminded us that we must find the answers.

In an effort to respond, and tackle the cyber abuse that affects so many teenagers, the picture-sharing app Instagram has now announced two new features. The first is an anti-bullying tool, which will use artificial intelligence to recognise when new posts are similar to those that have been reported as inappropriate. So far, so good. But once the nefarious or abusive language has been identified, the perpetrator is not served with a reminder of the social and legal ramifications of online abuse. No, they’re merely asked a single, feeble question.

In an example scenario, Instagram shows a user typing “you are so ugly and stupid” (a catch-all insult). They are met with a notice asking, “Are you sure you want to post this? Learn more.”

This is designed to prompt users into reassessing what they post online. It is definitely a positive step, but the placid question feels like a weak response to behaviour that, as we have seen, can lead to suicide.

The second tool, which will be implemented at some undisclosed point in the future, is called “Restrict”. Users who are reluctant to block accounts completely (for teenagers especially, this action can have ramifications in real life) can instead filter content from others, meaning that the person who created a post will be able to read and approve comments. Only after the recipient grants approval will those comments become public, but restricted users will not know that they have been restricted.

Both new measures shift the responsibilities of moderation onto young social media users, expecting them to curate content – and handle abuse with detachment and maturity. Vulnerable teens who may already have an unhealthy relationship with social media cannot be expected to take the issue of cyberbullying into their own hands.

The new features are yet another indicator that tech giants, while keen to pay lip-service to the issues children face online, simply aren’t taking them seriously.

A few weeks ago, Facebook announced plans to fully encrypt its services. The company, which owns Instagram, intends to use end-to-end encryption on its Facebook Messenger service, a move which has been heavily criticised by the NSPCC. “It places privacy and secrecy ahead of accountability and transparency,” argued the charity’s chief executive, Peter Wanless. “It’s really disappointing that the reaction to the NSPCC’s and young people’s call for a safer internet is to make it a lot more secret and more dangerous for them.”

So Instagram is not alone in failing to tackle the big issues comprehensively, and it is difficult to know where the boundaries of free expression should begin and end. But the app’s toothless updates do nothing to counter the online culture that fosters and feeds teenage insecurity. Everything from the use of “face filters” to the practice of targeting users with deliberately provocative, aspirational images creates a toxic landscape that leaves young people in a very vulnerable position.

The political, social and economic power of the social media tech giants is almost unfathomable. Their level of influence is becoming dystopian. They must be reminded that no company, however large and however globally influential, is beyond accountability.

The issue of children’s mental health is a battle worth fighting, especially given that tech companies have a known history of putting online safety too low down their list of priorities. How the tools of social media are employed is, of course, down to the user – but when those users are young and socially inexperienced, they cannot be expected to police themselves. The organisations that design these tools mustn’t be allowed to shirk their social responsibilities; they have the time and the resources to solve the problems they have created, but to do so they’ll need some better ideas than this.
