Facebook Live suicides call attention to dark side of livestreaming

The same engagement and immediacy that make the feature so useful also bring a terrifying danger

Andrew Griffin
Wednesday 25 October 2017 08:50 BST
An iPhone streams a 'Facebook Live' live feed of the lobby at Trump Tower, November 29, 2016 in New York City

Facebook's Live video might be one of the site's most celebrated recent features. But it is also among its most troubling.

A suicide in Turkey is just the latest example to bring to the fore how dangerous, traumatic and important Facebook's livestreaming technology can be. Since it was introduced in 2015, Live has become one of the site's biggest features – and Facebook would like it to be even bigger – but also one of its most controversial.

That's because it is among the most direct and dangerous tools the site offers. The same engagement and immediacy that have led news organisations and public figures around the world to adopt the feature have a terrifying dark side – one that sees it increasingly used to document and even encourage harm and death.

Facebook says that it is still building the technology that will allow it to properly regulate live video. And even once that is built, the company might not opt to take down such videos, it said – indeed, doing so could actually put the people involved in more danger.

In May, after a number of high-profile deaths were streamed live on the service, Mark Zuckerberg posted on his personal Facebook account to make clear that the company would do more to try to stop them.

"Over the last few weeks, we've seen people hurting themselves and others on Facebook – either live or in video posted later," he wrote. "It's heartbreaking, and I've been reflecting on how we can do better for our community."

In that post, he announced that the company would add 3,000 more people to its "community operations" team, which already had 4,500. They will review live videos and other content that has been reported, looking not only for people at risk of hurting themselves but also for banned content such as hate speech and child exploitation.

The trouble is that there is simply too much content for any number of people to look at. Even with huge teams of staff helped by far greater numbers of users reporting videos, Facebook simply doesn't have time to look through videos in real time and pick out the ones that show people in danger.

Eventually, some of that work might be taken over by artificial intelligence. Mark Zuckerberg said in a call to investors that, because of how quickly Live video was growing, he expected the company would have to hire more people. Eventually, AI might take over that work, he said – but that will be a matter of years.

"Over time, the AI tools will get better," he said. "Right now, there are certain things that AI can do in terms of understanding text and understanding what's in a photo and what's in a video. That will get better over time. That will take a period of years though to really reach the quality level that we want.

"So for a while our strategy has been to continue building as good of tools as we can. Because no matter how many people we have on the team, we're never going to be able to look at everything, right? So that's going to be a big challenge."

Earlier this year, Facebook added new, less technical features to help people who are at risk of suicide, self-harm or other crises. They primarily exist to put people in contact with groups that can help them out – and the site has said that more technical approaches, like cutting off livestreams too early, can actually cause problems for the people involved.

If someone sees a friend on Facebook they're worried about, they can report the person and the content that worried them. Once that happens, a range of features is triggered: the person who made the report will see tips on how to help their friend, while the friend themselves will be given suggestions such as helplines and advice that can be useful at difficult times.

It introduced the tools with a range of partners, all more experienced in harm-prevention work. Those included Crisis Text Line, the National Eating Disorder Association and the National Suicide Prevention Lifeline.

Mr Zuckerberg said in May that those features were working – some of the time.

"This is important," he wrote. "Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself. In other cases, we weren't so fortunate."

The site said when it rolled those tools out that it needed to be careful about cutting off livestreams, since doing so might actually put the person at the centre of them at more risk. “Some might say we should cut off the livestream, but what we’ve learned is cutting off the stream too early could remove the opportunity for that person to receive help,” Facebook researcher Jennifer Guadagno told TechCrunch when the company was promoting those features.

Anyone faced with suicidal thoughts or in need of someone to talk to can access free support by calling the Samaritans on 116 123.
