Facebook's decision to remove the accounts of far-right political group Britain First and its leaders, Paul Golding and Jayda Fransen, is the latest instance of social media companies cracking down on hate speech after initially expressing a reluctance to engage in censorship.
Facebook stressed that it remains an "open platform for all ideas and political speech" but said: "There are times though when legitimate political speech crosses the line and becomes hate speech designed to stir up hatred against groups in our society."
It added that content posted on the Britain First Facebook page and those of Golding and Fransen had "repeatedly broken our community standards".
It said: "We recently gave the administrators of the pages a written final warning, and they have continued to post content that violates our community standards. As a result, in accordance with our policies, we have now removed the official Britain First Facebook page and the pages of the two leaders with immediate effect."
Pressure on social media sites to address racist, Islamophobic and homophobic posts has increased in recent years, with the election of Donald Trump as US President raising questions about "fake news" and the vulnerability of social news feeds to manipulation and propaganda.
Facebook in particular has been criticised for taking too long to remove Britain First despite the group's posts being linked to the murder of MP Jo Cox in 2016 and last summer's Finsbury Park terror attack.
Britain First's website is still online and it retains a YouTube channel with 60,000 subscribers but no content, yet the loss of its Facebook presence drastically reduces the group's ability to spread its message and communicate with sympathisers.
Twitter last year removed blue verification ticks awarded to a number of prominent neo-Nazis and white supremacists in the US, such as Richard Spencer, who last month complained the site was quietly "purging" his account of followers to limit his influence.
Twitter CEO Jack Dorsey, meanwhile, acknowledged earlier this month that bigotry expressed on the platform has “real-world negative consequences” and needs to be addressed.
“We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns and increasingly divisive echo chambers,” he said. “We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough.”
Sadiq Khan, the mayor of London, also reminded leading tech companies that they are "not above the law" and have a responsibility to protect their users from inappropriate or dangerous material.
YouTube faced renewed political pressure to clamp down on inflammatory content this week, with the parliamentary Home Affairs Committee on Wednesday labelling the site "a platform for extremism" over its failure to take down an offensive video by neo-Nazi organisation National Action.