Isis militants are digital natives, adept at using social media to inspire their supporters and to provoke fear and dismay among their opponents.
They have left tech companies scrambling in their wake, forced to develop policies and protocols in response to the extremists' rapidly evolving methods.
Due to security concerns and a desire to maintain their carefully cultivated image as bastions of free speech and open debate, companies like Twitter have tended not to reveal the details of their anti-terror protocols.
As such, the website's announcement that it has suspended 125,000 accounts with alleged links to Isis provides a rare insight into how the world's top tech companies are battling the world's most effective propaganda machine.
How can a computer detect terrorist propaganda?
Algorithms are not yet sophisticated enough to accurately identify hate speech or terrorist propaganda. With no clear social consensus on what constitutes terrorism, a computer can hardly be expected to separate heartfelt political fervour from illegal exhortations to violence.
Interestingly, Twitter states it is "leverag[ing] proprietary spam-fighting tools" to combat Isis. The extremists' tech-savvy followers have been known to set up automated accounts, blasting out huge volumes of extremist rhetoric into cyberspace. These accounts can be caught using programs normally deployed to take down spam adverts and other online clutter.
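Twitter has not disclosed how its spam-fighting tools work, but the general idea of catching automated accounts can be sketched with simple behavioural heuristics. The thresholds, field names and rules below are illustrative assumptions, not Twitter's actual system:

```python
# Hypothetical sketch of bot-style heuristics an anti-spam pipeline might use.
# All thresholds and field names here are illustrative assumptions.

def looks_automated(posts):
    """Flag an account whose posting pattern resembles a spam bot.

    posts: list of dicts with "text" and "timestamp" (seconds) keys.
    """
    if len(posts) < 10:
        return False  # too little data to judge

    # Heuristic 1: the same content blasted out over and over.
    unique_texts = {p["text"] for p in posts}
    duplication = 1 - len(unique_texts) / len(posts)

    # Heuristic 2: inhumanly regular posting intervals.
    times = sorted(p["timestamp"] for p in posts)
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    variance = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)
    too_regular = mean_gap < 60 and variance < 25

    return duplication > 0.8 or too_regular
```

An account tweeting the same slogan every thirty seconds would trip both heuristics; a human posting varied messages at irregular hours would trip neither. Real systems combine many more signals, but the principle is the same: machines are easier to spot than people.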
But automated accounts are far easier to identify than actual terrorist cells. As the UK Parliament's Intelligence and Security Committee noted in 2014, it is much simpler to train a computer to flag up child pornography than it is to set it up to scrape for terrorist chatter online.
Twitter has therefore also beefed up its teams that review reports, "reducing [their] response time significantly".
Sometimes, people have to do it
On the most mundane level, companies like Twitter employ droves of American college students and low-paid workers in the Philippines to trawl through any content which is flagged as graphic. Burnout is high among these frontline employees, who are paid less than $500 a month to sit in front of screens filled with flickering images of gore, child pornography and Isis beheadings.
More senior, specialist teams in the US and Ireland monitor accounts which have been flagged as disseminating terrorist material. However, they are often little better equipped than computer algorithms when judging an account's legality, forced to "make challenging judgement calls based on very limited information and guidance," in the company's own words.
And sometimes, they need your help
All of these teams rely on Twitter's own users to provide them with raw material to work with, by reporting potentially harmful accounts, tweets and images. In this sense, the company is effectively crowd-sourcing a social algorithm to determine what is extreme hate speech and what is legitimate political discourse.
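The mechanics of that crowd-sourced triage can be imagined as a simple weighted queue: user reports accumulate against an account, and once they cross a threshold the account is escalated to a human reviewer. The function, the trust weighting and the threshold below are all hypothetical, offered only to illustrate the idea:

```python
# Hypothetical sketch of crowd-sourced triage: accumulate weighted user
# reports and escalate accounts for human review past a threshold.
# Names, weights and the threshold are illustrative assumptions.
from collections import defaultdict

REVIEW_THRESHOLD = 5.0  # assumed cut-off for escalation

def triage(reports):
    """reports: list of (account_id, reporter_trust) pairs.

    Reports from historically reliable reporters count for more,
    which blunts coordinated false-flagging campaigns.
    Returns a sorted list of account ids queued for human review.
    """
    scores = defaultdict(float)
    for account, trust in reports:
        scores[account] += trust
    return sorted(a for a, s in scores.items() if s >= REVIEW_THRESHOLD)
```

The design choice worth noting is the trust weighting: a raw report count would let any organised group bury an opponent's account, whereas weighting by reporter reliability keeps the final judgement with the human review teams the article describes.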
In 2015, the US killed British-born Isis hacker Junaid Hussain in a drone strike, partially basing their decision to pull the trigger on his Twitter activity. In 2014, then-CEO of Twitter Dick Costolo received death threats after removing a clutch of Isis-linked accounts.
Clearly, the stakes are high. And with Twitter leaning on the help of re-purposed anti-spam software and referrals from concerned members of the public, it seems to be Isis who have the upper hand.