During the riots of August 2011, I sat in my flat in Tooting, south London, watching one version of the news unfold on the television, and an even more startling one unfold on social media.
The sheer volume of incidents being reported and shared online gave the distinct impression that our inner cities were being systematically destroyed. One tweet described how the Tooting branch of Primark had been set on fire and was currently ablaze; I looked at it in amazement, wondering where on earth my neighbours and I would now source our cheap undergarments.
But the next morning I got up early, went for a walk and discovered that it was fine. Not burned to the ground. Not even singed around the edges. It was made up.
It won't be news to anyone that the internet is riddled with untruths and inaccuracies, or that our ability to distinguish between fact and fiction seems to be wilting rapidly; we sway uneasily between outright gullibility, believing anything that's served up with sufficient gravitas, and a refusal to believe anything at all – a path that inevitably leads down to the murky waters of conspiracy theory.
A few weeks ago, pictures circulated online of the Sphinx and Egyptian pyramids covered in snow – a state of affairs that many people seemed perfectly happy to believe until it was pointed out that the picture actually showed a model of the Sphinx at a Japanese theme park. Which is all pretty frivolous and inconsequential, but when these kinds of rumours have the capacity to affect our wellbeing – misinformation about contagious diseases or natural disasters, say – it becomes more of a problem.
A project called Pheme, announced a few days ago, aims to use computer power to distinguish social media fact from social media fiction. A combined effort of five universities, led by Dr Kalina Bontcheva of the University of Sheffield's engineering department, it will attempt to analyse online rumour to determine its source and reliability, before classifying it as speculation, controversy, misinformation or malicious disinformation.
Storyful, a project launched last year by Irish journalist Mark Little, does a similar thing via a process of crowdsourcing, but Bontcheva believes that a lot of that donkey work can be automated, giving Pheme's users an unbiased, unsullied overview of a developing story.
So, in the same way as we might visit that scourge of the urban myth, snopes.com, to determine whether a forwarded email contains the tiniest shred of truth, the Pheme dashboard might become a source of truth in an increasingly chaotic social media environment. But this is no "lie detector", as some have breathlessly reported; such a project would only ever be as reliable as the sources it deems trustworthy, and people can make mistakes. (It's only a few months since the Red Cross accidentally posted a terrifying map that massively over-estimated the size of Typhoon Haiyan because they'd forgotten to scale the image.)
And it also raises the question of whether we care any more. Can we actually be bothered to check and verify before spreading rumours? After all, we already have that option available to us by using our own brains, and often we don't bother. We seem to prefer participating in a deafening chorus of "OMG", wallowing in shock and awe, rather than sitting down quietly for a sober consideration of what the truth might actually be.