Terms and conditions were once the most boring thing on the internet. That is changing – dramatically
We’d better start paying attention, since it’s not just our privacy but our very sense of ourselves that’s at stake when we click ‘agree’, writes Andrew Griffin

For all I know, I might have signed my soul away 10 times in the last week. Not with the dastardly élan or devilish incentives of a Faust, but with the weary resignation of someone who has simply spent too long on the internet. Like most people making their way around the web, I click “agree” on things I’ve half-read, never giving them a second thought, or even really a first one.
This is the nature of being online and using technology today: almost anything new or exciting requires ticking a box to say you agree to the terms and conditions. And each time you face an unfortunate choice: either read what might amount to hours of legalese before you agree, or press the button and hope the terms contain nothing too dangerous.
For the most part, that has been a good bet. Terms and conditions are rarely enforced in practice, anything especially egregious might be unenforceable anyway, and so there have been few cases where people “agreed” to something they came to obviously and definitively regret. (That doesn’t mean companies haven’t tried: even 10 years ago, a cybersecurity firm was able to trick people into giving up their eldest child in return for free wifi.)
But that is now changing, because of a revolution in how we think about what we give up when we are on the internet. Artificial intelligence has radically transformed the bargain we strike when we sign up for or use a service: anything you provide might be stored forever, and used to train AI systems that could become a reflection of you without you ever knowing it.
That became clear last week, when file transfer platform WeTransfer added new clauses to its terms and conditions that allowed it to use people’s data to train its artificial intelligence models.
There was immediate and widespread uproar: those files were private and often very important, and having them used to train AI systems could be not just ethically concerning but practically dangerous. (It was the rare kind of outcry that makes it to people’s Instagram stories and neighbourhood WhatsApp groups.)
Of course, there have been concerns like this before. Through the 2010s, the primary worry was privacy, and scandals at companies such as Facebook, along with high-profile hacks, made clear that the worry was rational and important. But over time we got used to it, and many people adjusted accordingly: some compromise with privacy was inevitable if the internet was to be powered by ads, and so people either changed what they shared or stopped using those platforms.
Artificial intelligence ushers in a whole new era for those kinds of decisions. And part of what makes it feel more significant is that we have very little sense of what we are really giving up.
The Faustian bargain of Web 2.0 might have been damaging, but it was at least reasonably transparent. We knew what Mephistopheles, or Meta, was offering us in return for our data. The new world of AI is not just potentially dangerous but mysterious too.
Last week’s outcry, however, does show just how much power users have: WeTransfer rushed to adjust the policy, though it blamed “confusion” among customers for the problem.
In fact, it might well be the opposite of confusion: artificial intelligence might finally have given us enough clarity to look properly at what we are really signing up to.


