Eli Pariser: The net's bubble burster

Eli Pariser thinks that personalisation of the web will give us a warped view of world events. Clint Witchalls asks him if Google's tailored searches are really so bad

Wednesday 22 June 2011 00:00 BST

Whether it's from your Wi-Fi connection, your web-search history or the kind of browser you're on, Google tailors its search results to your personal interests. In his new book The Filter Bubble: What the Internet is Hiding from You, Eli Pariser, a former MoveOn.org activist, argues that this kind of personalisation by web giants like Facebook, Google and Microsoft will have a hugely detrimental effect on the way we use the internet. But is that really the case?

Clint Witchalls: You say that the era of web personalisation began on 4 December 2009. Can you tell me what web personalisation is and why 4 December is a significant date?

Eli Pariser: That was the date that Google abandoned any notion of a standard or objective Google. On 4 December, they announced that even if you were logged out of Google services, you were going to get your own personalised search results. Two people Googling the same thing might get very different results. Because Google is a central tool for navigating the internet these days, it seems to me that that was the tipping point.
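
To make that concrete, here is a minimal sketch, in Python, of how mixing personal signals into a relevance score makes two people's results diverge. The function, weights and sample data are all hypothetical, not Google's actual algorithm; the BP scenario loosely echoes an example Pariser has used elsewhere.

```python
# Hypothetical illustration only -- not Google's actual algorithm.
def personalised_rank(results, user_profile, weight=0.5):
    """Re-rank results by mixing a query-only relevance score with the
    user's affinity for each result's topics."""
    def score(result):
        title, base_relevance, topics = result
        affinity = sum(user_profile.get(topic, 0.0) for topic in topics)
        return (1 - weight) * base_relevance + weight * affinity
    return [title for title, *_ in sorted(results, key=score, reverse=True)]

# (title, query-only relevance, topics)
results = [
    ("BP share price and dividend news", 0.9, ["finance"]),
    ("BP oil spill: clean-up latest", 0.8, ["environment"]),
]

investor = {"finance": 1.0}      # clicks mostly on finance stories
activist = {"environment": 1.0}  # clicks mostly on environment stories

print(personalised_rank(results, investor))  # share-price story first
print(personalised_rank(results, activist))  # oil-spill story first
```

With an identical query and identical candidate results, the investor profile surfaces the share-price story first while the activist profile surfaces the spill story first.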

Why is it now time to be worried?

The technology that was used to target ads is now being used to target content. It's one thing being shown products you might be interested in, but when that's shaping what information you see, it has a much broader effect. My main concerns are that it gives individuals a distorted view of the world because they don't know this filtering is happening or on what basis it's happening, and therefore they don't know what they're not seeing. It's a problem, more broadly, for democracy because it's like auto-propaganda. It indoctrinates us with our own beliefs and makes it less likely that we'll run into opposing viewpoints or ideas.

You identify Google and Facebook as being the chief culprits. Are there other firms that use personalisation or have plans to do so?

Nearly every major company is racing to incorporate this into their software. Bing, Microsoft's search engine, just signed a deal integrating Facebook into its results, so now Bing results are personalised based on your Facebook data. Yahoo News shows different people different news. Even more venerable news institutions such as The New York Times are investing in start-ups that are taking this approach. In a way, it's hard to argue why anyone wouldn't – from an economic standpoint – embrace this technology because it produces more page views and more ad views.

Do you think that all websites, including the likes of Wikipedia, Twitter and the BBC, will embrace content personalisation, or is there a limit to what can be personalised?

It's true that if you carefully seek out sources that are not personalised, like Wikipedia, you can still do that [have content that isn't personalised], but the trend is very rapid. Since I wrote the book, Twitter has started integrating a personalised recommendation service into what it does, so now when you search for tweets, you are not getting the top tweets, you are getting the tweets recommended for you.

Terabytes of data are added to the web each day. Without filters, how are we meant to make sense of that much data?

We need some help from the algorithms. It's just a question of how they work and whether they have our best interests at heart. I would argue that right now, Facebook is more interested in being compulsive than in being informative or interesting or useful. They're not very good at providing us with what we really need to know.

What do you hope to achieve by making people encounter opposing views?

It's hard to change people's opinions, but one of the things about encountering countervailing arguments is that it reminds us of the limits of our understanding. I think it's an important realisation for us all to have from time to time, that we may not be right about everything or we may not understand these issues as well as we think we do.

Isn't it paternalistic to tell people "you should be reading about the war in Afghanistan not a piece about Justin Bieber", even if their click history shows that that's what they're interested in?

I think the paternalism argument is a canard because, in fact, these companies are already making value judgments about what information to show you. It's not as if it's neutral and I'm arguing it should not be neutral. Every time you're providing a ranked set of information in response to something like [Barack] Obama's birth certificate, you're making value judgments about what is valuable and true and useful. I think paternalism is inherent in any system that is controlling what you see and what you don't. It's impossible to do that without making decisions on your behalf. So then you get to: what does it mean if you give people what they want? Because we have a lot of conflicting wants. We want to eat junk food and we want to be healthy. The best media is very canny about balancing those things... personalised media often isn't.

Another downside to personalisation that you talk about is advertising. But surely targeted ads – even though they are sometimes a bit wide of the mark – are better than non-targeted ads?

The targeting of ads is less of a concern for me than the targeting of information. But, in a way, it's a similar question. To what degree are we, the people who are providing the data that drives all these things, in control of the experience that results? In the most pernicious cases of targeted ads, you have some 14-year-old girl who looks up "obese" at Dictionary.com and is suddenly hounded all over the internet: "Are you feeling fat? Try our crazy weight-loss programme." Obviously, no one would opt into that.

Isn't the solution quite simple: use an anonymising network such as Tor, so that these companies can't trace your searches?

I don't think so. It's a partial solution, but (a) these companies are getting very good at targeting based on actual machine fingerprinting, and (b) why should we have to make an either/or choice between the benefits of using these tools and having some measure of transparency and control? It's a bad trade-off. The technology certainly exists to find some middle ground where people do have some control, where they can see when this is happening and how their information is collected and shared, and where they can make their own decisions about exactly how to use it.
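
For readers unfamiliar with the term, "machine fingerprinting" means recognising a device from the attributes its browser reveals on every request, with no cookie or login involved. A minimal sketch, assuming an invented attribute set; none of these names reflect any real company's method:

```python
# Hypothetical sketch of device recognition without cookies; the attribute
# names and values are invented for illustration.
import hashlib

def machine_fingerprint(attributes):
    """Collapse a browser's stable, passively revealed attributes into a
    single identifier: same machine, same hash, no cookie required."""
    canonical = "|".join(f"{key}={value}" for key, value in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:5.0) ...",
    "screen": "1366x768",
    "timezone": "Europe/London",
    "fonts": "Arial,Georgia,Verdana",
}
print(machine_fingerprint(visitor))  # stable across visits and logouts
```

Because the inputs change rarely, the identifier survives cleared cookies and logged-out sessions, which is why a proxy alone is only a partial defence.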

How do you envisage the "middle ground" working?

You could have at the top of Google, or Google News, a slider bar that goes from "results from people who are very much like you" to "results from people who are nothing like you". You could slide and adjust on a case-by-case basis. On Facebook, I think there's a big challenge in the fact that Facebook has a one-dimensional view of the world and the dimension is what people "like". Like has a very particular valence – it's a positive word. You can easily click "like" on "I ran a marathon" but it's hard to click like on "civil war breaks out in the Congo". So some types of information get propagated on Facebook and other types of information don't. Facebook could easily correct that. They could have an "important" button. They could mix in things that people like and things that people find important.
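
The slider is easy to picture as a single mixing parameter. A minimal sketch, assuming each candidate story already carries one score from similar users and one from dissimilar users; all names and numbers here are hypothetical:

```python
# Hypothetical sketch of Pariser's slider; every name and number is invented.
def slider_rank(stories, slider=0.0):
    """slider=0.0 -> 'people very much like you';
    slider=1.0 -> 'people nothing like you'."""
    def score(story):
        title, like_you, unlike_you = story
        return (1 - slider) * like_you + slider * unlike_you
    return [title for title, *_ in sorted(stories, key=score, reverse=True)]

# (title, score among similar users, score among dissimilar users)
stories = [
    ("Story your friends are sharing", 0.9, 0.2),
    ("Story from outside your circle", 0.3, 0.8),
]

print(slider_rank(stories, slider=0.0))  # familiar viewpoints first
print(slider_rank(stories, slider=1.0))  # unfamiliar viewpoints first
```

At one end the familiar story ranks first; at the other, the unfamiliar one does. The point in between is chosen by the reader rather than the platform, which is the control Pariser is asking for.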

In your view, what's the worst-case scenario if the filter bubble continues unchallenged?

It would be the Brave New World scenario in which we are increasingly captured in these bubbles full of very entertaining, compulsive content and lose sight of the big, common problems we all face. Things like climate change, global poverty or Aids are going to come back to bite us because we're too busy watching cat videos recommended by Facebook.

Do you feel confident that people will listen to your message and put pressure on companies to be more open about how they personalise?

I haven't run into many people who weren't pretty shocked that Google search results were personalised and who, once they found that out, wanted a way out, a way of changing it or being in control of it. There's some history here. There was a first generation of these same kinds of technologies – the term at the time was "intelligent agents" – like the Microsoft paperclip. The thing was, they were totally horrible. They were computer programs that just didn't understand who you were. They had a very poor ability to wrestle with the nuances of the world. So that flamed out pretty quickly. Now we have a lot of the same technologies, but it's hidden, it's not as obvious.

'The Filter Bubble: What the Internet is Hiding from You' by Eli Pariser is published by Viking tomorrow (£12.99). To order a copy for the special price of £11.69 (free P&P) call Independent Books Direct on 08430 600 030, or visit www.independentbooksdirect.co.uk
