Audience participation: Films and crowdsourcing

Wednesday 30 September 2009 00:00 BST

Recommending films to film lovers ought to be a straightforward task. You liked Little Miss Sunshine; why not try Juno? You liked Spider-Man 2; why not try Spider-Man 3? Yet in a world where we select, rent or download a massive and growing number of our movies online, predicting a customer's tastes is big business. Big enough for Netflix, one of America's most successful internet movie rental companies, to offer a million dollars to anyone who could improve significantly on its recommendation software, Cinematch, which suggests films to users based on their previous preferences.

The Netflix Prize was launched in 2006 and finally won last week, when the company awarded the $1m jackpot to a hybrid team of tech experts calling itself BellKor's Pragmatic Chaos, comprising seven statisticians, machine-learning experts and software engineers from the US, Canada, Austria and Israel. The team designed a series of algorithms that enhanced the accuracy of Cinematch's recommendations to customers by more than 10 per cent.

Though BellKor led the field for much of the contest, the team was almost beaten to the prize at the eleventh hour by a global coalition of 30 competitors calling itself Ensemble. BellKor, too, was made up of former rivals who'd decided to pool their resources: KorBell, a research team from telecoms company AT&T, merged first with the Austrian team Big Chaos, and then with Pragmatic Theory, from Canada.

The Netflix Prize, which attracted more than 50,000 teams from 186 countries, has been celebrated by observers as a triumph of "crowdsourcing" – using a volunteer base of interactive users to solve a problem. The annual salaries required to retain a fraction of the formidable brains in the contest would amount to many millions; other companies are doubtless considering similar initiatives as a relatively cheap and easy method for improving services. So pleased is Netflix with the results that it has launched another competition, asking teams to develop software to guess a customer's tastes from their demographic profile: age, gender, and postcode.

Founded in California in 1997, Netflix boasts around 10 million customers. Cinematch, software allowing customers to give each of their DVD rentals a star rating between one and five, was added to the service in 2000. In October 2006, the company released a sample of 100 million movie ratings from their users, with personal information stripped from the profiles. The challenge for contestants was to predict the users' future movie preferences; the predictions were then matched against how the users later rated the recommended movies. To win the million-dollar prize, teams would have to improve on Cinematch's accuracy by at least 10 per cent.
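
Accuracy in the contest was measured as root-mean-square error (RMSE) between predicted and actual star ratings. As a rough sketch of the kind of baseline predictor the contestants set out to beat (not Netflix's own Cinematch, and with toy data standing in for the 100 million ratings), one might guess a rating as the overall average, adjusted by how far each user and each film tends to sit from it:

```python
# Illustrative sketch only: a simple baseline predictor, not Netflix's
# Cinematch. A rating is guessed as the overall average plus how far the
# user and the film each tend to sit from that average.
from collections import defaultdict
from math import sqrt

# Toy (user, film, stars) triples standing in for the 100 million ratings.
train = [("ann", "Juno", 5), ("ann", "Spider-Man 3", 2),
         ("bob", "Juno", 4), ("bob", "Spider-Man 3", 3),
         ("cal", "Juno", 3), ("cal", "Little Miss Sunshine", 5)]
test = [("ann", "Little Miss Sunshine", 5), ("bob", "Little Miss Sunshine", 4)]

mean = sum(r for _, _, r in train) / len(train)

def offsets(column):
    """Average distance from the overall mean, per user (0) or film (1)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in train:
        totals[row[column]] += row[2] - mean
        counts[row[column]] += 1
    return {key: totals[key] / counts[key] for key in totals}

user_bias, film_bias = offsets(0), offsets(1)

def predict(user, film):
    return mean + user_bias.get(user, 0.0) + film_bias.get(film, 0.0)

# RMSE, the contest's yardstick: the winners had to beat Cinematch's
# score by at least 10 per cent.
rmse = sqrt(sum((predict(u, f) - r) ** 2 for u, f, r in test) / len(test))
print(f"baseline RMSE on held-out ratings: {rmse:.3f}")
```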

Within a fortnight, more than 150 teams had joined the race, three of them producing better results than Netflix's in-house developers, who had been working on the problem for five years. Quickly, the teams began to share ideas, with the then third-placed competitor posting his methods online as early as December 2006. A year later, however, with the teams numbering in their thousands, AT&T's KorBell team was stalled at the top of the leaderboard, more than 1.5 per cent short of the 10 per cent target.

In November 2007, a retired management consultant from London named Gavin Potter joined the race from his garage, with the help of his maths-whizz daughter and a grounding in behavioural economics, an approach that emphasised the psychology of Netflix customers as well as the blunt mathematics of their ratings. Potter's solo success – he quickly rose to the top 10 – earned him a profile in Wired magazine. It was read by two Montreal engineers, who decided to try their hand at the conundrum, entering under the name Pragmatic Theory.

Heeding Potter's call for more human methods, the Canadians – so one of them, Martin Piotte, told Wired – developed an "ability to translate intuition about user behaviour into usable equations". Each team that made up BellKor's Pragmatic Chaos (Potter was not among them) had approached the project from different, complementary directions, allowing them to attack it most effectively together.

As time passed, the leading teams grew reluctant to disclose their methods to anyone but their partners. But according to prize organiser Neil Hunt, Netflix's chief product officer, "there was an insight among some of the teams that if they combined their approaches, they got better ... When you get this combining of these algorithms in certain ways, it started this 'second frenzy'. In combination, the teams could get better and better".
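
The "combining" Hunt describes typically took the form of blending: mixing the numeric predictions of several models into one. A minimal, hypothetical sketch of the idea, with made-up predictions from two imaginary models and a mixing weight chosen on held-out ratings:

```python
# Illustrative sketch only: blending two hypothetical models' predictions.
from math import sqrt

model_a = [3.8, 2.4, 4.6, 3.1]   # made-up predictions from one model
model_b = [4.2, 2.0, 4.9, 2.7]   # made-up predictions from another
truth   = [4.0, 2.0, 5.0, 3.0]   # the ratings users actually gave

def rmse(preds):
    return sqrt(sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth))

def blend(w):
    """Weighted average: w parts model A, (1 - w) parts model B."""
    return [w * a + (1 - w) * b for a, b in zip(model_a, model_b)]

# Pick the mixing weight that scores best on the held-out ratings.
best_w = min((w / 10 for w in range(11)), key=lambda w: rmse(blend(w)))
print(f"A alone: {rmse(model_a):.3f}   B alone: {rmse(model_b):.3f}   "
      f"blend (w={best_w}): {rmse(blend(best_w)):.3f}")
```

Even on these toy numbers, the blend scores better than either model alone, which is the mechanism behind the "second frenzy" Hunt describes.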

This summer, on the last Friday in June, BellKor submitted an algorithm that finally beat the target, proving 10.05 per cent more accurate than Cinematch. Under the rules of the competition, the remaining teams then had 30 days to beat BellKor's score; if none could, the $1m was BellKor's. Suddenly, collaboration seemed a good idea again, and the recently formed Ensemble (a grand amalgam of "Grand Prize Team" and "Opera Solutions and Vandelay United") began to close on the leaders.

In the end, however, only the two top teams made it past the 10 per cent post. According to Hunt, the race came down to a photo-finish. Twenty minutes before the final deadline, BellKor submitted its final algorithm – 10 minutes later, Ensemble did the same. The accuracy of their submissions, competition judges later revealed, was mathematically identical. But under the competition rules, the first team past the post won – BellKor took the prize. "That 20 minutes," said Netflix CEO Reed Hastings, "was worth $1m."

The new Netflix Prize, designed to tailor recommendations to the many customers who choose not to rate films, sets no specific accuracy target. $500,000 will be awarded to the team with the most accurate algorithm after six months, and another $500,000 to the team ahead after 18 months. Netflix sees its future not in DVDs but in streaming. The company's online service, which streams movies directly to users' PCs and TVs, has three million customers, and instant, accurate recommendations will become even more valuable in a world where the next movie is just a click away.

The Netflix Prize has been compared to previous science prizes – competitions that brought us the measurement of longitude, Lindbergh's transatlantic flight, and SpaceShipOne, the machine that won the 2004 Ansari X Prize by putting a privately funded manned craft into space. What Netflix has proven is the power not of individuals to innovate, but of disparate groups to organise themselves online and find solutions: crowdsourcing.

As Chris Volinsky, the AT&T researcher who led BellKor, told The New York Times, the key to his team's success was its mixed-discipline approach to the problem. "Collaboration has been so effective," he said, "because different people approach different problems differently."

This method could translate into other businesses facing complex challenges, particularly in the tech industry, where data can be so easily shared. Google, Microsoft's Bing and their competitors, for instance, could benefit from a comparable competition, as the principles of recommendation correspond to the "data mining" (picking pertinent details from a vast, unwieldy mass of data) capabilities required of a search engine. As AT&T, Big Chaos and Pragmatic Theory prove, three's a crowd – and a crowd might be what your company needs.

Altogether now: Crowdsourcing contests

The Longitude Prize

In 1714, the British Government offered a reward to anyone who could devise a simple, practical method by which ships could determine their longitude. Many won awards from the Board of Longitude for their innovations, but the big winner was John Harrison, the self-taught clockmaker who created the marine chronometer.

Wikipedia

The user-generated encyclopaedia – and sixth most popular website in the world – is perhaps the most familiar crowdsourcing success of recent years. Studies have suggested that its accuracy rivals that of 'Encyclopaedia Britannica'.

SellaBand

A music website that gives fans the chance to fund their favourite unsigned acts, offering them the opportunity to record their material using crowdsourced donations. Since the site's inception in 2006, 33 acts have reached the target of $50,000 and recorded debut albums.

YouTube Symphony Orchestra

The YouTube Symphony Orchestra was assembled via open auditions on the site between December 2008 and February 2009, with YouTube users voting for the (predominantly amateur) winning musicians to create the world's first online collaborative orchestra. It gave its first performance at Carnegie Hall, New York, in April.
