Google AlphaGo computer beats professional at 'world's most complex board game' Go

Milestone in AI research likened to defeat of world chess champion Garry Kasparov in 1997 by IBM’s Deep Blue computer

Wednesday 27 January 2016 19:10 GMT
More complex than chess: the Chinese board game Go (Wikimedia/Creative Commons)

It was considered one of the last great challenges between man and machine, but now, for the first time, a computer program has beaten a professional player at the ancient Chinese game of Go, a victory that many had not expected for at least another 10 years.

The machine’s victory is being likened to the defeat of reigning world chess champion Garry Kasparov in 1997 by IBM’s Deep Blue computer, which became a milestone in the advance of artificial intelligence over the human mind.

Go, however, is more complex than chess, with a vastly greater number of potential moves, so experts were surprised to find that computer scientists had invented a suite of artificial intelligence (AI) algorithms that taught the computer how to win against Europe’s top player.

The program, called AlphaGo, defeated European champion Fan Hui by a resounding five games to nil in a match played last October but only now revealed in a scientific study of the moves and algorithms published last night in the journal Nature. A match against the current world Go champion, Lee Sedol from South Korea, is now scheduled for March.

It was the first time a computer had won against a professional Go player on a full-sized board without any handicaps or advantages given to either side, said Demis Hassabis of Google DeepMind, the AI arm of Google in London, who helped to write the program.

Go rules

The rules of Go are deceptively simple and no luck is involved. Two players – one black, one white – start with an empty board and take turns placing a piece, or “stone”, on an intersection, from where it does not move. The winner is the player who surrounds more of the board with their stones by the end of the game. It is possible to capture an opponent’s stone, or group of stones, by completely surrounding it so that it has no empty adjacent points. Children and adults can easily play against each other, and a handicap system allows players of different strengths to play with a 50 per cent chance of winning.
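To make the capture rule concrete, here is a minimal Python sketch of how a program might test whether a group of stones still has any liberties (empty adjacent points); the board representation and function names are illustrative and are not taken from AlphaGo.

```python
# Minimal sketch: detecting capture in Go by counting a group's liberties.
# The 2D-list board representation and helper names are illustrative only.

EMPTY, BLACK, WHITE = ".", "B", "W"

def neighbours(row, col, size):
    """Yield the orthogonally adjacent points that lie on the board."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < size and 0 <= c < size:
            yield r, c

def group_has_liberties(board, row, col):
    """Flood-fill the group containing (row, col) and report whether any of
    its stones touches an empty point. If not, the group is captured."""
    size = len(board)
    colour = board[row][col]
    seen, stack = {(row, col)}, [(row, col)]
    while stack:
        r, c = stack.pop()
        for nr, nc in neighbours(r, c, size):
            if board[nr][nc] == EMPTY:
                return True          # at least one liberty: the group survives
            if board[nr][nc] == colour and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False                     # no liberties anywhere: the group is taken

# A tiny 5x5 example: the black stone at (2, 2) is surrounded by white stones
# on all four sides, so it has no liberties and would be removed.
board = [[EMPTY] * 5 for _ in range(5)]
board[2][2] = BLACK
for r, c in ((1, 2), (3, 2), (2, 1), (2, 3)):
    board[r][c] = WHITE
print(group_has_liberties(board, 2, 2))   # False -> captured
```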

“Go is probably the most complex board game humans play. There are more configurations of the board than there are atoms in the Universe. In the end, AlphaGo won 5-nil and it was perhaps stronger than even we were expecting,” Mr Hassabis said.
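That comparison can be checked with back-of-the-envelope arithmetic: each of the 361 points on a full-sized 19x19 board can be empty, black or white, giving an upper bound of 3^361 (roughly 10^172) board arrangements, against the commonly quoted estimate of about 10^80 atoms in the observable universe. A quick Python check of those figures:

```python
# Rough arithmetic behind the "more configurations than atoms" comparison.
# 3**361 is only an upper bound (not every arrangement is a legal position),
# but it already dwarfs the ~1e80 atoms usually quoted for the observable universe.
from math import log10

points = 19 * 19                       # intersections on a full-sized Go board
upper_bound_digits = points * log10(3)
print(f"3^{points} is about 10^{upper_bound_digits:.0f} arrangements")  # ~10^172
print("versus roughly 10^80 atoms in the observable universe")
```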

“AlphaGo discovered for itself many of the patterns and moves needed to play Go. Go is considered to be the pinnacle of AI research – the holy grail. For us, it was an irresistible challenge,” he said.

Computer chess programs work by searching through the tree of possible moves and replies, which is relatively straightforward when a typical position offers about 20 possible moves. In Go, however, each position offers about 200 possible moves, so the tree of continuations grows far too quickly to search exhaustively, making the task of writing a winning program far more difficult.
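The following short Python calculation uses the branching figures quoted above (about 20 moves per chess position, about 200 per Go position) to show how quickly the two trees diverge; the ten-move look-ahead depth is an arbitrary choice for illustration.

```python
# Illustrative comparison of how the game tree grows with the branching factor.
# The figures of 20 and 200 moves per position come from the article; the
# look-ahead depth of 10 moves is an arbitrary choice for the sake of example.
for game, branching in (("chess", 20), ("Go", 200)):
    depth = 10
    positions = branching ** depth
    print(f"{game}: ~{branching} moves per position, "
          f"{positions:.1e} positions after {depth} moves")

# chess: ~20 moves per position, 1.0e+13 positions after 10 moves
# Go: ~200 moves per position, 1.0e+23 positions after 10 moves
```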

“The search process itself is not based on brute force but on something akin to [human] imagination. In the game of Go we need this incredibly complex intuitive machinery that we only previously thought to be possible in the human brain,” said David Silver of Google DeepMind, the lead author of the study.

AlphaGo uses two neural networks working in parallel and interacting with one another. A “value network” evaluates the positions of the black and white pieces or “stones” on the board, while a “policy network” selects the moves, learning continuously from both past human games and the program’s own practice games against itself, Mr Silver said.
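As a rough illustration of that division of labour, the toy sketch below builds a policy head (a probability for every point on the board) and a value head (a single estimate of who is ahead) on top of one shared encoding of the position, using plain NumPy with random weights; the layer sizes and encoding are invented for illustration and bear no relation to DeepMind’s actual architecture.

```python
# Toy sketch of the policy/value split described above, in plain NumPy.
# Layer sizes, the board encoding and the random weights are illustrative
# assumptions, not DeepMind's architecture.
import numpy as np

SIZE = 19
POINTS = SIZE * SIZE
rng = np.random.default_rng(0)

W_shared = rng.normal(scale=0.01, size=(POINTS, 256))   # shared board encoding
W_policy = rng.normal(scale=0.01, size=(256, POINTS))   # one score per move
W_value = rng.normal(scale=0.01, size=(256, 1))         # one score per position

def encode(board):
    """Flatten the board (1 = own stone, -1 = opponent, 0 = empty) into features."""
    return np.tanh(board.reshape(-1) @ W_shared)

def policy(board):
    """Return a probability for every point on the board (the 'policy network')."""
    scores = encode(board) @ W_policy
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def value(board):
    """Return a single estimate in (-1, 1) of who is ahead (the 'value network')."""
    return np.tanh(encode(board) @ W_value).item()

board = np.zeros((SIZE, SIZE))
board[3, 3] = 1.0      # one of our stones
board[15, 15] = -1.0   # one opponent stone
probs = policy(board)
print(probs.argmax(), value(board))   # most favoured point and the position estimate
```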

“Humans can play perhaps a thousand games in a year whereas AlphaGo can play millions of games a day. It is conceivable that with enough processing power, training and search power AlphaGo could reach a level that is beyond any human,” he said.

Milestones in AI Research

  • 1950: British mathematician Alan Turing published a landmark study speculating on the possibility of creating machines that can think – as defined by his Turing Test.
  • 1956: The field of Artificial Intelligence (AI) or “machine intelligence” was born with the Dartmouth Conference of researchers including Marvin Minsky talking about creating an artificial brain.
  • 1980s: Concept of “expert systems” widely adopted by computer companies as the first commercial exploitation of AI.
  • 1989: Carnegie Mellon University developed Deep Thought, a chess-playing computer that could play as well as a grandmaster.
  • 1997: IBM’s Deep Blue computer beats reigning world chess champion Garry Kasparov for the first time.
  • 2005: A Stanford University robot won the Darpa Grand Challenge by driving autonomously for 131 miles along a desert course.
  • 2011: IBM’s Watson, a question-answering computer, defeated the two greatest champions of the American quiz show Jeopardy!, Brad Rutter and Ken Jennings, winning the $1m first prize.

In tests against other commercially available Go programs, AlphaGo won all but one out of 500 games, even when the other programs were given a head start with pieces already positioned on the board. Mr Silver said the neural networks were able to learn by themselves, unlike the “supervised” training of other artificial intelligence algorithms.

“It learns in a human-like manner but it still takes a lot of practice. It has to play many millions of games to do what a human player can learn in a few games,” Mr Silver said.

World champion Lee Sedol said he is looking forward to the challenge match in March. “I have heard that Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time,” he said.

Jon Diamond, president of the British Go Association, said: “Before this match the best computer programs were not as good as the top amateur players and I was still expecting that it would be at least 5 or 10 years before a program would be able to beat the top human players; now it looks like this may be imminent. The proposed challenge may well be that day.”
