Why does it matter that Google’s DeepMind computer has beaten a human at Go?

The Big Question: A computer’s mastery of arguably the most complex game in the world is a major step forward for artificial intelligence

The Independent Tech

Why are we asking this now?

A computer has won a game of Go against a person — much to many humans’ surprise. Google’s machine appears to have mastered the game, beating Lee Sedol, widely regarded as one of the best players in the world.

The match still has some way to go: AlphaGo’s win was just the first game of a best-of-five series. But taking even one game was a huge step forward for AlphaGo, the program created by Google’s DeepMind.

Not everyone expected the computer to win. Mr Lee himself had said that he expected to win in a “landslide”, initially predicting a 5-0 sweep.

Why is Go so important?

Computers have beaten humans at almost every game before, but none of those games had the same kind of complexity. Winning at chess, for instance, was a huge achievement, but one of brute-force computing more than artificial intelligence.

Go is thought to be one of the most complicated games there is. Its simple rules generate an astronomical number of possible positions, which makes it a game more of intuition than calculation.

The ancient Chinese game of Go is nearly 3,000 years old and immensely challenging. Players take turns placing black and white stones on a gridded board, with the aim of surrounding more territory than the opponent.

All of that is done with a relatively simple set of rules. But the possibilities those simple rules create are so huge and complex that it just isn’t possible to win by calculating every move in advance, as chess computers largely do by searching far ahead through a (relatively) limited set of possibilities.
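The scale of that difference can be made concrete with a rough back-of-envelope count. Using commonly quoted averages — roughly 35 legal moves per position over about 80 turns for chess, versus roughly 250 moves over about 150 turns for Go; both figures are approximations — the number of possible move sequences works out as:

```python
# Approximate game-tree sizes as branching ** depth.
# The branching and depth figures are commonly quoted rough averages.
chess_sequences = 35 ** 80     # chess: ~35 moves per turn, ~80 turns
go_sequences = 250 ** 150      # Go: ~250 moves per turn, ~150 turns

print(len(str(chess_sequences)))   # a 124-digit number
print(len(str(go_sequences)))      # a 360-digit number
```

Both numbers dwarf the number of atoms in the observable universe, but Go’s is larger by hundreds of orders of magnitude — which is why brute-force search alone was never going to be enough.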

How did AlphaGo get so good at it?

As with much work in artificial intelligence, Google’s DeepMind team trained the computer by trial and error, using a technique called “reinforcement learning”, an approach with clear parallels to the way humans learn.

Much of that training happens as the computer plays against itself. As it does so, it adjusts its own evaluations based on the outcomes, meaning that it keeps getting better over time.
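AlphaGo itself pairs deep neural networks with sophisticated search, but the underlying self-play loop can be shown on a far smaller game. The sketch below is purely illustrative — nothing here is DeepMind’s code. A program learns a toy counting game entirely by playing against itself: players alternately add 1 or 2 to a running total, and whoever reaches 10 wins.

```python
import random

TARGET = 10          # reach 10 to win
ACTIONS = (1, 2)     # each turn, add 1 or 2 to the total
Q = {}               # (state, action) -> estimated value for the player to move

def legal(state):
    return [a for a in ACTIONS if state + a <= TARGET]

def best(state):
    # the move the program currently believes is strongest
    return max(legal(state), key=lambda a: Q.get((state, a), 0.0))

def train(episodes=5000, alpha=0.5, epsilon=0.2, seed=1):
    rng = random.Random(seed)
    for _ in range(episodes):
        state = 0
        while state < TARGET:
            # mostly play the best known move, sometimes explore at random
            a = rng.choice(legal(state)) if rng.random() < epsilon else best(state)
            nxt = state + a
            if nxt == TARGET:
                outcome = 1.0   # this move wins the game outright
            else:
                # the opponent moves next, so our value is minus their best
                outcome = -max(Q.get((nxt, b), 0.0) for b in legal(nxt))
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (outcome - old)  # nudge the estimate
            state = nxt

train()
```

After a few thousand self-played games the program discovers, with no human examples at all, that from a total of 8 it should add 2 and win — the same learn-from-outcomes principle, scaled down enormously.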

While it still uses some of the look-ahead calculation found in chess computers and other game-playing machines, it also combines that search with something closer to intuition: learned judgements about which positions and moves are promising.
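One way to picture that mix of calculation and intuition: search only a few moves ahead, and where the search has to stop, substitute an estimate of who is winning instead of playing the game out. The sketch below applies that idea to a made-up counting game (players alternately add 1 or 2 to a total; reaching 10 wins) — an illustration of the general pattern, not DeepMind’s actual method.

```python
TARGET = 10
MOVES = (1, 2)

def estimate(state):
    # Stand-in for learned "intuition": a crude guess at how good a
    # position is for the player about to move, used where search stops.
    return -1.0 if (TARGET - state) % 3 == 0 else 1.0

def negamax(state, depth):
    """Value of `state` for the player about to move."""
    if state == TARGET:
        return -1.0              # opponent just reached the target: we lost
    if depth == 0:
        return estimate(state)   # out of look-ahead: fall back on the guess
    # exact calculation while depth remains: our value is minus the
    # opponent's best reply
    return max(-negamax(state + m, depth - 1)
               for m in MOVES if state + m <= TARGET)
```

With deep look-ahead the answer is exact (from 8, adding 2 wins); with shallow look-ahead the quality of play rests on the quality of the estimate — which is roughly the trade-off AlphaGo’s learned evaluations are there to win.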

What does it mean for us?

Intuition, like other distinctively human traits, is one of the key challenges for artificial intelligence. We have managed to assemble huge amounts of computing power, and the current challenge for many engineers is making those computers learn, think and understand the way humans do.

If computers manage to master those central parts of human thought, it could lead to a revolution on the scale of the first supercomputers. Machines that can think like people could take on much of the work now done by people: they would be able to talk and to process information.

For the moment, we’re seeing that application only in small ways, such as the image recognition that allows Google and other companies to classify pictures according to what is in them. But the processes involved are similar in one respect: recognising cats in Google Photos and finding the best Go move in South Korea are both matters of trying out possibilities and seeing what works.

What’s more, such computers improve by themselves. A development called machine learning allows computers to gather information much as we do, meaning that they can become more capable over time.

Some people are frightened by those same developments, fearing that AI will become clever enough to decide to turn against us.

But for now, at least, you’re more likely to see the results show up in your searches and social networks.