It has finally come to pass: machines are grinding humans into the dirt. At least, they are in the context of an ancient Chinese board game. Tuesday morning saw the conclusion of a five-match series of games between one of the world's top-rated Go players, Lee Se-dol, and a computer opponent named AlphaGo, built by DeepMind, a Google-owned company. The computer cruised to a 4-1 victory in a contest described in some quarters as a "battle against humanity", even though that battle merely involved placing black and white stones on a 19 by 19 grid.
The subtext of some of the breathless news reports seemed to be that we're just a few months from becoming the meek subjects of an artificial intelligence we were stupid enough to create in the first place.
We're not – but Lee's defeat is still significant. Go has long been deemed the game that computers could never master, thanks to the number of game positions (often described as "more than there are atoms in the universe", which I'm guessing is quite a lot) and the importance of intuition and creativity – very human qualities – in forcing victory.
Two deep neural networks powered AlphaGo's gameplay, one assessing candidate moves and the other the consequences of those moves, drawing on the analysis of millions upon millions of games. Lee, having been confident beforehand ("Human intuition is too advanced"), became despondent after the second loss ("From the beginning there was no moment I thought I was winning"). AlphaGo's moves were praised by experts for their "rare and intriguing" qualities and their creative style. The word "beauty" was bandied about. By all accounts it was playing just as a human would play.
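For readers curious how two networks might cooperate, here is a deliberately toy sketch. It is not DeepMind's code: the "policy" and "value" scores below are made-up numbers standing in for the outputs of AlphaGo's trained networks, and the combination rule (simply multiplying the two scores) is a simplification chosen for illustration.

```python
# Toy illustration of combining two evaluations to rank Go moves.
# policy[move] ~ how likely a strong player is to choose the move
# value[move]  ~ estimated chance of winning after playing it
# Both dictionaries are hypothetical stand-ins for neural networks.

def choose_move(candidates, policy, value):
    """Return the candidate whose combined score is highest."""
    return max(candidates, key=lambda move: policy[move] * value[move])

# Hypothetical scores for three board points (column letter, row number).
policy = {("D", 4): 0.50, ("Q", 16): 0.30, ("K", 10): 0.20}
value = {("D", 4): 0.48, ("Q", 16): 0.55, ("K", 10): 0.60}

best = choose_move(policy.keys(), policy, value)
print(best)  # ('D', 4): 0.50 * 0.48 = 0.24 beats the other products
```

The real system was vastly more sophisticated (the networks were combined with a tree search over future positions), but the division of labour is the same: one estimate of what looks promising, another of how the game is likely to end.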
So has DeepMind, through a successful experiment in machine learning, managed to approximate something akin to human creativity and intuition with AlphaGo? This encapsulates the so-called "hard problem" of AI, which asks whether such a thing is even possible; no matter how amazing the AI, so the argument runs, the computer will never really understand. It might be able to convince, say, a Chinese speaker that it understands Chinese, but it would only ever be simulating understanding.
Then again, AlphaGo managed to use existing knowledge to solve problems in the most human-like way a computer has yet managed. It's said that this Go victory has happened about a decade earlier than many scientists expected, and it has cast AlphaGo as the bright new future of AI. Humans, by comparison, are just a bunch of meatbags feeling slightly sorry for themselves.
But what does this development mean for the future? AlphaGo's skillset is, of course, only applicable to one particular game with one set of rules, a fixed beginning and a specified outcome – and of course life isn't really like that. We don't live on a Go-style grid, unless we happen to be living in Milton Keynes, but even Milton Keynes brings an unimaginable level of complexity that AI would struggle to cope with.
Life is messy, our aims are ambiguous and it's not even clear what a definition of "intelligence" is. The main effect of these advances in AI, perhaps, is to force us to appreciate what it means to be human. One contributor to the technology publication The Register suggested that the next big task was to devise a game that AI could never win against a human player. "Such a game already exists," replied one wag. "It's called dating."