It’s been a long time since IBM’s Deep Blue beat Garry Kasparov in a chess match. In the 18 years since, chess-playing software has gotten better and better, to the point that top human champions rarely win. In 2009, a program running on a fairly weak (by today’s standards) HTC mobile phone was able to play at a grandmaster level.
Chess, though, isn’t a particularly complex game, mathematically speaking.
It is another, older game that has proven a tougher nut for AI to crack. That game, of course, is Go (围棋), which is believed to have originated in China some 2,500 years ago. Go is an elegant game, in that its rules are simpler than those of Chess, and yet the number of possible positions is orders of magnitude greater.
This is why, until recently, Go masters have not lost to their computer counterparts, and it is why excelling at the game has been seen as a significant hurdle for AI.
That hurdle has finally been surmounted. Google’s DeepMind division in the UK has revealed that, in October of last year, its AlphaGo software beat the European champion, Fan Hui.
When you start playing Chess, it seems at first like a game of strategy. But for practiced players, Chess becomes a game of pattern matching, because the commonly used variations of play are few enough that people can recall game positions and methods of counterattack. For the same reason, chess engines work pretty much by searching a game tree and evaluating the positions they find, and they beat humans essentially because they can search far more positions, far faster, than our brains can.
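To make that concrete, here is a minimal sketch of the kind of tree search a chess engine builds on: minimax with alpha-beta pruning. The nested-list tree and hard-coded leaf scores are toy stand-ins; a real engine would generate children with a move generator and score leaves with a hand-tuned evaluation function.

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy game tree.

    `node` is either a number (a leaf's static evaluation) or a list
    of child nodes to search.
    """
    if isinstance(node, (int, float)):  # leaf: return its evaluation
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the opponent would avoid this line
                break
        return best
    else:
        best = math.inf
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:  # prune: we would avoid this line
                break
        return best

# A tiny two-ply tree: the engine picks the branch whose worst case is
# best (here, the middle branch, scoring 6). Note that the third branch
# is pruned after its first leaf is seen.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # -> 6
```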
Because Go is more mathematically complex, it remains a game of strategy for the human brain. That means nothing more mysterious than that there are simply too many board combinations for us to remember and act upon. We can’t take all the variables into account, and so at some point we have to play by guessing the best move. In other words, “strategizing”.
But even though computers can search trees more efficiently than we can, the number of possibilities in Go is so great that the number of nodes to examine is still far too large to handle in real time. (Maybe it would be possible to build a computer that could win at Go by simply scanning trees, but it would need to be a supercomputer, and it would probably take days, weeks or months to make a single move, which isn’t practical.)
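A back-of-envelope calculation shows the gap. The branching factors and game lengths below are rough, commonly cited estimates, but they are enough to see why brute force stops being an option:

```python
import math

def tree_size_log10(branching_factor, plies):
    """log10 of the game-tree size, assuming uniform branching."""
    return plies * math.log10(branching_factor)

# Chess: roughly 35 legal moves per position, games of ~80 plies.
print(f"chess: ~10^{tree_size_log10(35, 80):.0f} positions")

# Go: roughly 250 legal moves per position, games of ~150 moves.
print(f"go:    ~10^{tree_size_log10(250, 150):.0f} positions")
```

That works out to roughly 10^124 positions for Chess against roughly 10^360 for Go, a difference of well over two hundred orders of magnitude.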
So the DeepMind team had to find a way other than brute force, and they settled on a neural network that was first fed data from past games by Go champions, and then left to play against itself, essentially “learning” optimal strategies. (This is essentially how I improved at Scrabble as a kid. Ahh, the things you do when you are an only child.)
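For the curious, here is a toy sketch of the supervised half of that idea: a small network learning to predict the expert’s move from a board position. The architecture, shapes and random stand-in data are illustrative only; DeepMind’s actual policy network was a much deeper convolutional net trained on millions of expert positions, and the self-play refinement step is omitted entirely. The 57 per cent figure in the next paragraph measures exactly this move-prediction task.

```python
import torch
import torch.nn as nn

BOARD = 19  # a Go board is 19x19

# A toy "policy network": board in, a score for each of the 361 points out.
policy_net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * BOARD * BOARD, BOARD * BOARD),
)

optimizer = torch.optim.SGD(policy_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(boards, expert_moves):
    """One step of learning to predict the move a champion actually played.

    boards: (batch, 1, 19, 19) float tensor of positions.
    expert_moves: (batch,) tensor of board-point indices in [0, 361).
    """
    logits = policy_net(boards)
    loss = loss_fn(logits, expert_moves)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data, just to show the shapes.
boards = torch.randn(8, 1, BOARD, BOARD)
moves = torch.randint(0, BOARD * BOARD, (8,))
print(train_step(boards, moves))
```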
The end result was a neural network able to predict the move a human player would make 57 per cent of the time. The next step will be a match in March against the world’s top Go player, Lee Sedol.
Winning at Chess may have been important for PR and for bringing the concept of AI into the public consciousness, but conquering Go may have more impact on future AI, simply because the strategies employed are more general-purpose, and therefore better suited to other AI efforts.
As an amusing addendum to this story, Facebook has a Go program of its own, which recently placed third in an AI competition. That’s laudable, but Facebook’s attempt to steal the show with an announcement of its own successes didn’t work out as well as the company had hoped.