
Go, AlphaGo, and Artificial Intelligence

(Image credit: calliope on Flickr)

You and your rival just landed on a newly-discovered planet and seek to conquer unclaimed territory. You each do so by building stone walls to mark your land. As you build, you attempt to capture each other’s stones by surrounding them, thus cutting off their air and gaining more territory for yourself.

This is the game of Go. It originated in China over 2,500 years ago, making it the oldest game still played in its original form. Chinese scholars, including Confucius, have written about it throughout history, using it to illustrate correct thinking about human nature. Around the year 1600 it became one of the four accomplishments that had to be mastered by Chinese gentlemen. Go plays an important role in Japanese society as well, appearing in literature, theater, and ukiyo-e woodblock prints. It is equally esteemed in South Korea, home to the current world champion player, where an entire television station is dedicated to Go. It is likely the most widely played game in the world, with over 40 million players, from toddlers on up.

Although not as well known in the West, many great American and European minds were also Go players. Albert Einstein, Alan Turing, and accomplished mathematician Paul Erdős were all known to enjoy the game. Today there are hundreds of Go clubs across the United States where both children and adults can learn the game and even enter Go competitions.

The rules of Go are few and simple, yet the game is exponentially complex. In fact, Go is so complex that nearly 20 years passed between Deep Blue’s defeat of chess champion Garry Kasparov in 1997 and the January 2016 upset of European Go champion Fan Hui by AlphaGo.

Two months later in March, world Go champion Lee Sedol resigned the first of his five matches with AlphaGo, signaling a major breakthrough in artificial intelligence. By the time you read this article, the world will know whether or not we still have a “deterministic perfect information game” — a game where no information is hidden from either player and there is no element of chance — in which humans out-perform machines.

Go is sometimes thought of as the Far East’s answer to chess. However, although both chess and Go are exponentially complex, the game of Go has a far larger exponent. A full Go board is a slightly rectangular grid formed by nineteen vertical lines intersecting nineteen horizontal lines, creating 361 intersections. The first player has 361 points to choose from; once that stone is placed, the second player has 360 possible moves. This makes 129,960 possible sequences after each player has taken just one turn, albeit many are not wise plays. Compare this to chess: once both players have moved, there are only 400 possible board positions.

To calculate the theoretic number of possible moves in Go, consider that each intersection can be in one of three states: black stone, white stone, or empty. Thus the number of possible positions can be calculated as shown below:

3^(19 × 19) = 3^361

The rules of Go limit the number of legal positions to just over 1% of this theoretic possibility, leaving approximately 2.082 × 10^170 possible positions (roughly a 2 followed by 170 zeros). Compare this to the number of atoms in the observable universe, about 10^80, and it becomes clear that even a computer would not be able to cycle through all possible plays in anywhere close to a reasonable amount of time for a game.
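These figures are easy to check directly. A quick sketch using Python’s arbitrary-precision integers, reproducing the counts quoted above:

```python
# Verify the Go counting figures quoted in the text.

# Opening branching: 361 choices for Black, then 360 for White.
two_ply_sequences = 361 * 360
print(two_ply_sequences)  # 129960 possible two-move openings

# Upper bound on board states: each of the 361 points is
# black, white, or empty.
theoretic_positions = 3 ** 361
print(len(str(theoretic_positions)))  # 173 digits, i.e. about 1.7e172

# Legal positions are roughly 2.082e170 -- just over 1% of the bound.
legal_positions = 2082 * 10 ** 167
print(legal_positions / theoretic_positions)  # about 0.012
```

The ratio of legal to theoretic positions comes out to about 1.2%, matching the “just over 1%” figure in the text.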

In comparison, chess has approximately 10^50 legal positions. Though still a formidable number, IBM’s engineers were able to program Deep Blue with a classical search algorithm. A search tree enabled the machine to choose strong plays by simulating the games that might follow each candidate move, evaluating positions at a mind-boggling rate of 200 million per second. Additionally, the computer used techniques such as alpha-beta search and null-move pruning to determine which moves deserved more attention.
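Deep Blue’s actual evaluation code is proprietary, but the alpha-beta idea it relied on is simple to sketch. A minimal, game-agnostic version in Python (the game tree here is a hand-made toy, not chess):

```python
# Minimal alpha-beta search over a toy game tree.
# A "node" is either a number (a leaf's static score)
# or a list of child nodes.

def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):  # leaf: return its score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:   # the opponent will avoid this line,
                break           # so the remaining children are pruned
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alpha_beta(child, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Root is a maximizing node over three minimizing children.
# Alpha-beta finds the same value as full minimax (6) while
# skipping the last leaf of the third subtree entirely.
tree = [[3, 5], [6, 9], [1, 2]]
print(alpha_beta(tree))  # 6
```

The pruning step is why the technique scales in chess: whole subtrees are discarded as soon as it is clear the opponent would never allow them.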

These types of algorithms do not work well in Go. The number of possible positions is too great and it is difficult to determine which moves deserve the most attention. Another major challenge for machines programmed to play Go is that champion Go players rely on intuition to help guide their moves. They often cannot explain how they know where to place their next piece.

Computers do not have intuition. They cannot “just know” much of anything. AlphaGo uses a Monte Carlo tree search combined with deep neural networks to build up something of an intuition that its creators hope is comparable, or superior, to that of Go players. If they are successful, applications of AlphaGo abound in decision-making tasks from the medical field to business and beyond.

Ready to try a game? The book Graded Go Problems for Beginners, combined with online play, makes for a good start. A popular first online stop is Consumi. Brace for rapid losses on a small board as you begin to develop the intuition that the masters build up over a lifetime of play.

Learn More

Pages for beginners


Consumi (online play)


Pandanet (Internet Go server)


Graded Go Problems for Beginners


American Go Association


American Go Federation


Where to play Go


The Surrounding Game (Go movie)


Garry Kasparov speaks at Google


Nature article


Wired article (2014)


DeepMind


Mastering Go with Deep Neural Networks and Tree Search


Google Blogspot


The Computer that Mastered Go


Lee Sedol


Google’s AI won the game Go by defying millennia of basic human instinct

