If you have been reading the headlines over the last few months, you must have seen something about Google and its huge milestone with AlphaGo. But what is AlphaGo and, more importantly, what does it mean for our future?
Go has long been the traditional challenge for testing an AI system. For a few decades, the target of most AI research labs in the world has been to develop a system able to beat a Go master.
But why Go? Go originated in China around 3,000 years ago, and its rules are quite simple. Two players (one with black stones, the other with white) take turns placing stones on the board (usually a grid of 19 x 19 lines). Each player tries to capture the rival's stones or to surround empty space to make points of territory.
Even with those simple rules, the number of possible board combinations in Go is enormous, far larger than in chess. Let's put it in numbers: the number of possible board combinations in chess is somewhere around 1 x 10^57, which is a huge number! But it is nothing compared with Go. How many possible board combinations does Go have (for a 19 x 19 board)? Around 1 x 10^170! As you can imagine, this number dwarfs the one for chess.
But let's put that number in perspective: the universe has around 1 x 10^80 atoms, so Go has more board combinations than there are atoms in the universe. Not only that, it has more combinations than 1 x 10^80 universes, each one with 1 x 10^80 atoms! Impressive, right?
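Using the round figures above, a quick Python sketch makes the scale concrete (these are the rough order-of-magnitude estimates quoted in this post, not exact counts):

```python
chess_boards = 10 ** 57        # rough estimate quoted above for chess
go_boards = 10 ** 170          # rough estimate for Go on a 19 x 19 board
atoms_in_universe = 10 ** 80   # commonly cited estimate

# Go has 10^113 times more board combinations than chess.
print(go_boards // chess_boards)             # prints a 1 followed by 113 zeros

# Even 10^80 universes with 10^80 atoms each fall short by a factor of 10^10.
atoms_in_many_universes = atoms_in_universe ** 2
print(go_boards // atoms_in_many_universes)  # prints a 1 followed by 10 zeros
```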
As you can imagine, it is practically impossible for a computer (or for all the computers in the world today) to evaluate all those possible boards during a Go match.
At this point you might think that the only difference between Go and chess is the number of possible boards, but that's not completely true.
In a chess game you can always determine who is winning: each piece has a value, and you know that if you have a queen and your adversary doesn't, you're probably winning. Chess also has well-known heuristics, so a chess master can explain why he is moving a piece to a particular position.
In Go, neither of those things holds. First, you can't always be sure which player is winning, and second, there are no heuristics to evaluate each move. Go masters simply trust their intuition and feel; if you ask a Go master why he made a particular move, he will probably answer that he just felt it was right.
This last part is what makes Go so particular, so interesting, and so hard for an AI to play: mimicking that sense of "feeling" was the biggest challenge for any software trying to beat a Go master.
What is AlphaGo?
So what is AlphaGo, after all?
AlphaGo is a computer program, developed by Google DeepMind, that uses deep learning and neural networks to teach itself to play Go.
But what is a neural network? Neural networks try to simulate the neurons in the brain. In biology, a neuron has 3 main parts:
- Dendrites (which receive the input)
- Soma (which processes it)
- Axon (which carries the output)
So, basically (and I am simplifying the concepts here), a neuron takes one or more inputs, does something with those inputs, and may produce an output. That is the behavior software neural networks try to reproduce. A neural network can contain a single layer of neurons or be a multi-layer system, where each layer takes as input the output of the previous one.
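As a toy illustration of that behavior (pure Python with made-up weights, nothing like a real production network), each artificial neuron computes a weighted sum of its inputs and squashes the result, and layers are chained by feeding one layer's outputs into the next:

```python
import math

def neuron(inputs, weights, bias):
    # "Dendrites": gather the weighted inputs; "soma": sum and squash;
    # "axon": return a single output between 0 and 1 (sigmoid activation).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # One layer is just several neurons looking at the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A tiny two-layer network: the first layer's output feeds the second.
hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.1, -0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)  # a single value between 0 and 1
```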
AlphaGo takes a description of the board as input (that is, the entire board with all the stones placed on it) and processes it through 12 different network layers containing millions of neuron-like connections. The output is a small set of the best possible moves; how AlphaGo calculates this set and how it decides which move to select is out of the scope of this post.
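Schematically, that pipeline can be sketched like this (purely an illustration: the weights here are random and the layers are simple fully connected ones, whereas AlphaGo's real networks used convolutional layers and much richer board features):

```python
import math
import random

random.seed(42)
BOARD_POINTS = 19 * 19   # one input value per board point
NUM_LAYERS = 12          # the post mentions 12 network layers

def make_layer(n_in, n_out):
    # Small random weights, just for illustration.
    return [[random.uniform(-0.05, 0.05) for _ in range(n_in)]
            for _ in range(n_out)]

layers = [make_layer(BOARD_POINTS, BOARD_POINTS) for _ in range(NUM_LAYERS)]

def forward(board):
    # Each layer's output becomes the next layer's input (ReLU activation).
    activation = board
    for weights in layers:
        activation = [max(0.0, sum(a * w for a, w in zip(activation, row)))
                      for row in weights]
    # Softmax: turn the final activations into one probability per move.
    exps = [math.exp(a) for a in activation]
    total = sum(exps)
    return [e / total for e in exps]

board = [0.0] * BOARD_POINTS   # 0 = empty point, +1 / -1 = black / white stone
board[3 * 19 + 3] = 1.0        # a single black stone near a corner
probs = forward(board)
top_moves = sorted(range(BOARD_POINTS), key=probs.__getitem__, reverse=True)[:5]
```

The small set of candidate moves the post mentions would correspond to something like `top_moves` here: the board points with the highest probability.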
What makes AlphaGo so special?
This is not the first time an algorithm has beaten a master at some game. If we look back, we can remember when IBM's Deep Blue beat Garry Kasparov twenty years ago.
But what is the difference between AlphaGo and Deep Blue?
Well, Deep Blue was created to play chess: it had handwritten algorithms and heuristics to determine the best move to make based on the state of the board. AlphaGo, on the other hand, uses a combination of neural networks to learn how to evaluate a board and figure out the strongest move to make, mimicking that "intuition" a master Go player has.
Deep Blue can only play chess, and that's it. It is one of the best (if not the best) chess programs ever built by humans, but it can't play Go or checkers. It simply can't, because it only knows how to play chess and is not able to learn new abilities.
The algorithms and methods used in AlphaGo can be applied to learn any game: Go, chess, checkers, Atari games, or whatever. In the case of AlphaGo, the intention was to teach the system to play Go, so Google's researchers fed the system millions of historical games and set it to play online and against itself until it had enough training to beat one of the best human players in history.
See the difference? Deep Blue was a huge achievement for its time, but it was hand-programmed for a single task and relied on some of the largest processing power of its day. AlphaGo, on the other hand, was not hand-programmed to play Go; the same system can learn any other topic. It just needs a huge amount of data and training. Given those two things, it will keep getting better and better as time goes on.
What do AlphaGo's wins mean for all of us?
It's hard to predict exactly what AlphaGo means for our future. Before AlphaGo, the experts were confident making predictions for the next 10 years; now they aren't sure about more than the next 5.
Is AlphaGo (or any other neural network) close to being as powerful as the human brain? Not yet. The human brain has around 1 trillion synapses between its neurons, while the biggest publicly known artificial neural network has around 1 billion synapses. So we are around 1,000 times less powerful than the brain.
How long will it take for an artificial neural network to reach the milestone of 1 trillion synapses? Nobody knows for sure. We are confident it will not happen in the next 5 years, but we can't say whether it will take 6 years or 20. What seems certain is that someday we will get to that point.
AlphaGo itself, as a great Go player, is not very significant for us; it will not affect our lives. But what AlphaGo represents is important: an artificial neural network capable of teaching itself one of the hardest games in history and beating the best player in the world at it. It is worth noticing the idea of an artificial system learning a new discipline and becoming, in just a few months, better than any human (in particular, better than a human who has been playing the same game all his life). We need to understand that master Go players win games by following their intuition, and AlphaGo proves that we can teach an algorithm to do that (or at least to make it look like it does).
It is hard to predict what will happen in the future; it is clearly uncertain. What is a fact for me is that we are only seeing the tip of the iceberg: this area will continue to evolve and progress, and it will probably progress even faster over the coming months and years.
As computer scientists and IT companies, we have to start learning these new topics and integrating them into our development workflows. This is without a doubt a game changer in how we do things (or in how we learn and are used to doing things), and we should start taking it seriously: at the very least, learn about these new (and old) technologies and think about how to integrate them into our daily jobs.