AlphaGo Crushes Human
After AlphaGo won 4 to 1 against Lee Sedol, the world is abuzz with excitement and fear of AI. Although we’re still a long way from the “rise of the machines,” we are getting closer.
AlphaGo does use simulations and traditional search algorithms to help it decide on some moves, but its real breakthrough is its ability to overcome Polanyi’s Paradox. It did this by figuring out winning strategies for itself, both by example and from experience. The examples came from huge libraries of Go matches between top players amassed over the game’s 2,500-year history. To understand the strategies that led to victory in these games, the system made use of an approach known as deep learning, which has demonstrated remarkable abilities to tease out patterns and understand what’s important in large pools of information. (via The New York Times)
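To make the “by example” part concrete, here is a minimal sketch of supervised move prediction: fit a softmax policy that imitates an expert’s choice of move from a board position. This is nothing like DeepMind’s actual networks; the board size, features, and “expert” data below are toy stand-ins made up for illustration.

```python
# A minimal sketch (not DeepMind's code) of learning from examples:
# fit a softmax policy that predicts an expert's move from a position.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 9 * 9      # assume a flattened 9x9 toy board as input features
N_MOVES = 9 * 9         # one output per board point

# Hypothetical expert data: board positions and the move the expert chose.
boards = rng.normal(size=(500, N_FEATURES))
expert_moves = rng.integers(0, N_MOVES, size=500)

W = np.zeros((N_FEATURES, N_MOVES))   # policy weights

def policy(board):
    """Softmax distribution over moves for one board position."""
    logits = board @ W
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

# Train with cross-entropy: nudge the policy toward the expert's move.
lr = 0.1
for epoch in range(20):
    for x, move in zip(boards, expert_moves):
        p = policy(x)
        grad = np.outer(x, p)        # gradient of softmax cross-entropy w.r.t. W
        grad[:, move] -= x
        W -= lr * grad

# The trained policy now assigns higher probability to expert-like moves.
print(policy(boards[0]).argmax())
```

AlphaGo’s real policy network was a deep convolutional network trained on millions of expert positions and then refined through self-play (the “from experience” part), but the core training signal is the same idea: make the expert’s move more likely.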
Polanyi’s Paradox states that we humans know more than we can tell; much of our knowledge is tacit. For the longest time, AI couldn’t do that: it couldn’t know more than it could tell. It relied on hard-coded rules and brute-force computation. AlphaGo changed all that.
For the first time ever, AI is breaking through the paradox, which is cool and scary at the same time. Once AI can figure out how to reprogram itself or make better machines than we can design, we’ll be in trouble.
Update
I wanted to briefly expand on my last line above about “Once AI can figure out how to reprogram itself.” This is already happening with Google’s AutoML Zero. While it targets a somewhat different use case, it shows how an AI can assemble the right ‘chunks’ of code to optimize an outcome.
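To illustrate what “assembling chunks of code” means, here is a minimal sketch of the underlying idea rather than Google’s actual AutoML Zero system (which evolves small setup/predict/learn programs with an evolutionary search): randomly mutate tiny programs built from primitive operations and keep the mutants that score better on a task. The instruction set, toy task, and hill-climbing loop below are my own simplifications.

```python
# A minimal sketch (not AutoML Zero itself): mutate small "programs" made of
# primitive ops and keep variants that score better. The toy task is
# recovering y = 2x + 1 from data.
import random

random.seed(0)

OPS = ["add", "sub", "mul"]                 # toy primitive instruction set
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

def run(program, x):
    """Interpret a program: each step combines the accumulator with a constant."""
    acc = x
    for op, const in program:
        if op == "add":
            acc += const
        elif op == "sub":
            acc -= const
        elif op == "mul":
            acc *= const
    return acc

def loss(program):
    """Squared error of the program's outputs against the target data."""
    return sum((run(program, x) - y) ** 2 for x, y in DATA)

def mutate(program):
    """Randomly rewrite one instruction (the 'reprogramming' step)."""
    prog = list(program)
    i = random.randrange(len(prog))
    prog[i] = (random.choice(OPS), random.randint(-3, 3))
    return prog

# Start from a random 3-instruction program and hill-climb by mutation.
best = [(random.choice(OPS), random.randint(-3, 3)) for _ in range(3)]
for _ in range(5000):
    candidate = mutate(best)
    if loss(candidate) <= loss(best):
        best = candidate

print(best, loss(best))
```

The search here is crude hill-climbing over a tiny instruction set, but the flavor is the same: the machine, not a human programmer, discovers which chunks of code optimize the outcome.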