Go master: AI will one day prevail but beauty of Go remains

Staff Writer
Columbus CEO

SEOUL, South Korea (AP) — Computers eventually will defeat human players of Go, but the beauty of the ancient Chinese game of strategy that has fascinated people for thousands of years will remain, the world champion of the game said Tuesday.

South Korean Lee Sedol, a Go champion who has won 18 international titles since he became a professional player at the age of 12, said the risk of human error means he may not win his match this week against Google's artificial intelligence machine, AlphaGo.

"Because humans are human, they make mistakes," the 33-year-old said a day before the first of the five games he is due to play against AlphaGo. "If there are human mistakes, I could lose."

It was the first time Lee had admitted to a weakness against Google's AI machine, and a dialing down of the confidence he showed two weeks ago, when he predicted a 5-0 result in his favor.

After watching Google's presentation of how AlphaGo works, Lee said he thought the machine might be able to imitate human intuition, though its intuition may not be as good as a person's.

A loss for Lee would be a historic moment for the AI community.

Human errors are not his only vulnerability.

Lee said that in playing against a machine, the absence of visual cues that human players use to read the reactions and psychology of their opponents puts him in unfamiliar territory.

"In a human versus human game, it is important to read the other person's energy and force. But in this match, it is impossible to read such things. It could feel like I'm playing alone," Lee said.

Because the number of possible Go board positions exceeds the number of atoms in the universe, top players rely heavily on their intuition, said Demis Hassabis, who heads Google's DeepMind, the developer of AlphaGo.
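The scale Hassabis describes can be checked with rough arithmetic. A naive upper bound treats each of the 361 points on a 19x19 board as empty, black, or white; the short sketch below (illustrative arithmetic only, not from DeepMind, and counting many illegal configurations) compares that bound with the commonly cited estimate of 10^80 atoms in the observable universe:

```python
# Rough sanity check of the scale claim (loose upper bound, since many
# of these configurations are not legal Go positions).
naive_positions = 3 ** 361      # empty/black/white at each of 361 points
atoms_estimate = 10 ** 80       # common estimate for the observable universe

print(len(str(naive_positions)))         # 173 digits, i.e. roughly 10^172
print(naive_positions > atoms_estimate)  # True
```

Even this crude count dwarfs the atom estimate by some 90 orders of magnitude, which is why exhaustive search is hopeless and intuition-like evaluation matters.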

This has made Go one of the most complex games ever devised and the ultimate challenge for AI experts, who had expected it would take at least another decade for a computer to beat a professional Go player.

That changed last year, when AlphaGo defeated a European Go champion in a closed-door match whose results were later published in the journal Nature.

Google's DeepMind team programmed the machine to mimic experts' Go moves based on data from about 100,000 Go games available online. AlphaGo then was programmed to play against itself and "learn" from its mistakes. The team also built a system that enabled AlphaGo to anticipate the long-term results of each move and predict the winner without going through the near-infinite possible sequences of moves.
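The loop the DeepMind team describes, playing against itself, learning from mistakes, and judging who is winning without searching every line, can be sketched in miniature. The toy program below is not AlphaGo's code; it is an illustration on a tiny take-away game (take 1 or 2 stones, last stone wins), with every name invented for the example. A tabular agent learns a value for each position purely through self-play:

```python
import random

WIN, LOSS = 1.0, 0.0

def train(pile=10, episodes=5000, eps=0.2, seed=0):
    """Self-play training on a toy take-away game (illustration only)."""
    rng = random.Random(seed)
    value = {}  # position -> estimated chance the player to move wins

    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < eps:
                m = rng.choice(moves)  # occasional exploration
            else:
                # prefer the move leaving the opponent the worst position
                m = min(moves, key=lambda mv: value.get(n - mv, 0.5))
            history.append(n)
            n -= m

        # The player who took the last stone won; credit the result back
        # through the game, flipping perspective at each earlier move.
        outcome = WIN
        for pos in reversed(history):
            v = value.get(pos, 0.5)
            value[pos] = v + 0.1 * (outcome - v)  # nudge toward the result
            outcome = WIN - outcome               # opponent's point of view

    return value

values = train()
for n in (1, 2, 3, 4, 5, 6):
    print(n, round(values[n], 2))
```

After training, positions that are multiples of 3 (which are theoretically lost for the player to move) end up with low estimated values and the others with high ones, so the agent can "predict the winner" from a position without enumerating every continuation, the same idea, at vastly larger scale, behind AlphaGo's evaluation.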

Using this approach, AlphaGo beat the European Go champion while searching through far fewer positions than traditional AI machines such as Deep Blue, the famed IBM computer that defeated the world chess champion in 1997, Hassabis said.

AlphaGo also has other strengths as a machine.

"I think the advantage of AlphaGo is that it will never get tired and it will not get intimidated either," Hassabis said.

Lee said he hopes to defend the edge humans have in Go, but also wants to remind audiences that the game is not all about victory.

"Of course I can lose. But a computer does not play by understanding the beauty of Go, the beauty of humans," he said. "My job is to play Go more beautifully."

And that beauty, many Go fans believe, is something a machine cannot replicate.