
In a historic first for artificial intelligence, a computer program developed by Google DeepMind has successfully beaten a professional human player at the ancient game of Go.

The board game, which has its roots in ancient China some 3,000 years ago, has long been considered the ultimate test for artificial intelligence because of its enormous search space and the difficulty of evaluating board positions and moves. For those unfamiliar with the game, which is hugely popular in Asia, two players alternately place black and white pieces on a square grid with the goal of controlling the most territory.


The program, called AlphaGo, swept all five games against three-time European Go champion and Chinese professional Fan Hui, the first time a Go professional has lost such a match. Now the program will take on the world’s top player, Lee Sedol of South Korea, often called the Roger Federer of the Go world, in a match scheduled for March in Seoul.

“By beating Fan Hui, our program AlphaGo became the first program ever to beat a professional player in an even game with no handicap and, in so doing, achieving one of the longstanding grand challenges of AI,” Google DeepMind’s Demis Hassabis, a co-author of the findings that appear in the journal Nature on Wednesday, told reporters.

“However, the more significant aspect of all this for us is that AlphaGo isn’t just an expert system built with handcrafted rules like, for example, Deep Blue was, but instead it uses general machine learning techniques to figure out for itself how to win at Go,” he said. “The ultimate challenge, though, which still lies ahead, is to beat one of the best players in the world.”


Go players acknowledged the success of AlphaGo but still held out hope that Sedol could win one for the humans.

"AlphaGo's strength is truly impressive! I was surprised enough when I heard Fan Hui lost, but it feels more real to see the game records,” Hajin Lee, the  secretary general of the International Go Federation and a Korean Go Professional, said in a statement. “My overall impression was that AlphaGo seemed stronger than Fan, but I couldn't tell by how much. I still doubt that it's strong enough to play the world's top professionals, but maybe it becomes stronger when it faces a stronger opponent."

Sedol, for his part, said he was up for the challenge. “I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time,” he said.

For many in the AI world, the victory marked a critical advance in one of the classic proving grounds for machines: playing games. After IBM's Deep Blue beat Garry Kasparov at chess in the 1990s, many researchers set their sights on Go, which is considered far more complex, with an average of about 200 possible moves per turn compared with roughly 20 for chess.
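To give a sense of that gap, the rough back-of-the-envelope sketch below compares the approximate size of each game tree. The branching factors and game lengths used here are the commonly cited ballpark figures from the paragraph above, not numbers taken from the Nature paper.

```python
# Back-of-the-envelope comparison of game-tree sizes for chess and Go.
# Branching factors (~20 moves per turn in chess, ~200 in Go) and game
# lengths are rough, commonly cited approximations.

def tree_size(branching_factor: int, plies: int) -> int:
    """Approximate number of distinct move sequences: branching_factor ** plies."""
    return branching_factor ** plies

chess = tree_size(branching_factor=20, plies=80)    # a typical chess game runs ~80 plies
go = tree_size(branching_factor=200, plies=200)     # a typical Go game runs ~200 plies

# Express each count as a power of ten to make the gap visible.
print(f"chess: ~10^{len(str(chess)) - 1} move sequences")
print(f"go:    ~10^{len(str(go)) - 1} move sequences")
```

Even with these crude assumptions, the chess tree comes out near 10^104 sequences while the Go tree is closer to 10^460, which is why brute-force search alone was never going to crack Go.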

“Before this match, the best computer programs were not as good as the top amateur players, and I was still expecting that it would be at least 5-10 years before a program would be able to beat the top human players; now it looks like this may be imminent,” Jon Diamond, president of the British Go Association and one of the early researchers in computer Go at London University, said in a statement.


“One significant aspect of this match was that AlphaGo analyzed orders of magnitude fewer positions than Deep Blue did,” he continued. “Deep Blue also had a handcrafted evaluation function, which AlphaGo does not. These indicate the general improvements in AI techniques that Google DeepMind has achieved. This surely means that the technology behind it will be really useful in other knowledge domains.”

The Google DeepMind researchers said their hope was that the success of AlphaGo would one day reach far beyond the world of gaming.

“While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems,” Hassabis said. “Because the methods we’ve used are general purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from medical diagnostics to climate modeling to smartphone assistance. We’re excited to see what we can use this technology to tackle next.”