
Go, computer! Ace humans

AI program beats champion of board game

G.S. Mudur Published 28.01.16, 12:00 AM

New Delhi, Jan. 27: A computer program has for the first time beaten a professional human player in a profoundly complex board game called Go, marking a historic milestone in scientists' efforts to develop artificial intelligence.

Scientists at Google DeepMind, a UK-based Google subsidiary focused on artificial intelligence (AI), announced today they have developed a computer program called AlphaGo that has beaten Fan Hui, the reigning European Go champion, winning five games to zero.

The researchers say AlphaGo, which uses novel self-learning and game-playing strategies, displays significantly higher levels of intelligence than Deep Blue, the IBM computer that had beaten the reigning world chess champion Garry Kasparov in May 1997.

AlphaGo will face its ultimate challenge when it is pitted against Lee Sedol, the world's top Go player over the past decade, in a five-game match in Seoul in March this year. A research paper describing AlphaGo will appear on Thursday in the scientific journal Nature.

AlphaGo uses techniques that make it more human-like than existing AI systems and could be applied to several real-world problems - from helping doctors analyse medical diagnostic images to improving smartphones to modelling the climate.

"Go is probably the most complex game ever devised by humans," said Demis Hassabis, an AI scientist and vice-president at Google DeepMind who led the research. "In a typical game, there are an average of 200 possible moves in Go compared to an average of 20 possible moves in chess."

In Go, which originated in China over 2,500 years ago, two players place black and white pieces onto a square grid with the aim of occupying more territory than the opponent at the end of the game. Until now, the strongest Go programs have only beaten amateur players.

Although Go's rules are simpler than those of chess, it is a far more complex game. Chess is played on a grid of 8 by 8 squares, while a standard Go board is a 19 by 19 grid. The opening player in chess picks from 20 possible moves, while the opening player in Go chooses from 361 possible moves. The British Go Association says this wide latitude of choice continues throughout the game.
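As a rough back-of-the-envelope illustration of that difference in scale (the branching factors below are the approximate figures quoted in this article, and the game lengths are common estimates, not exact counts), the size of each game tree can be sketched in a few lines of Python:

```python
import math

# Rough, illustrative estimate of game-tree sizes. The per-turn move
# counts are the approximate figures quoted in the article; the game
# lengths are typical estimates, not exact values.
chess_branching, chess_moves = 20, 80    # ~20 choices per turn, ~80 turns
go_branching, go_moves = 200, 150        # ~200 choices per turn, ~150 turns

chess_tree = chess_branching ** chess_moves
go_tree = go_branching ** go_moves

# Compare orders of magnitude rather than the raw numbers.
print(f"chess: ~10^{int(math.log10(chess_tree))} move sequences")
print(f"go:    ~10^{int(math.log10(go_tree))} move sequences")
```

Even with these conservative inputs, the Go tree comes out hundreds of orders of magnitude larger than the chess tree, which is why brute-force search is hopeless.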

"Go has an enormous search base, which is intractable to brute-force search," said David Silver, another scientist in the 20-member research team at Google DeepMind in London, and Google's headquarters in Mountain View, California.

"The key to AlphaGo is to reduce the search base to something more manageable," Silver said. The scientists combined state-of-the-art search techniques with two deep neural networks to reduce the depth of the search through a human-like approach to the game.

The researchers first tested AlphaGo against existing Go-playing computer programs and found that it won 494 out of 495 games, winning even when it gave its opponent a handicap of up to four free opening moves.

The program was tested against the European champion Fan Hui in October 2015 at the Google DeepMind offices in London, where it won the match five formal games to zero - although Fan Hui won two of five informal games played with shorter time controls.

"In a room upstairs, the engineering team was cheering for the machine, but on the other hand, in a quiet room downstairs, one couldn't help root for the poor human being getting beaten," said Tanguy Chouard, a research editor at Nature who watched the match.

"It was chilling to watch," Chouard said in a special global news briefing on Tuesday.

Scientists view AlphaGo as a major advance in the quest to build a new generation of AI. "AlphaGo successfully combines some nice tricks to achieve a great and convincing result," said Arnab Bhattacharya, associate professor of computer science at the Indian Institute of Technology, Kanpur. "This approach comes closer to mimicking the way humans think - at least while playing board games - than has been achieved so far," Bhattacharya told The Telegraph.

During the match against Fan Hui, AlphaGo evaluated thousands of times fewer future positions than Deep Blue had done during its chess match against Kasparov, compensating by selecting those positions more intelligently and evaluating them more precisely. Scientists say this is closer to how humans play.

"While games are the perfect platform to develop and test AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real world problems," Hassabis, who led the research, said.

Potential application areas include medical diagnosis and smartphone applications. In the long term, Hassabis said, such AI systems could be used by scientists to address challenging scientific problems that involve looking for insights and structures in large amounts of data.

Until now, most AI systems designed to play board games, such as Deep Blue, have used sophisticated search strategies, analysing future moves and their likely consequences to pick the best option - relying on hand-crafted rules and pruning obviously poor sets of choices.

"Getting a computer to beat a professional Go player has been a grand challenge problem for AI - AlphaGo is a landmark feat," said Partha Pratim Talukdar, an assistant professor of computer science and automation at the Indian Institute of Science, Bangalore, who was not associated with the research but is an expert in machine learning.

But scientists caution that a paucity of training data may pose a challenge to efforts to apply these strategies to other problems. A key component of AlphaGo was trained on a set of 30 million Go board positions drawn from games by human expert players.
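The flavour of that supervised step, learning to predict an expert's move from a position, can be sketched on made-up data. The example below is purely illustrative: the positions, moves and counting approach are invented stand-ins, whereas AlphaGo used deep neural networks trained on 30 million real board positions.

```python
from collections import Counter, defaultdict

# Made-up (position, expert_move) pairs standing in for the real data set.
expert_data = [
    ("corner", "defend"), ("corner", "defend"), ("corner", "invade"),
    ("center", "extend"), ("center", "extend"), ("edge", "defend"),
]

# Count how often experts answered each position with each move.
move_counts = defaultdict(Counter)
for position, move in expert_data:
    move_counts[position][move] += 1

def predict(position):
    """Return the move experts played most often from this position."""
    return move_counts[position].most_common(1)[0][0]

print(predict("corner"))  # -> defend
```

The sketch also shows why data volume matters, as Talukdar notes below: a position seen only once or twice yields an unreliable prediction, which is the worry for domains such as medical imaging where millions of labelled examples may not exist.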

"There may be some areas where such large data sets are available, such as online consumer behaviour. But in areas such as medical imaging or diagnostics, such large volumes of data may not be available, thus making training a challenge," Talukdar said.

Scientists have spent decades trying to build AI into computers. Several generations of AI systems are used across a wide range of applications - financial analysis, airline scheduling, transportation, medical diagnostics, telecommunications and robotics, among others. However, the quest to build human-like thinking capacities into computers continues.
