One of the great things about chess is that it is a crucible for so many aspects of computer technology. The development of chess engines has transformed the game over the last 25 years. Computers famously became much stronger players than humans. The latest artificial intelligence innovations from DeepMind’s AlphaZero combine neural networks with Monte Carlo tree search to produce new insights into chess theory (see Matthew Sadler and Natasha Regan’s talk). The approach has grown in popularity with the emergence of the Leela Chess Zero project: an open-source chess engine based on the AlphaZero approach.
The well-known drawback of neural networks is that although they may produce excellent results, they are a black box. You cannot interrogate a neural network in the same way that you can ask questions of an expert, or indeed an expert system. If you look under the bonnet of a neural net you just see a bunch of numbers, no doubt finely tuned, but in themselves meaningless. Even with Stockfish, the leading conventional chess engine, you are stuck with its evaluations. It is not showing you; it is telling you. Step forward Decodea’s Gideon Segev, who articulated both the issue and a solution.
Decodea is an Israeli company in the rapidly expanding field of Explainable AI. Their approach is to take the output from an AI system, Stockfish in the case of chess, and then to provide an explanation based upon high-level concepts such as deduction chains (“if this then that”) and counterfactuals (“if this hadn’t happened”). Gideon showed some examples from games in which surprising moves turn out to have an inevitable logic to them. These explanations convert the mysterious into the familiar. Where chess leads, other domains will follow.
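To make the counterfactual idea concrete, here is a minimal sketch of how “if this hadn’t happened” reasoning can be layered on top of an engine’s raw evaluations. Everything below is hypothetical illustration: the toy material evaluator stands in for a real engine such as Stockfish, and the function names (`evaluate`, `explain_move`) are invented for this example, not Decodea’s actual API.

```python
# Counterfactual explanation sketch: justify a chosen move by showing how
# much worse every alternative would have been ("if this hadn't happened").
# A toy material count stands in for a real engine evaluation.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Toy evaluation: material balance from White's point of view.
    Uppercase keys are White pieces, lowercase are Black."""
    score = 0
    for piece, count in position.items():
        value = PIECE_VALUES[piece.upper()]
        score += value * count if piece.isupper() else -value * count
    return score

def explain_move(candidates, chosen):
    """Compare the chosen move's resulting position against each
    alternative and phrase the difference as a counterfactual."""
    chosen_score = evaluate(candidates[chosen])
    explanations = []
    for move, after in candidates.items():
        if move == chosen:
            continue
        loss = chosen_score - evaluate(after)
        if loss > 0:
            explanations.append(
                f"If not {chosen} but {move}, White is {loss} point(s) worse."
            )
    return explanations

# A position where White can recapture a queen (Qxd8) or play a quiet move.
candidates = {
    "Qxd8": {"Q": 1, "R": 2, "P": 8, "q": 0, "r": 2, "p": 8},  # recaptures
    "a3":   {"Q": 1, "R": 2, "P": 8, "q": 1, "r": 2, "p": 8},  # leaves the queen
}
for line in explain_move(candidates, "Qxd8"):
    print(line)  # → If not Qxd8 but a3, White is 9 point(s) worse.
```

The point is that the engine’s opaque number is never shown on its own; each alternative is evaluated too, and the gap between them becomes a human-readable reason for the move. A real system would of course work from full engine search rather than a material count.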