Three of the people involved with the London Chess Conference since its beginning, Giovanni Sala, John Foley and Fernand Gobet, have written a paper summarising the state of research into the effects of chess instruction and its theoretical challenges. The paper, an opinion piece in the online journal Frontiers in Psychology, places the recent EEF study in a broader context. That study, a large trial in England conducted by the Institute of Education and funded by the Education Endowment Foundation, found no long-term effect of chess on academic performance. The finding was regarded as disappointing by many in the chess community. On closer inspection, however, the study turns out to have some serious weaknesses.
A major problem is the use of public examination results to measure whether an intervention has had an impact. The trouble arises because mathematics exam results in primary schools in England appear to have improved steadily over the last two decades. This may be because children are getting smarter, or it may be for other reasons, such as school league tables and teaching to the test.
In the upper diagram we see the latest results for KS2 mathematics (children aged around 11). Rather than the expected Normal distribution, we find what can only be described as a half-Normal distribution: half the children have scored 75% or more. This is an extraordinary result, because our traditional experience is that some children are good at maths but most struggle. The skewed shape of the exam results deserves some explanation.
The lower diagram shows how the shape of the exam results distribution has shifted over the past two decades. What we are witnessing is the accumulated effect of educational policies that produce good results irrespective of the underlying differences in the children's personal abilities. Finally, in 2015, we see that the shape has shifted so far to the right (negative skew) that the maths results can no longer be regarded as a helpful representation of the underlying reality.
The technical term for this artefact is the “ceiling effect”. If an examination imposes an artificial upper limit on the distribution of results, we cannot distinguish the children who might have done much better. If most children already score near the maximum, how can an educational intervention such as chess make any noticeable difference? More generally, how can any educational intervention be detected? This is a wider issue for the mathematics education research community to resolve.
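The masking power of a ceiling can be illustrated with a small simulation. This is a hypothetical sketch, not data from the EEF study: the cohort size, ability distribution and five-mark boost are all assumptions chosen for illustration. Every pupil receives a genuine improvement, yet once marks are truncated at the maximum, the measured average gain is smaller than the true effect, because pupils already at the ceiling cannot register any improvement.

```python
import random
import statistics

random.seed(42)

def exam_score(true_ability, boost=0.0, cap=100.0):
    """Observed mark: true ability plus any intervention boost,
    truncated at the maximum mark (the ceiling)."""
    return min(true_ability + boost, cap)

# Hypothetical cohort whose true abilities sit near the top of the
# scale, so a large share of pupils hit the 100-mark ceiling.
abilities = [random.gauss(90, 15) for _ in range(10_000)]

control = [exam_score(a) for a in abilities]
treated = [exam_score(a, boost=5.0) for a in abilities]  # genuine 5-mark effect

observed_gain = statistics.mean(treated) - statistics.mean(control)
print(f"true effect: 5.0 marks, observed effect: {observed_gain:.2f} marks")
```

The observed gain comes out well below the true five marks, and the shortfall grows as more of the cohort is pressed against the cap. An evaluation relying on such capped scores would understate, or miss entirely, a real improvement.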