Evolving a Neural Network to Play Checkers without Human Expertise

  • K. Chellapilla
  • D. B. Fogel
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 62)


We have been exploring the potential for a co-evolutionary process to learn how to play checkers without relying on the usual inclusion of human expertise in the form of features that are believed to be important to playing well. In particular, we have focused on the use of a population of neural networks, where each network serves as an evaluation function describing the quality of the current board position. After only a little more than 800 generations, the evolutionary process generated a neural network that can play checkers at the expert level as designated by the U.S. Chess Federation rating system. This has been documented in games against human opponents played over the Internet. Our checkers program, named Anaconda, has also competed well against commercially available software.
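The co-evolutionary setup described above — a population of neural-network evaluation functions that improve only through competition with one another, with no hand-crafted board features — can be sketched roughly as follows. Everything here is illustrative: the network size, mutation parameters, and especially the `play` stub (a real system plays full checkers games with lookahead) are assumptions for the sketch, not the authors' implementation.

```python
import math
import random

BOARD_SQUARES = 32  # playable squares on an 8x8 checkers board
HIDDEN = 10         # hidden-layer size (illustrative only)

def make_network(rng):
    """Random single-hidden-layer evaluator with per-weight mutation step sizes."""
    n_w = HIDDEN * (BOARD_SQUARES + 1) + (HIDDEN + 1)
    return {
        "w": [rng.uniform(-0.2, 0.2) for _ in range(n_w)],
        "sigma": [0.05] * n_w,  # self-adaptive step sizes (evolutionary programming style)
    }

def evaluate(net, board):
    """Map a board vector (+1 own piece, -1 opponent, 0 empty) to a score in (-1, 1)."""
    w = net["w"]
    idx = 0
    hidden_out = []
    for _ in range(HIDDEN):
        s = w[idx]; idx += 1          # node bias
        for x in board:
            s += w[idx] * x; idx += 1
        hidden_out.append(math.tanh(s))
    s = w[idx]; idx += 1              # output bias
    for h in hidden_out:
        s += w[idx] * h; idx += 1
    return math.tanh(s)

def mutate(net, rng):
    """Offspring: lognormal self-adaptation of sigmas, then Gaussian weight noise."""
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(len(net["w"])))
    child = {"w": [], "sigma": []}
    for w, s in zip(net["w"], net["sigma"]):
        s_new = s * math.exp(tau * rng.gauss(0, 1))
        child["sigma"].append(s_new)
        child["w"].append(w + s_new * rng.gauss(0, 1))
    return child

def play(net_a, net_b, rng):
    """Toy stand-in for a checkers game: each net greedily picks among random
    candidate positions, and the choice the opposing net rates worse 'wins'.
    A real system would play complete games using minimax search."""
    candidates = [[rng.choice((-1, 0, 1)) for _ in range(BOARD_SQUARES)]
                  for _ in range(5)]
    pick_a = max(candidates, key=lambda b: evaluate(net_a, b))
    pick_b = max(candidates, key=lambda b: evaluate(net_b, b))
    score_a = evaluate(net_b, pick_a)  # how bad A's choice looks to B
    score_b = evaluate(net_a, pick_b)
    return 1 if score_a < score_b else (-1 if score_b < score_a else 0)

def generation(population, rng, games_per_net=5):
    """One co-evolutionary step: mutate, score by games played only inside the
    population, keep the better half. No human expertise is ever consulted."""
    pool = population + [mutate(p, rng) for p in population]
    points = [0] * len(pool)
    for i, net in enumerate(pool):
        for _ in range(games_per_net):
            j = rng.randrange(len(pool))
            if j == i:
                continue
            r = play(net, pool[j], rng)
            points[i] += r
            points[j] -= r
    ranked = sorted(range(len(pool)), key=lambda i: points[i], reverse=True)
    return [pool[i] for i in ranked[:len(population)]]
```

The key property this sketch preserves is that fitness is purely relative: a network is judged only by how it fares against other members of the population, so the process can bootstrap playing strength without any externally supplied notion of what a good position looks like.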







Copyright information

© Physica-Verlag Heidelberg 2001
