
HeX and the Single Anthill: Playing Games with Aunt Hillary

Fundamental Issues of Artificial Intelligence

Part of the book series: Synthese Library ((SYLI,volume 376))

Abstract

In a reflective and richly entertaining piece from 1979, Doug Hofstadter playfully imagined a conversation between ‘Achilles’ and an anthill (the eponymous ‘Aunt Hillary’), in which he famously explored many ideas and themes related to cognition and consciousness. For Hofstadter, the anthill is able to carry on a conversation because the ants that compose it play roughly the same role that neurons play in human languaging; unfortunately, Hofstadter’s work is notably short on detail suggesting how this magic might be achieved. Conversely, in this paper – finally reifying Hofstadter’s imagination – we demonstrate how populations of simple ant-like creatures can be organised to solve complex problems: problems that involve the use of forward planning and strategy. Specifically, we demonstrate that populations of such creatures can be configured to play a strategically strong – though tactically weak – game of HeX (a complex strategic game). We subsequently demonstrate how tactical play can be improved by introducing a form of forward planning instantiated via multiple populations of agents; a technique that can be compared to the dynamics of interacting populations of social insects via the concept of the meta-population. In this way, although, pace Hofstadter, we do not establish that a meta-population of ants could actually hold a conversation with Achilles, we do successfully introduce Aunt Hillary to the complex, seductive charms of HeX.
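
The chapter itself develops the detail of how such populations are organised; purely as a flavour of the approach, the sketch below runs a stochastic-diffusion-search style test-and-diffuse loop in which each agent holds a candidate Hex move and partially evaluates it with a single random playout. This is a minimal illustration under assumed parameters – the board size, agent and iteration counts, and all function names are our own assumptions, not the authors' implementation.

```python
import random
from collections import deque

N = 5  # illustrative board size; tournament Hex is usually 11 x 11


def neighbours(r, c):
    """The six hexagonal neighbours of a cell on a rhombic board."""
    for dr, dc in ((-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)):
        if 0 <= r + dr < N and 0 <= c + dc < N:
            yield r + dr, c + dc


def connected(board, player):
    """True if `player` has a chain joining their two sides (0: top-bottom, 1: left-right)."""
    starts = [(0, c) for c in range(N)] if player == 0 else [(r, 0) for r in range(N)]
    reached = (lambda r, c: r == N - 1) if player == 0 else (lambda r, c: c == N - 1)
    frontier = deque(p for p in starts if board.get(p) == player)
    seen = set(frontier)
    while frontier:
        r, c = frontier.popleft()
        if reached(r, c):
            return True
        for nb in neighbours(r, c):
            if nb not in seen and board.get(nb) == player:
                seen.add(nb)
                frontier.append(nb)
    return False


def playout_wins(board, to_move):
    """Partial evaluation: fill the remaining cells at random and report whether player 0 wins."""
    empty = [(r, c) for r in range(N) for c in range(N) if (r, c) not in board]
    random.shuffle(empty)
    filled = dict(board)
    for i, cell in enumerate(empty):
        filled[cell] = (to_move + i) % 2
    return connected(filled, 0)


def sds_choose_move(board, n_agents=100, iterations=30):
    """Toy SDS loop: agents hypothesise moves for player 0, test them with one random
    playout each, then inactive agents copy hypotheses from randomly polled active
    agents (or re-seed at random)."""
    empty = [(r, c) for r in range(N) for c in range(N) if (r, c) not in board]
    hypotheses = [random.choice(empty) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iterations):
        for i, move in enumerate(hypotheses):          # test phase
            trial = dict(board)
            trial[move] = 0
            active[i] = playout_wins(trial, to_move=1)
        for i in range(n_agents):                      # diffusion phase
            if not active[i]:
                j = random.randrange(n_agents)
                hypotheses[i] = hypotheses[j] if active[j] else random.choice(empty)
    # The largest cluster of agents indicates the population's preferred move.
    return max(set(hypotheses), key=hypotheses.count)


if __name__ == "__main__":
    print("Suggested opening move for player 0:", sds_choose_move({}))
```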


Notes

  1.

    As Drew McDermott writes in the Cambridge Handbook of Consciousness (McDermott 2007), it is as if Hofstadter “wants to invent a new, playful style of argumentation, in which concepts are broken up and tossed together into so many configurations that the original question one might have asked gets shunted aside”.

  2.

    Although the recruitment behaviour of real ants is more complex than the behaviour in SDS, both are population-based and find their optima via agents communicating with each other.

  3.

    The rigorous proof is not based on this left-right consideration: “this would involve getting into the quite complex notion of orientation, which is not needed for our proof” (Gale 1979).

  4.

    Group size \(g = (\text{board area} \;\mathrm{DIV}\; 2) + 1\); for example, on an 11 × 11 board, \(g = (121 \;\mathrm{DIV}\; 2) + 1 = 61\).

  5.

    A process isomorphic to asynchronous passive recruitment SDS (De Meyer 2003).

  6.

    A ‘game tree’ is a directed graph whose nodes are positions in a game and whose edges are moves. The complete game tree for a game is the game tree starting at the initial position and containing all possible moves from each position; an n-ply game tree describes all possible move/counter-move combinations to a depth of n moves.

  7.

    This assumption is not necessarily a good one due to the distinction between random play and optimal play – see analysis of standard Monte-Carlo methods (and MCSDS) in Sect. 22.3.4.

  8.

    The term was coined by Levins in Levins (1969) to describe the dynamics of interacting populations of social insects.

  9.

    The initial motivation for the work on SDST was to extend the applicability of Stochastic Diffusion Search (SDS) to more complex search spaces, and combinatorial games were chosen as a first case study. Monte-Carlo Tree Search (MCTS) then came naturally as a good framework, for several reasons. First, MCTS does not rely on domain knowledge but rather on a large number of random game simulations, and the notion of random game simulation fits well with the concept of partial evaluation in SDS. Second, the strength of MCTS lies in its tree policy balancing exploration of the search space against exploitation of promising solutions, and SDS is a metaheuristic conceived precisely to solve this “exploration-exploitation dilemma” in the management of computational resources (a minimal sketch of such a tree policy is given after these notes). Finally, MCTS has proven very successful on a wide range of problems – not only game playing – and is still under active study.

  10.

    Ant Colony Optimisation also shares this property.

  11.

    This property is due to the partial evaluation of solutions: in the case of string matching for example, as discussed by Nasuto (1999), the position of the solution after convergence is indicated by the formation of a cluster of agents, possibly dynamically fluctuating; in the case of a partial match, agents will keep exploring the text while the cluster will globally stay on the best match.
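
Note 9 above appeals to the MCTS tree policy that trades exploration against exploitation. Purely as a point of reference, the following is a minimal sketch of the standard UCT selection rule of Kocsis and Szepesvári (2006); it is not the SDST algorithm described in this chapter, and the node fields, statistics and constants shown are illustrative assumptions only.

```python
import math


class Node:
    """Minimal game-tree node carrying the statistics the UCT rule needs."""
    def __init__(self, move=None):
        self.move = move
        self.visits = 0      # number of simulations passing through this node
        self.wins = 0.0      # total reward accumulated by those simulations
        self.children = []


def uct_child(parent, c=1.41):
    """Pick the child maximising mean reward (exploitation) plus an exploration
    bonus that shrinks as that child is visited more often; unvisited children
    are expanded first."""
    for child in parent.children:
        if child.visits == 0:
            return child
    return max(parent.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(parent.visits) / ch.visits))


if __name__ == "__main__":
    # Hypothetical statistics after 30 simulations from the root position.
    root = Node()
    root.visits = 30
    for move, visits, wins in (("a1", 10, 6.0), ("b2", 15, 8.0), ("c3", 5, 3.5)):
        child = Node(move)
        child.visits, child.wins = visits, wins
        root.children.append(child)
    print("UCT selects:", uct_child(root).move)
```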

References

  • Abramson, B. (1990). Expected-outcome: A general model of static evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(2), 182–193.

  • Aleksander, I., & Stonham, T. (1979). Guide to pattern recognition using random-access memories. IEE Journal on Computers and Digital Techniques, 2(1), 29–40.

  • Beattie, P., & Bishop, J. (1998). Self-localisation in the ‘SENARIO’ autonomous wheelchair. Journal of Intelligent & Robotic Systems, 22(3), 255–267.

  • Bishop, J. (1989). Stochastic searching networks. In First IEE International Conference on Artificial Neural Networks, 1989 (Conf. Publ. No. 313) (pp. 329–331). IET.

  • Bishop, J. (1992). The stochastic search network. In R. Linggard, D. Myers, & C. Nightingale (Eds.), Neural networks for images, speech, and natural language (pp. 370–387). London/New York: Chapman & Hall.

  • Bonabeau, E., Dorigo, M., & Theraulaz, G. (2000). Inspiration for optimization from social insect behaviour. Nature, 406, 39–42.

  • Browne, C., Powley, E., Whitehouse, D., Lucas, S., Cowling, P., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., & Colton, S. (2012). A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 1–43.

  • Chaslot, G., Bakkes, S., Szita, I., & Spronck, P. (2008). Monte-Carlo tree search: A new framework for game AI. In Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference, Palo Alto (pp. 216–217).

  • De Meyer, K. (2003). Foundations of stochastic diffusion search. Ph.D. thesis, University of Reading.

  • De Meyer, K., Bishop, J., & Nasuto, S. (2000). Attention through self-synchronisation in the spiking neuron stochastic diffusion network. Consciousness and Cognition, 9(2), 81–81.

  • De Meyer, K., Nasuto, S., & Bishop, J. (2006). Stochastic diffusion optimisation: The application of partial function evaluation and stochastic recruitment in swarm intelligence optimisation. In A. Abraham, C. Grosan, & V. Ramos (Eds.), Swarm intelligence and data mining (Vol. 2). Berlin/New York: Springer.

  • Dorigo, M. (1992). Optimization, learning and natural algorithms. Ph.D. thesis, Politecnico di Milano, Italy.

  • Dorigo, M., Maniezzo, V., & Colorni, A. (1991). Positive feedback as a search strategy. Technical Report No. 91-016, Politecnico di Milano.

  • Gale, D. (1979). The game of Hex and the Brouwer fixed-point theorem. The American Mathematical Monthly, 86(10), 818–827.

  • Goodman, L. J., & Fisher, R. C. (1979). The behaviour and physiology of bees. Oxon: CAB International.

  • Grech-Cini, H., & McKee, G. (1993). Locating the mouth region in images of human faces. In Sensor fusion VI (SPIE – The International Society for Optical Engineering, Vol. 2059). Bellingham: Society of Photo-Optical Instrumentation Engineers.

  • Hart, S. (1992). Games in extensive and strategic forms. Handbook of Game Theory with Economic Applications, 1, 19–40.

  • Hofstadter, D. (1979). Gödel, Escher, Bach: An eternal golden braid. New York: Basic Books.

  • Hölldobler, B., & Wilson, E. O. (1990). The ants. Cambridge: Springer.

  • Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks (Vol. IV, pp. 1942–1948).

  • Kennedy, J. F., Eberhart, R. C., & Shi, Y. (2001). Swarm intelligence. San Francisco/London: Morgan Kaufmann.

  • Kocsis, L., & Szepesvári, C. (2006). Bandit based Monte-Carlo planning. In Machine Learning: ECML 2006 (pp. 282–293).

  • Levins, R. (1969). Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the ESA, 15(3), 237–240.

  • McDermott, D. (2007). Artificial intelligence and consciousness. In M. Moscovitch, P. D. Zelazo, & E. Thompson (Eds.), The Cambridge handbook of consciousness. Cambridge/New York: Cambridge University Press.

  • Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44(247), 335–341. doi:10.1080/01621459.1949.10483310. PMID: 18139350.

  • Möglich, M., Maschwitz, U., & Hölldobler, B. (1974). Tandem calling: A new kind of signal in ant communication. Science, 186(4168), 1046–1047.

  • Nasuto, S. (1999). Resource allocation analysis of the stochastic diffusion search. Ph.D. thesis, University of Reading.

  • Nasuto, S., & Bishop, J. (1998). Neural stochastic diffusion search network – a theoretical solution to the binding problem. In Proceedings of ASSC2, Bremen (Vol. 19).

  • Nasuto, S., & Bishop, M. (1999). Convergence analysis of stochastic diffusion search. Parallel Algorithms and Applications, 14(2), 89–107.

  • Nasuto, S., Bishop, J., & Lauria, S. (1998). Time complexity analysis of the stochastic diffusion search. Neural Computation ’98.

  • Nasuto, S., Bishop, J., & De Meyer, K. (2009). Communicating neurons: A connectionist spiking neuron implementation of stochastic diffusion search. Neurocomputing, 72(4), 704–712.

  • Seeley, T. D. (1995). The wisdom of the hive. Cambridge: Harvard University Press.

  • Tanay, T. (2012). Game-tree exploration using stochastic diffusion search. Technical report, Goldsmiths, University of London.

  • Tanay, T., Bishop, J., Nasuto, S., Roesch, E. B., & Spencer, M. (2013). Stochastic diffusion search applied to trees: A swarm intelligence heuristic performing Monte-Carlo tree search. In Proceedings of the AISB 2013 Computing and Philosophy Symposium, ‘What is Computation?’, Exeter.

  • Whitaker, R., & Hurley, S. (2002). An agent based approach to site selection for wireless networks. In Proceedings of the 2002 ACM Symposium on Applied Computing, Madrid (pp. 574–577). ACM.


Acknowledgements

The central argument presented herein was developed under the aegis of Templeton project 21853, Cognition as Communication and Interaction. The initial development of SDST was extracted from the unpublished MSc dissertation of Tanay (2012) and from Tanay et al. (2013). This work was originally presented by Bishop at the PT-AI conference, St. Antony’s College, Oxford, 22nd–23rd September 2013.

Corresponding author

Correspondence to J. M. Bishop.



Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Bishop, J.M., Nasuto, S.J., Tanay, T., Roesch, E.B., Spencer, M.C. (2016). HeX and the Single Anthill: Playing Games with Aunt Hillary. In: Müller, V.C. (eds) Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_22

