The Rocky Road from Hume to Kant: Correlations and Theories in Robots and Animals
This essay addresses the problem of prediction, which is at root a problem of causation. The question of how causal relationships can be represented in minds has been an important thread in Sloman’s work, and ongoing conversations with him have influenced my thinking. The paper examines several aspects of prediction and causation. First, it considers why animals and machines benefit from being able to predict, and the requirements this places on prediction mechanisms. Next, it examines some machines we have synthesised for predicting the effects a robot manipulator has on an object it pushes; these mechanisms embody varying amounts of prior knowledge. This raises the questions of whether and how predicting machines benefit from prior knowledge, and whether a prediction mechanism is in any sense equivalent to a theory. I argue that, given the constraints faced by animals and robots, it is often better to construct many micro-theories than one macro-theory. The idea of theories also leads to an examination of levels of description in theory building, and in turn to the question of whether hierarchies of increasingly abstract prediction machines can lead to better robots, and to a better understanding of animals.
Keywords: Robotics, Prediction, Learning, Scientific theories, Causation
While the opinions expressed here are purely my own, most of the technical work has been carried out by Marek Kopicki, Sebastian Zurek, Rustam Stolkin, Thomas Mörwald and Vero Arriola-Rios. Thanks to them all. Many of the ideas for this work came from Aaron Sloman, including the practical idea of the polyflap domain. Thanks to Aaron for many hours of discussion, and for his kindness and support in my career.
- Arriola-Rios VE, Wyatt JL (2011) 2D mass-spring-like model for prediction of a sponge’s behaviour upon robotic interaction. In: Research and development in intelligent systems XXVIII, pp 195–208
- Craik K (1943) The nature of explanation. Cambridge University Press, Cambridge
- Dawkins R (2006) The blind watchmaker. Penguin, Harmondsworth
- Flanagan JR, Wing AM (1997) The role of internal models in motion planning and control: evidence from grip force adjustments during movements of hand-held loads. J Neurosci 17(4):1519–1528
- Hume D (2008) An enquiry concerning human understanding. Oxford University Press, Oxford
- Kopicki M (2010) Prediction learning in robotic manipulation. Ph.D. thesis, University of Birmingham, Birmingham
- Kopicki M, Wyatt J, Stolkin R (2009) Prediction learning in robotic pushing manipulation. In: International conference on advanced robotics, 2009. ICAR 2009, pp 1–6
- Kopicki M, Zurek S, Stolkin R, Mörwald T, Wyatt J (2011) Learning to predict how rigid objects behave under simple manipulation. In: Proceedings of the IEEE international conference on robotics and automation (ICRA11). http://www.cs.bham.ac.uk/msk/pub/icra2011.pdf
- Miall R, Wolpert D (1996) Forward models for physiological motor control. Neural Netw 9(8):1265–1279
- Sloman A (1978) The computer revolution in philosophy. Harvester Press, Hassocks. http://www.cs.bham.ac.uk/research/projects/cogaff/crp/