Explaining Everything

  • David Davenport
Chapter
Part of the Synthese Library book series (SYLI, volume 376)

Abstract

Oxford physicist David Deutsch recently claimed that AI researchers had made no progress towards creating truly intelligent agents and were unlikely to do so until they began making machines that could produce creative explanations for themselves. Deutsch argued that AI must be possible because of the Universality of Computation, but that progress towards it would require nothing less than a new philosophical direction: a rejection of inductivism in favour of fallibilism. This paper sets out to review and respond to these claims. After first establishing a broad framework and terminology with which to discuss these questions, it examines the inductivist and fallibilist philosophies. It argues that Deutsch is right about fallibilism, not only because of the need for creative explanations but also because it makes it easier for agents to create and maintain models—a crucial ability for any sophisticated agent. However, his claim that AI research has made no progress is debatable, if not mistaken. The paper concludes with suggestions for ways in which agents might come up with truly creative explanations and looks briefly at the meaning of knowledge and truth in a fallibilist world.
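
As an illustrative aside (this sketch is not from the chapter), the conjecture-and-refutation cycle that fallibilism prescribes can be caricatured in a few lines of Python. All names here (propose, refutes, best_surviving) are hypothetical, and random search merely stands in for the genuinely creative conjecture-generation the paper is concerned with:

    import random
    from typing import Callable, List, Tuple

    Observation = Tuple[float, float]        # an (input, output) pair
    Hypothesis = Callable[[float], float]    # a candidate explanatory model

    def propose(rng: random.Random) -> Tuple[str, Hypothesis]:
        """Boldly conjecture a linear model; nothing is induced from past data."""
        a, b = rng.uniform(-5, 5), rng.uniform(-5, 5)
        return f"y = {a:.2f}*x + {b:.2f}", lambda x, a=a, b=b: a * x + b

    def refutes(h: Hypothesis, obs: Observation, tol: float = 0.5) -> bool:
        """A single discordant observation is enough to reject a conjecture."""
        x, y = obs
        return abs(h(x) - y) > tol

    def best_surviving(observations: List[Observation],
                       trials: int = 10_000, seed: int = 0) -> str:
        """Return the first conjecture that survives all attempted refutations.
        Survival is always tentative: the model is unrefuted, never verified."""
        rng = random.Random(seed)
        for _ in range(trials):
            label, h = propose(rng)
            if not any(refutes(h, o) for o in observations):
                return label
        return "no surviving conjecture"

    if __name__ == "__main__":
        data = [(float(x), 2.0 * x + 1.0) for x in range(5)]  # hidden law: y = 2x + 1
        print(best_surviving(data))

The asymmetry the sketch exhibits (one observation can refute a model, but no number of observations can verify one) is the Popperian point; on the chapter's reading of Deutsch, replacing the random proposer with a genuinely creative one is precisely the open problem of AGI.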

Keywords

AGI · Creativity · Explanation · Mental models · Fallibilism · Computationalism · Truth · Knowledge · Information

References

  1. Adriaans, P. (2013). Information. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 Edition). Retrieved from http://plato.stanford.edu/archives/fall2013/entries/information/
  2. Bartha, P. (2013). Analogy and analogical reasoning. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 Edition). Retrieved from http://plato.stanford.edu/archives/fall2013/entries/reasoning-analogy/
  3. Clark, A. (2013a). Are we predictive engines? Perils, prospects, and the puzzle of the porous perceiver. Behavioral and Brain Sciences, 36(3), 233–253. doi: 10.1017/S0140525X12002440.
  4. Clark, A. (2013b). Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–253. doi: 10.1017/S0140525X12000477.
  5. Davenport, D. (1997). Towards a computational account of the notion of truth. TAINN’97. Retrieved from http://www.cs.bilkent.edu.tr/~david/papers/truth.doc
  6. Davenport, D. (2009). Revisited: A computational account of the notion of truth. In J. Vallverdu (Ed.), ecap09, Proceedings of the 7th European Conference on Philosophy and Computing. Barcelona: Universitat Autonoma de Barcelona.
  7. Davenport, D. (2012a). Computationalism: Still the only game in town. Minds and Machines, 22(3), 183–190. doi: 10.1007/s11023-012-9271-5.
  8. Davenport, D. (2012b). The two (computational) faces of AI. In V. C. Müller (Ed.), Theory and philosophy of artificial intelligence (SAPERE). Berlin: Springer.
  9. Deutsch, D. (2011). The beginning of infinity. New York: Penguin.
  10. Deutsch, D. (2012, October 3). Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What's holding us up? Retrieved June 21, 2013, from Aeon Magazine: http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/
  11. Floridi, L. (2011). The philosophy of information. New York: Oxford University Press.
  12. Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4(11). doi: 10.1371/journal.pcbi.1000211.
  13. Friston, K., Adams, R. A., Perrinet, L., & Breakspear, M. (2012). Perceptions as hypotheses: Saccades as experiments. Frontiers in Psychology, 3, 151. doi: 10.3389/fpsyg.2012.00151.
  14. Goertzel, B. (2012). The real reasons we don’t yet have AGI. Retrieved from http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
  15. Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer. doi: 10.1007/b138233.
  16. Kaufman, A. S. (2000). Tests of intelligence. In R. J. Sternberg (Ed.), Handbook of intelligence. New York: Cambridge University Press.
  17. Legg, S. (2008). Machine super intelligence (PhD thesis). Lugano: Faculty of Informatics of the University of Lugano. Retrieved from http://www.vetta.org/documents/Machine_Super_Intelligence.pdf
  18. Nagel, J., Mar, R., & San Juan, V. (2013). Authentic Gettier cases: A reply to Starmans and Friedman. Cognition, 129(3), 666–669.
  19. Popper, K. R. (1959). The logic of scientific discovery. New York: Basic Books.
  20. Rathmanner, S., & Hutter, M. (2011). A philosophical treatise of universal induction. Entropy, 13, 1076–1136.
  21. Vickers, J. (2013). The problem of induction. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 Edition). Retrieved from http://plato.stanford.edu/archives/fall2013/entries/induction-problem/
  22. Wiedermann, J. (2013). The creativity mechanisms in embodied agents: An explanatory model. IEEE Symposium Series on Computational Intelligence (SSCI 2013). Singapore: IEEE.
  23. Woodward, J. (2011). Scientific explanation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2011 Edition). Retrieved from http://plato.stanford.edu/archives/win2011/entries/scientific-explanation/

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Bilkent University, Ankara, Turkey
