Representation by correspondence: An inadequate conception of knowledge for artificial systems

  • Philosophy of Artificial Intelligence
  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 1502))

Abstract

In the Artificial Intelligence community, knowledge is normally viewed as structures in the mind (symbols, features, images, etc.) that correspond to structures in the environment. I argue that this standard view is inadequate and cannot aid in the construction of truly intelligent systems. Representation by correspondence requires prior knowledge of the structure in the mind, of the structure in the world, and of the correspondence between them. Unless some other kind of knowledge is already available to the system, it can have no knowledge by correspondence. Various derivatives of this fundamental problem are discussed, including the proliferation of correspondences; the need to posit an observer; the inability to account for error from the system’s point of view; and the radical incompatibility between representation by correspondence and evolutionary or developmental accounts of knowledge.

Editor information

Grigoris Antoniou, John Slaney

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Campbell, R.L. (1998). Representation by correspondence: An inadequate conception of knowledge for artificial systems. In: Antoniou, G., Slaney, J. (eds) Advanced Topics in Artificial Intelligence. AI 1998. Lecture Notes in Computer Science, vol 1502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095037

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65138-3

  • Online ISBN: 978-3-540-49561-1
