A Structured Connectionist Approach to Inferencing and Retrieval

  • Trent E. Lange
Part of the Springer International Series in Engineering and Computer Science book series (SECS, volume 292)


Simple connectionist models have generally been unable to perform natural language understanding or memory retrieval beyond the stereotypical situations they have seen before. This is because they have had difficulty representing and applying general knowledge rules that require variables, barring them from the high-level inferencing necessary for planning, reasoning, and natural language understanding. This chapter describes ROBIN, a structured (i.e., localist) connectionist model capable of massively-parallel high-level inferencing with variable bindings and rule application, and REMIND, a model based on ROBIN that explores the integration of language understanding and episodic memory retrieval in a single spreading-activation mechanism.
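
The chapter's key mechanism is spreading activation over a structured (localist) network in which variable bindings are carried by the activation itself rather than by symbolic pointers. As a rough illustration only, the following Python sketch shows one way such a network can be simulated: a graded "evidential" activation spreads over weighted links to signal plausibility, while a separate "signature" value propagates unchanged so that a role node ends up bound to the concept that fills it. All node names, weights, and update rules here are hypothetical simplifications for exposition, not the actual equations of ROBIN or REMIND.

    # Minimal sketch of a localist spreading-activation network with two kinds
    # of activation: graded evidential activation (plausibility) and signature
    # values that propagate unchanged to implement dynamic variable binding.
    # Names, weights, and update rules are illustrative, not from the chapter.

    class Node:
        def __init__(self, name):
            self.name = name
            self.evidential = 0.0   # graded plausibility of this concept/role
            self.signature = None   # identity token bound to this node, if any
            self.links = []         # outgoing (target_node, weight) pairs

        def connect(self, target, weight):
            self.links.append((target, weight))

    def spread(nodes, steps=5, decay=0.9):
        """Synchronous updates: evidential activation is summed over weighted
        links and clipped; signatures pass along links unchanged, so a role
        node ends up holding the signature of whatever concept binds it."""
        for _ in range(steps):
            new_evid = {n: n.evidential * decay for n in nodes}
            new_sig = {n: n.signature for n in nodes}
            for n in nodes:
                for target, w in n.links:
                    new_evid[target] += w * n.evidential
                    if n.signature is not None and new_sig[target] is None:
                        new_sig[target] = n.signature   # binding propagates
            for n in nodes:
                n.evidential = min(1.0, new_evid[n])
                n.signature = new_sig[n]

    # Hypothetical fragment: "John" binds the actor role of a transfer frame,
    # lending evidential support to the frame itself.
    john = Node("John")
    transfer_actor = Node("Transfer-Inside:Actor")
    transfer_frame = Node("Transfer-Inside")

    john.evidential, john.signature = 1.0, id(john)   # unique signature token
    john.connect(transfer_actor, 0.8)
    transfer_actor.connect(transfer_frame, 0.6)

    spread([john, transfer_actor, transfer_frame])
    print(transfer_frame.evidential > 0, transfer_actor.signature == id(john))

In the actual models the networks are of course far larger and the activation functions more carefully calibrated; the sketch is only meant to make the idea of binding-by-activation concrete.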


Keywords: Memory Retrieval, Connectionist Model, Variable Binding, Language Understanding, Evidential Activation





Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • Trent E. Lange
  1. Artificial Intelligence Laboratory, Computer Science Department, University of California, Los Angeles
