Subsymbolic Parsing of Embedded Structures

  • Risto Miikkulainen
Part of the Springer International Series in Engineering and Computer Science book series (SECS, volume 292)

Abstract

Symbolic artificial intelligence is motivated by the hypothesis that symbol manipulation is both necessary and sufficient for intelligence [34]. Symbolic systems have been quite successful, for example, in modeling in-depth natural language processing [13, 26, 43], episodic memory [22, 24], and problem solving [23, 35, 36]. In such systems, knowledge is encoded in terms of explicit symbolic structures, and processing is based on handcrafted rules that operate on these structures.
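The symbolic approach described above can be sketched minimally as explicit structures plus handcrafted rules that operate on them (a purely illustrative example, not taken from the chapter; the case-role frame and the `passivize` rule are invented here):

```python
# Explicit symbolic structure: a case-role frame for
# "the dog chased the cat" (act plus agent/patient roles).
sentence = ("chased", {"agent": "dog", "patient": "cat"})

# Handcrafted rule operating on the structure: form the passive.
def passivize(frame):
    act, roles = frame
    return f"the {roles['patient']} was {act} by the {roles['agent']}"

print(passivize(sentence))  # -> the cat was chased by the dog
```

Knowledge here is transparent and compositional, but every rule must be written by hand — the property that subsymbolic (connectionist) models aim to replace with learned distributed representations.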

Keywords

Hidden Layer · Relative Clause · Sentence Structure · Parse Tree · Embedded Clause
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  [1] Robert B. Allen. Several studies on natural language and back-propagation. In Proceedings of the IEEE First International Conference on Neural Networks (San Diego, CA), volume II, pages 335–341, Piscataway, NJ, 1987. IEEE.
  [2] Robert B. Allen and Mark E. Riecken. Reference in connectionist language users. In R. Pfeifer, Z. Schreter, F. Fogelman Soulié, and L. Steels, editors, Connectionism in Perspective, pages 301–308. Elsevier, New York, 1989.
  [3] Alan D. Baddeley. Working Memory. Oxford University Press, Oxford, UK; New York, 1986.
  [4] George Berg. A connectionist parser with recursive sentence structure and lexical disambiguation. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 32–37, Cambridge, MA, 1992. MIT Press.
  [5] Douglas S. Blank, Lisa A. Meeden, and James B. Marshall. Exploring the symbolic/subsymbolic continuum: A case study of RAAM. In John Dinsmore, editor, The Symbolic and Connectionist Paradigms: Closing the Gap, pages 113–148. Erlbaum, Hillsdale, NJ, 1992.
  [6] Alfonso Caramazza and Edgar B. Zurif. Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and Language, 3:572–582, 1976.
  [7] David J. Chalmers. Syntactic transformations on distributed representations. Connection Science, 2:53–62, 1990.
  [8] Lonnie Chrisman. Learning recursive distributed representations for holistic computation. Connection Science, 3:345–366, 1992.
  [9] Walter A. Cook. Case Grammar Theory. Georgetown University Press, Washington, DC, 1989.
  [10] Cynthia Cosic and Paul Munro. Learning to represent and understand locative prepositional phrases. In Proceedings of the 10th Annual Conference of the Cognitive Science Society, pages 257–262, Hillsdale, NJ, 1988. Erlbaum.
  [11] Nelson Cowan. Evolving conceptions of memory storage, selective attention, and their mutual constraints within the human information-processing system. Psychological Bulletin, 104:163–191, 1988.
  [12] Charles Patrick Dolan. Tensor Manipulation Networks: Connectionist and Symbolic Approaches to Comprehension, Learning and Planning. PhD thesis, Computer Science Department, University of California, Los Angeles, 1989. Technical Report UCLA-AI-89-06.
  [13] Michael G. Dyer. In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension. MIT Press, Cambridge, MA, 1983.
  [14] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
  [15] Jeffrey L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7:195–225, 1991.
  [16] Jeffrey L. Elman. Incremental learning, or The importance of starting small. In Proceedings of the 13th Annual Conference of the Cognitive Science Society, pages 443–448, Hillsdale, NJ, 1991. Erlbaum.
  [17] Charles J. Fillmore. The case for case. In Emmon Bach and Robert T. Harms, editors, Universals in Linguistic Theory, pages 1–88. Holt, Rinehart and Winston, New York, 1968.
  [18] Geoffrey E. Hinton. Mapping part-whole hierarchies into connectionist networks. Artificial Intelligence, 46:47–75, 1990.
  [19] Ming S. Huang. A developmental study of children's comprehension of embedded sentences with and without semantic constraints. Journal of Psychology, 114:51–56, 1983.
  [20] Robert A. Jacobs, Michael I. Jordan, and Andrew G. Barto. Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive Science, 15:219–250, 1991.
  [21] Ajay N. Jain. Parsing complex sentences with structured connectionist networks. Neural Computation, 3:110–120, 1991.
  [22] Janet L. Kolodner. Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model. Erlbaum, Hillsdale, NJ, 1984.
  [23] John E. Laird, Allen Newell, and Paul S. Rosenbloom. SOAR: An architecture for general intelligence. Artificial Intelligence, 33:1–64, 1987.
  [24] Michael Lebowitz. Generalization and Memory in an Integrated Understanding System. PhD thesis, Department of Computer Science, Yale University, New Haven, CT, 1980. Research Report 186.
  [25] Geunbae Lee, Margot Flowers, and Michael G. Dyer. Learning distributed representations of conceptual knowledge and their application to script-based story processing. Connection Science, 2:313–346, 1990.
  [26] Wendy G. Lehnert. The Process of Question Answering. Erlbaum, Hillsdale, NJ, 1978.
  [27] Gordon D. Logan and William B. Cowan. On the ability to inhibit thought and action: A theory of an act of control. Psychological Review, 91:295–327, 1984.
  [28] James L. McClelland and Alan H. Kawamoto. Mechanisms of sentence processing: Assigning roles to constituents. In James L. McClelland and David E. Rumelhart, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 2: Psychological and Biological Models, pages 272–325. MIT Press, Cambridge, MA, 1986.
  [29] Risto Miikkulainen. A PDP architecture for processing sentences with relative clauses. In Hans Karlgren, editor, Proceedings of the 13th International Conference on Computational Linguistics, pages 201–206, Helsinki, Finland, 1990. Yliopistopaino.
  [30] Risto Miikkulainen. Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory. MIT Press, Cambridge, MA, 1993.
  [31] Risto Miikkulainen and Michael G. Dyer. Encoding input/output representations in connectionist cognitive systems. In David S. Touretzky, Geoffrey E. Hinton, and Terrence J. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, pages 347–356, San Mateo, CA, 1989. Morgan Kaufmann.
  [32] Risto Miikkulainen and Michael G. Dyer. Natural language processing with modular neural networks and distributed lexicon. Cognitive Science, 15:343–399, 1991.
  [33] Paul Munro, Cynthia Cosic, and Mary Tabasko. A network for encoding, decoding and translating locative prepositions. Connection Science, 3:225–240, 1991.
  [34] Allen Newell. Physical symbol systems. Cognitive Science, 4:135–183, 1980.
  [35] Allen Newell. Unified Theories of Cognition. Harvard University Press, Cambridge, MA, 1991.
  [36] Allen Newell and Herbert A. Simon. GPS: A program that simulates human thought. In Edward A. Feigenbaum and Jerome A. Feldman, editors, Computers and Thought. McGraw-Hill, New York, 1963.
  [37] Donald A. Norman and Tim Shallice. Attention to action: Willed and automatic control of behavior. Technical Report 99, Center for Human Information Processing, University of California, San Diego, 1980.
  [38] Jordan B. Pollack. Cascaded back-propagation on dynamic connectionist networks. In Proceedings of the Ninth Annual Conference of the Cognitive Science Society, pages 391–404, Hillsdale, NJ, 1987. Erlbaum.
  [39] Jordan B. Pollack. Recursive distributed representations. Artificial Intelligence, 46:77–105, 1990.
  [40] Michael I. Posner and C. R. Snyder. Attention and cognitive control. In Robert L. Solso, editor, Information Processing and Cognition, pages 55–85. Erlbaum, Hillsdale, NJ, 1975.
  [41] David E. Rumelhart, Geoffrey E. Hinton, and James L. McClelland. A general framework for parallel distributed processing. In David E. Rumelhart and James L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, pages 45–76. MIT Press, Cambridge, MA, 1986.
  [42] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. In David E. Rumelhart and James L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, pages 318–362. MIT Press, Cambridge, MA, 1986.
  [43] Roger C. Schank and Robert P. Abelson. Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Erlbaum, Hillsdale, NJ, 1977.
  [44] Walter Schneider and Mark Detweiler. A connectionist/control architecture for working memory. In Gordon H. Bower, editor, The Psychology of Learning and Motivation, volume 21, pages 53–119. Academic Press, New York, 1987.
  [45] Walter Schneider and Richard M. Shiffrin. Controlled and automatic human information processing I: Detection, search, and attention. Psychological Review, 84:1–66, 1977.
  [46] David Servan-Schreiber, Axel Cleeremans, and James L. McClelland. Learning sequential structure in simple recurrent networks. In David S. Touretzky, editor, Advances in Neural Information Processing Systems I, pages 643–652. Morgan Kaufmann, San Mateo, CA, 1989.
  [47] David Servan-Schreiber, Axel Cleeremans, and James L. McClelland. Graded state machines: The representation of temporal contingencies in simple recurrent networks. Machine Learning, 7:161–194, 1991.
  [48] Tim Shallice. Specific impairments of planning. Philosophical Transactions of the Royal Society of London B, 298:199–209, 1982.
  [49] Tim Shallice. From Neuropsychology to Mental Structure. Cambridge University Press, Cambridge, UK, 1988.
  [50] Noel E. Sharkey and Amanda J. C. Sharkey. A modular design for connectionist parsing. In Marc F. J. Drossaers and Anton Nijholt, editors, Twente Workshop on Language Technology 3: Connectionism and Natural Language Processing, pages 87–96, Enschede, the Netherlands, 1992. Department of Computer Science, University of Twente.
  [51] Richard M. Shiffrin and Walter Schneider. Controlled and automatic human information processing II: Perceptual learning, automatic attending, and a general theory. Psychological Review, 84:127–190, 1977.
  [52] Richard M. Shiffrin and Walter Schneider. Automatic and controlled processing revisited. Psychological Review, 91:269–276, 1984.
  [53] Robert F. Simmons and Yeong-Ho Yu. Training a neural network to be a context-sensitive grammar. In Proceedings of the Fifth Rocky Mountain Conference on Artificial Intelligence, Las Cruces, NM, pages 251–256, 1990.
  [54] Paul Smolensky. On the proper treatment of connectionism. Behavioral and Brain Sciences, 11:1–74, 1988.
  [55] Paul Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46:159–216, 1990.
  [56] Mark F. St. John. The story gestalt: A model of knowledge-intensive processes in text comprehension. Cognitive Science, 16:271–306, 1992.
  [57] Mark F. St. John and James L. McClelland. Applying contextual constraints in sentence comprehension. In David S. Touretzky, Geoffrey E. Hinton, and Terrence J. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, pages 338–346, San Mateo, CA, 1989. Morgan Kaufmann.
  [58] Mark F. St. John and James L. McClelland. Learning and applying contextual constraints in sentence comprehension. Artificial Intelligence, 46:217–258, 1990.
  [59] Andreas Stolcke. Learning feature-based semantics with simple recurrent networks. Technical Report TR-90-015, International Computer Science Institute, Berkeley, CA, 1990.
  [60] Ronald A. Sumida. Dynamic inferencing in parallel distributed semantic networks. In Proceedings of the 13th Annual Conference of the Cognitive Science Society, pages 913–917, Hillsdale, NJ, 1991. Erlbaum.
  [61] David S. Touretzky. Connectionism and compositional semantics. In John A. Barnden and Jordan B. Pollack, editors, High-Level Connectionist Models, volume 1 of Advances in Connectionist and Neural Computation Theory (J. A. Barnden, series editor), pages 17–31. Ablex, Norwood, NJ, 1991.

Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • Risto Miikkulainen
    1. Department of Computer Sciences, The University of Texas at Austin, Austin