Distributed Subsymbolic Representations for Natural Language: How many Features Do You Need?

  • Richard F. E. Sutcliffe
Conference paper
Part of the Workshops in Computing book series (WORKSHOPS COMP.)


In a Natural Language Understanding system, be it connectionist or otherwise, it is often desirable for representations to be as compact as possible. In this paper we present a simple algorithm for thinning down an existing set of distributed concept representations, which form the lexicon in a prototype story paraphrase system exploiting both conventional and connectionist approaches to Artificial Intelligence (AI). We also present some measures for evaluating a lexicon's performance. The main result is that the algorithm appears to work well: we can use it to balance the level of detail in a lexicon against the amount of space it requires. There are also interesting ramifications concerning meaning in natural language.
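The paper itself details the thinning algorithm; as a rough illustration only, the idea of pruning feature dimensions from distributed concept representations while trying to preserve the lexicon's similarity structure can be sketched as follows. Everything here (the greedy strategy, the cosine-based distortion measure, the function names `thin_lexicon` and `similarity_profile`) is a hypothetical reconstruction, not the algorithm the author describes.

```python
import itertools
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors; 0.0 for a zero vector.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_profile(lexicon, keep):
    # Pairwise cosine similarities over the kept feature positions only.
    words = sorted(lexicon)
    return {
        (w1, w2): cosine([lexicon[w1][i] for i in keep],
                         [lexicon[w2][i] for i in keep])
        for w1, w2 in itertools.combinations(words, 2)
    }

def thin_lexicon(lexicon, n_target):
    # Greedily drop, one at a time, the feature whose removal least
    # distorts the lexicon's original pairwise similarity structure,
    # until only n_target feature positions remain.
    n = len(next(iter(lexicon.values())))
    keep = list(range(n))
    reference = similarity_profile(lexicon, keep)
    while len(keep) > n_target:
        best_feat, best_err = None, None
        for f in keep:
            trial = [i for i in keep if i != f]
            prof = similarity_profile(lexicon, trial)
            err = sum((prof[p] - reference[p]) ** 2 for p in prof)
            if best_err is None or err < best_err:
                best_feat, best_err = f, err
        keep.remove(best_feat)
    return {w: [v[i] for i in keep] for w, v in lexicon.items()}, keep
```

On a toy lexicon with a redundant (duplicated) feature dimension, a greedy pass like this tends to discard the redundant dimension first, which is one concrete way the trade-off between level of detail and storage space could be operationalised.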


Keywords: Machine Translation · Concept Representation · Lexical Entry · Context Word · Artificial Intelligence System
These keywords were added by machine and not by the authors.





Copyright information

© Springer-Verlag Berlin Heidelberg 1991

