Complexity Issues in Robotic Machine Learning of Natural Language

  • Patrick Suppes
  • Lin Liang
  • Michael Böttner
Conference paper
Part of the Woodward Conference book series (WOODWARD)

Abstract

Sections 1, 2 and 3 outline the theoretical framework we have been developing for a probabilistic theory of machine learning of natural language. Section 4 gives some simple examples showing how mean learning curves can be constructed from the theory; we also show, however, that explicit computation of the mean learning curve for an arbitrary number of sentences is infeasible, even when the learning itself is quite rapid. Section 5 briefly describes the kinds of comprehension grammars generated by our theory from a given finite sample of sentences.
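To illustrate what a mean learning curve is in this setting, the following is a minimal sketch, assuming a simple one-parameter all-or-none learning model in the spirit of the stimulus-sampling tradition; the parameter `c`, the function names, and the zero-before-learning response rule are illustrative assumptions, not the paper's actual axioms. The mean curve is estimated by averaging many simulated learners, which is exactly the kind of quantity whose exact computation the paper argues becomes infeasible for an arbitrary number of sentences.

```python
import random

def simulate_learner(n_trials, c=0.2, rng=None):
    # One learner under an assumed all-or-none model: on each trial the
    # correct association is acquired with probability c; before
    # acquisition the response is scored 0, afterwards 1.
    rng = rng or random
    learned = False
    outcomes = []
    for _ in range(n_trials):
        outcomes.append(1.0 if learned else 0.0)
        if not learned and rng.random() < c:
            learned = True
    return outcomes

def mean_learning_curve(n_learners=5000, n_trials=20, c=0.2, seed=0):
    # Average trial-by-trial performance over many independent learners;
    # this Monte Carlo estimate approximates the theoretical curve
    # P(correct on trial n) = 1 - (1 - c)**(n - 1).
    rng = random.Random(seed)
    totals = [0.0] * n_trials
    for _ in range(n_learners):
        for t, x in enumerate(simulate_learner(n_trials, c, rng)):
            totals[t] += x
    return [s / n_learners for s in totals]

curve = mean_learning_curve()
```

Under this toy model the mean curve rises geometrically toward 1; the paper's point is that once responses depend on which of many sentences have been presented, the analogous exact computation blows up combinatorially.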

Keywords

Elementary Action 



Copyright information

© Springer-Verlag New York, Inc. 1992

Authors and Affiliations

  • Patrick Suppes (1)
  • Lin Liang (1)
  • Michael Böttner (2)
  1. Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, USA
  2. Max-Planck-Institute for Psycholinguistics, Nijmegen, The Netherlands
