Beyond Word N-Grams

Part of the book series: Text, Speech and Language Technology ((TLTB,volume 11))

Abstract

We describe, analyze, and evaluate experimentally a new probabilistic model for word-sequence prediction in natural language based on prediction suffix trees (PSTs). By using efficient data structures, we extend the notion of PST to unbounded vocabularies. We also show how to use a Bayesian approach based on recursive priors over all possible PSTs to efficiently maintain tree mixtures. These mixtures have provably and practically better performance than almost any single model. We evaluate the model on several corpora. The low perplexity achieved by relatively small PST mixture models suggests that they may be an advantageous alternative, both theoretically and practically, to the widely used n-gram models.

Part of this work was accomplished at AT&T.
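To make the abstract's idea concrete, the following is a minimal sketch of a suffix-context mixture predictor in the spirit of the chapter's PST mixtures: each word-suffix context up to a maximum depth keeps next-word counts, and the predicted probability recursively blends the smoothed estimates of nested contexts, from the empty context down to the deepest matching suffix. This is a simplified illustration, not the chapter's exact algorithm; the class name `PSTMixture`, the fixed blending weight `alpha`, and the add-one smoothing are all assumptions made for the sketch (the chapter instead derives the mixture weights from recursive Bayesian priors over all possible trees).

```python
from collections import defaultdict

class PSTMixture:
    """Toy mixture of suffix-context predictors (hypothetical sketch).

    Every context (tuple of up to max_depth preceding words) keeps counts
    of the words that followed it.  predict() blends the add-one-smoothed
    estimate of each nested context with the estimate accumulated from
    shallower contexts, giving the deepest matching suffix weight alpha.
    """

    def __init__(self, max_depth=2, alpha=0.5):
        self.max_depth = max_depth
        self.alpha = alpha                      # weight of a node vs. its ancestors
        self.counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)
        self.vocab = set()

    def update(self, history, word):
        """Record `word` as the continuation of every suffix of `history`."""
        self.vocab.add(word)
        for k in range(min(self.max_depth, len(history)) + 1):
            ctx = tuple(history[len(history) - k:])
            self.counts[ctx][word] += 1
            self.totals[ctx] += 1

    def predict(self, history, word):
        """Blended probability of `word` after `history`."""
        prob = 1.0 / max(len(self.vocab), 1)    # uniform fallback
        for k in range(self.max_depth + 1):
            if k > len(history):
                break
            ctx = tuple(history[len(history) - k:])
            if self.totals[ctx] == 0:
                break
            # add-one smoothed estimate at this context node
            p_node = (self.counts[ctx][word] + 1) / (
                self.totals[ctx] + len(self.vocab))
            prob = self.alpha * p_node + (1 - self.alpha) * prob
        return prob

# Usage: train on a toy corpus one word at a time, then compare
# a frequently seen continuation of "the" with an unseen one.
corpus = "the cat sat on the mat the cat ate".split()
model = PSTMixture(max_depth=2)
for i, w in enumerate(corpus):
    model.update(corpus[:i], w)
```

Unlike a fixed-order n-gram model, the effective context length here varies per prediction: deep suffixes contribute only where they have been observed, which is the memory-saving property the abstract attributes to PSTs.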



Copyright information

© 1999 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Pereira, F.C., Singer, Y., Tishby, N. (1999). Beyond Word N-Grams. In: Armstrong, S., Church, K., Isabelle, P., Manzi, S., Tzoukermann, E., Yarowsky, D. (eds) Natural Language Processing Using Very Large Corpora. Text, Speech and Language Technology, vol 11. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-2390-9_8

  • DOI: https://doi.org/10.1007/978-94-017-2390-9_8

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-5349-7

  • Online ISBN: 978-94-017-2390-9
