Beyond Word N-Grams
We describe, analyze, and experimentally evaluate a new probabilistic model for word-sequence prediction in natural language based on prediction suffix trees (PSTs). By using efficient data structures, we extend the notion of PST to unbounded vocabularies. We also show how a Bayesian approach based on recursive priors over all possible PSTs can be used to maintain tree mixtures efficiently. These mixtures achieve provably and practically better performance than almost any single model. We evaluate the model on several corpora. The low perplexity achieved by relatively small PST mixture models suggests that they may be an advantageous alternative, both theoretically and practically, to the widely used n-gram models.
Keywords: Online Mode · Distinct Word · Paradise Lost · Brown Corpus · Splay Tree