
Estimation of Entropy from Subword Complexity

Chapter in: Challenges in Computational Statistics and Data Mining

Part of the book series: Studies in Computational Intelligence (SCI, volume 605)

Abstract

Subword complexity is the function that counts how many distinct substrings of a given length are contained in a given string. In this paper, two estimators of block entropy based on the subword complexity profile are proposed. The first estimator works well only for IID processes with uniform probabilities. The second estimator provides a lower bound on block entropy for any strictly stationary process whose block distributions are skewed towards less probable values. Using this estimator, estimates of block entropy for natural language are obtained, confirming earlier hypotheses.
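The chapter's two estimators are not reproduced on this page, so the following is only a minimal sketch, in Python, of the object the abstract defines: the subword complexity profile f(k), the number of distinct length-k substrings of a string. The quantity log2 f(k) printed alongside is merely the maximal block entropy consistent with the observed support, illustrating the kind of information a profile-based estimator extracts; the function name and the toy string are illustrative assumptions, not taken from the chapter.

```python
from math import log2

def subword_complexity(text: str, max_k: int) -> list[int]:
    """Return the profile [f(1), ..., f(max_k)], where f(k) is the
    number of distinct length-k substrings of `text`."""
    profile = []
    for k in range(1, max_k + 1):
        # Slide a window of length k over the string; a set keeps
        # only the distinct blocks.
        blocks = {text[i:i + k] for i in range(len(text) - k + 1)}
        profile.append(len(blocks))
    return profile

sample = "abracadabra"
for k, f_k in enumerate(subword_complexity(sample, 5), start=1):
    # log2 f(k) upper-bounds the entropy of any distribution
    # supported on the observed k-blocks.
    print(f"k={k}: f(k)={f_k}, log2 f(k) = {log2(f_k):.3f} bits")
```

For an IID process with uniform probabilities over an alphabet of size m, the profile of a sample of length n roughly saturates at min(m^k, n - k + 1), which is why a profile-based estimator can work well in exactly that case, as the abstract notes for the first estimator.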

Notes

  1. www.gutenberg.org.

Author information

Correspondence to Łukasz Dębowski.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Dębowski, Ł. (2016). Estimation of Entropy from Subword Complexity. In: Matwin, S., Mielniczuk, J. (eds) Challenges in Computational Statistics and Data Mining. Studies in Computational Intelligence, vol 605. Springer, Cham. https://doi.org/10.1007/978-3-319-18781-5_4

  • DOI: https://doi.org/10.1007/978-3-319-18781-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-18780-8

  • Online ISBN: 978-3-319-18781-5

  • eBook Packages: Engineering, Engineering (R0)
