Abstract
For a string x^n generated by sampling a probability distribution P(x^n), we have already suggested the ideal code length -log P(x^n) as its complexity, the Shannon complexity, justified by the fact that for large alphabets its mean is a tight lower bound on the mean prefix-code length. The problem, of course, is that this measure of complexity depends very strongly on the distribution P, which in the cases of interest to us is not given. Nevertheless, we feel intuitively that a measure of complexity ought to be linked to the ease of describing the string. For instance, consider the following three types of data strings of length n = 20 (the length really ought to be taken large to make our point):
1. 01010101010101010101
2. 00100010000000010000
3. a string generated by flipping a coin 20 times
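The difficulty can be made concrete with a small sketch. Under an i.i.d. Bernoulli model (an assumed stand-in for the general distribution P in the text), the ideal code length -log₂ P(x^n) of a fair-coin model assigns every 20-bit string exactly the same complexity, no matter how regular it looks; the function name and the Bernoulli assumption below are illustrative, not from the chapter:

```python
import math

def shannon_code_length(x, p_one=0.5):
    """Ideal code length -log2 P(x) in bits, under an assumed i.i.d.
    Bernoulli(p_one) model for the bits of the string x."""
    n_ones = x.count("1")
    n_zeros = len(x) - n_ones
    return -(n_ones * math.log2(p_one) + n_zeros * math.log2(1 - p_one))

# Under the fair-coin model, the perfectly regular string and the
# sparse one both cost 20 bits -- the model cannot tell them apart:
print(shannon_code_length("01010101010101010101"))  # 20.0
print(shannon_code_length("00100010000000010000"))  # 20.0
```

This is precisely why a model-free notion of complexity, tied to the length of a description of the string itself, is wanted.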
Copyright information
© 2007 Springer Science+Business Media, LLC
Cite this chapter
(2007). Kolmogorov Complexity. In: Information and Complexity in Statistical Modeling. Information Science and Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-68812-1_4
Print ISBN: 978-0-387-36610-4
Online ISBN: 978-0-387-68812-1