Predictive Complexity and Information
A new notion of predictive complexity and a corresponding amount of information are introduced. Predictive complexity is a generalization of Kolmogorov complexity which bounds the ability of any algorithm to predict elements of a sequence of outcomes. We consider predictive complexity for a wide class of bounded loss functions that generalize the square-loss function. Relations between the unconditional predictive complexity KG(x) and the conditional predictive complexity KG(x|y) are studied. We define an algorithm with an "expanding property": with positive probability it transforms sequences of a given predictive complexity into sequences of substantially greater predictive complexity. The notion of the amount of predictive information IG(y: x) is also studied. We show that this information is non-commutative in a very strong sense and present asymptotic relations between the values IG(y: x), IG(x: y), KG(x), and KG(y).
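To make the setting concrete, the following sketch (illustrative only, not from the paper; the strategy and function names are assumptions) computes the cumulative square loss of a simple computable prediction strategy on a binary sequence. Predictive complexity KG(x) is, roughly, a lower bound, up to an additive constant, on the total loss that any computable strategy can incur on x.

```python
def square_loss(prediction, outcome):
    """Square loss: lambda(omega, gamma) = (omega - gamma)^2."""
    return (outcome - prediction) ** 2

def running_mean_strategy(history):
    """A simple computable strategy: predict the Laplace-smoothed
    frequency of 1s observed so far."""
    return (sum(history) + 1) / (len(history) + 2)

def cumulative_loss(sequence, strategy=running_mean_strategy):
    """Total square loss of `strategy` when predicting `sequence`
    element by element from its prefix."""
    total = 0.0
    for t, outcome in enumerate(sequence):
        total += square_loss(strategy(sequence[:t]), outcome)
    return total
```

A highly regular sequence such as 1111 incurs a much smaller cumulative loss under this strategy than an alternating sequence such as 1010, mirroring the intuition that low predictive complexity corresponds to predictability.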
Keywords: Loss Function · Binary Tree · Computable Mapping · Expert Advice · Prediction Strategy