Abstract
The successful design of practical algorithms for a class of problems often depends on the existence of a formal model in which algorithmic ideas can be developed, analyzed, and compared. In machine learning, a number of such formal models have been proposed. Some have generated elegant mathematical results but have had rather limited practical impact. In this paper, we argue that the on-line prediction model is a good source of interesting algorithmic ideas with great potential for new applications. To this end, we describe a simple algorithm based on “multiplicative weights,” analyze it within the on-line prediction model, and finally present some of its variants and applications.
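As a minimal sketch of the “multiplicative weights” idea the abstract refers to: a learner maintains one weight per expert, predicts by a weighted-majority vote, and multiplicatively shrinks the weight of every expert that erred. The function name, the binary-outcome setting, and the learning rate `eta` below are illustrative choices, not details taken from the paper.

```python
import math

def multiplicative_weights(expert_preds, outcomes, eta=0.5):
    """On-line binary prediction with expert advice via multiplicative weights.

    expert_preds: T rounds, each a list of N expert predictions in {0, 1}
    outcomes:     T true outcomes in {0, 1}
    Returns the learner's predictions and the final expert weights.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n          # all experts start equally trusted
    learner_preds = []
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Weighted-majority vote: predict 1 iff experts saying 1 hold
        # at least half of the total weight.
        mass_one = sum(w for w, p in zip(weights, preds) if p == 1)
        learner_preds.append(1 if mass_one >= total / 2 else 0)
        # Multiplicatively penalize each expert that made a mistake.
        weights = [w * (math.exp(-eta) if p != y else 1.0)
                   for w, p in zip(weights, preds)]
    return learner_preds, weights
```

With this update rule, an expert that is always correct keeps weight 1 while a consistently wrong expert's weight decays exponentially, so the vote quickly tracks the best expert.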
© 1997 Springer-Verlag London Limited
Cite this paper
Cesa-Bianchi, N., Panizza, S. (1997). Recent Results In On-line Prediction and Boosting. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN VIETRI-96. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0951-8_3
Publisher Name: Springer, London
Print ISBN: 978-1-4471-1240-2
Online ISBN: 978-1-4471-0951-8