Abstract
Machine learning deals with programs that learn from experience, i.e., programs that improve or adapt their performance on a certain task or group of tasks over time. In this tutorial, we outline some issues in machine learning that pertain to ambient and computational intelligence. As an example, we consider programs that are faced with learning tasks or concepts that are impossible to learn exactly in finitely bounded time. This leads to the study of programs that form hypotheses which are, with high probability, approximately correct ('probably approximately correct', or PAC, learning). We also survey a number of meta-learning techniques, such as bagging and adaptive boosting, which can improve the performance of machine learning algorithms substantially.
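To make the bagging idea concrete, the following is a minimal sketch (not from the chapter itself): an ensemble of weak learners, each trained on a bootstrap resample of the data, whose predictions are combined by majority vote. The decision-stump learner and the toy one-dimensional dataset are illustrative assumptions, chosen only to keep the example self-contained.

```python
import random

def train_stump(data):
    """Fit a 1-D decision stump: choose the (threshold, sign) pair
    minimising training error on the given (x, y) pairs, y in {-1, +1}."""
    best = None
    for threshold, _ in data:
        for sign in (1, -1):
            err = sum(1 for x, y in data
                      if (1 if sign * (x - threshold) > 0 else -1) != y)
            if best is None or err < best[0]:
                best = (err, threshold, sign)
    _, threshold, sign = best
    return lambda x: 1 if sign * (x - threshold) > 0 else -1

def bagging(data, n_models=25, seed=0):
    """Train n_models stumps on bootstrap resamples of the data;
    the bagged predictor is the majority vote of the ensemble."""
    rng = random.Random(seed)
    models = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    return lambda x: 1 if sum(m(x) for m in models) > 0 else -1

# Toy dataset: label is +1 iff x > 0.5.
data = [(i / 10, 1 if i / 10 > 0.5 else -1) for i in range(10)]
predict = bagging(data)
```

Adaptive boosting differs in that later learners are trained on reweighted data that emphasises the examples earlier learners got wrong, and the ensemble combines votes with learner-specific weights rather than uniformly.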
Copyright information
© 2004 Springer Science+Business Media Dordrecht
Cite this chapter
van Leeuwen, J. (2004). Approaches in Machine Learning. In: Verhaegh, W.F.J., Aarts, E., Korst, J. (eds) Algorithms in Ambient Intelligence. Philips Research, vol 2. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-0703-9_8
Publisher Name: Springer, Dordrecht
Print ISBN: 978-90-481-6490-5
Online ISBN: 978-94-017-0703-9