Approaches in Machine Learning

Chapter in: Algorithms in Ambient Intelligence
Part of the book series: Philips Research (PRBS, volume 2)

Abstract

Machine learning deals with programs that learn from experience, i.e., programs that improve or adapt their performance on a certain task or group of tasks over time. In this tutorial, we outline some issues in machine learning that pertain to ambient and computational intelligence. As an example, we consider programs that are faced with learning tasks or concepts that are impossible to learn exactly in finitely bounded time. This leads to the study of programs that, with high probability, form hypotheses that are ‘probably approximately correct’ (PAC learning). We also survey a number of meta-learning techniques, such as bagging and adaptive boosting, which can improve the performance of machine learning algorithms substantially.
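
To make the meta-learning idea concrete, the following is a minimal sketch of bagging (bootstrap aggregating): several copies of a base learner are trained on bootstrap resamples of the training data and combined by majority vote. It is only an illustration of the technique, not code from the chapter; it assumes NumPy and scikit-learn are available, and the decision-tree base learner, the dataset and the ensemble size are arbitrary choices made for the example.

    # Minimal bagging sketch (illustrative only; assumes NumPy and scikit-learn).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    def bagged_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
        """Train n_estimators trees on bootstrap resamples; combine them by majority vote."""
        rng = np.random.default_rng(seed)
        n = len(X_train)
        votes = []
        for _ in range(n_estimators):
            idx = rng.integers(0, n, size=n)  # bootstrap sample: n draws with replacement
            tree = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
            votes.append(tree.predict(X_test))
        # Majority vote over the ensemble (binary 0/1 labels assumed).
        return (np.mean(votes, axis=0) >= 0.5).astype(int)

    if __name__ == "__main__":
        X, y = load_breast_cancer(return_X_y=True)  # illustrative binary classification data
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        single = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
        print("single tree accuracy :", np.mean(single.predict(X_te) == y_te))
        print("bagged trees accuracy:", np.mean(bagged_predict(X_tr, y_tr, X_te) == y_te))

Adaptive boosting differs from bagging in that it reweights the training examples after each round, so that later base learners concentrate on the examples the earlier ones misclassified; scikit-learn's BaggingClassifier and AdaBoostClassifier provide ready-made versions of both ideas.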

Copyright information

© 2004 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

van Leeuwen, J. (2004). Approaches in Machine Learning. In: Verhaegh, W.F.J., Aarts, E., Korst, J. (eds) Algorithms in Ambient Intelligence. Philips Research, vol 2. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-0703-9_8

  • DOI: https://doi.org/10.1007/978-94-017-0703-9_8

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-6490-5

  • Online ISBN: 978-94-017-0703-9
