
Machine Learning


Part of the book series: SpringerBriefs in Computer Science ((BRIEFSCOMPUTER))

Abstract

Whenever a problem is extremely open-ended, with a large number of random variables affecting the process, it is impossible for a human programmer to account for every single case: the number of cases grows dramatically with each additional parameter. In such scenarios, probabilistic algorithms have the greatest applicability. Given a handful of example scenarios it might come across, such an algorithm can handle a new scenario with reasonable accuracy. The key word in the previous statement is “reasonable”: no probabilistic algorithm always returns the optimal result with probability 1. Such an algorithm would be deterministic and, as has just been discussed, could not handle every potential case. In this chapter, we discuss the algorithms that were employed to successfully complete the experiment.
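The abstract's point — that a learner trained on a few examples can classify a new scenario with reasonable (but not certain) accuracy — can be illustrated with a minimal naive Bayes sketch. The class names, feature values, and query point below are hypothetical, chosen purely for illustration; this is not the algorithm used in the chapter's experiment.

```python
import math

# Hypothetical training data: a few labeled one-dimensional examples
# per class (e.g. a sensor reading for a "near" vs. "far" object).
examples = {
    "near": [0.5, 1.5, 1.0],
    "far":  [2.5, 3.5, 3.0],
}

def gaussian(x, mean, var):
    """Likelihood of x under a Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x):
    """Posterior probabilities over classes (naive Bayes, uniform prior).

    Each class is modeled by a Gaussian fitted to its few examples;
    the scores are normalized so they sum to 1.
    """
    scores = {}
    for label, data in examples.items():
        mean = sum(data) / len(data)
        var = sum((d - mean) ** 2 for d in data) / len(data)
        scores[label] = gaussian(x, mean, var)
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# A query point never seen in training: the classifier still commits to
# the more plausible class, but with probability strictly less than 1.
posterior = classify(1.8)
best = max(posterior, key=posterior.get)
```

Here `posterior["near"]` comes out around 0.9: the algorithm handles the unseen scenario with "reasonable" accuracy, but never with certainty — exactly the trade-off the abstract describes.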




Copyright information

© 2014 The Author(s)

About this chapter

Cite this chapter

Nath, V., Levinson, S.E. (2014). Machine Learning. In: Autonomous Robotics and Deep Learning. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-05603-6_6


  • DOI: https://doi.org/10.1007/978-3-319-05603-6_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-05602-9

  • Online ISBN: 978-3-319-05603-6

  • eBook Packages: Computer Science, Computer Science (R0)
