Abstract
When a problem is highly open-ended, with many random variables influencing the process, no human programmer can account for every single case: the number of cases grows dramatically with each additional parameter. In such scenarios, probabilistic algorithms have the greatest applicability. Given a handful of example scenarios it might come across, such an algorithm can handle a new scenario with reasonable accuracy. The key word here is “reasonable”: no probabilistic algorithm always returns the optimal result with probability 1, for that would make it a deterministic algorithm, which, as just discussed, cannot handle every potential case. In this chapter, we discuss the algorithms that were employed to successfully complete the experiment.
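The trade-off described above can be illustrated with a minimal sketch (not taken from the chapter): an epsilon-greedy action selector, a simple probabilistic strategy common in reinforcement learning. The function name, the action-value estimates, and the epsilon value below are all illustrative assumptions; the point is only that such a policy picks the best-known action with high probability, but never with probability 1.

```python
import random

def epsilon_greedy(values, epsilon=0.1, rng=random):
    """Pick an action index given estimated action values.

    With probability epsilon, explore a uniformly random action;
    otherwise exploit the action with the highest estimated value.
    The optimal action is therefore chosen with probability < 1.
    """
    if rng.random() < epsilon:
        return rng.randrange(len(values))                       # explore
    return max(range(len(values)), key=values.__getitem__)      # exploit

rng = random.Random(42)
values = [0.2, 0.8, 0.5]   # hypothetical action-value estimates; index 1 is best
picks = [epsilon_greedy(values, 0.1, rng) for _ in range(1000)]
frac_optimal = picks.count(1) / len(picks)
# frac_optimal is high (about 0.93 in expectation) but is never guaranteed to be 1.0
```

Setting `epsilon = 0` would recover a deterministic policy, which always exploits its current estimates and so can never correct for a case its examples did not cover.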
Copyright information
© 2014 The Author(s)
Cite this chapter
Nath, V., Levinson, S.E. (2014). Machine Learning. In: Autonomous Robotics and Deep Learning. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-05603-6_6
Print ISBN: 978-3-319-05602-9
Online ISBN: 978-3-319-05603-6