Visuospatial Skill Learning

  • Seyed Reza Ahmadzadeh
  • Petar Kormushev
Chapter
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 42)

Abstract

This chapter introduces Visuospatial Skill Learning (VSL), a novel interactive robot learning approach. VSL is based on visual perception and allows a robot to acquire new skills by observing a single demonstration while interacting with a tutor. The focus of VSL is on achieving a desired goal configuration of objects relative to one another. VSL captures the object’s context for each demonstrated action. This context is the basis of the visuospatial representation and implicitly encodes the relative positioning of the object with respect to multiple other objects simultaneously. VSL can learn and generalize multi-operation skills from a single demonstration while requiring minimal a priori knowledge about the environment. Different capabilities of VSL, such as learning and generalization of object reconfiguration, classification, and turn-taking interaction, are illustrated through both simulation and real-world experiments.
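
During reproduction, the object context captured in the demonstration must be re-located in the current scene; the chapter's keywords point to Scale Invariant Feature Transform (SIFT) features for this kind of matching. The snippet below is a minimal, illustrative sketch of locating a demonstrated observation in a new world image with SIFT matching in OpenCV; the function name, parameters, and the use of a homography are assumptions made for illustration, not the chapter's actual implementation.

# Minimal sketch (illustrative only, not the authors' implementation):
# locate a demonstrated pre-action observation in a new scene with SIFT.
import cv2
import numpy as np

def locate_observation(observation, world_image, min_matches=10):
    """Estimate where a demonstration observation appears in the new scene."""
    sift = cv2.SIFT_create()
    kp_obs, des_obs = sift.detectAndCompute(observation, None)
    kp_world, des_world = sift.detectAndCompute(world_image, None)

    # Match descriptors and keep good matches via Lowe's ratio test.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_obs, des_world, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < min_matches:
        return None  # demonstrated context not found in the new scene

    # Estimate a homography mapping the observation into the world frame.
    src = np.float32([kp_obs[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_world[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

In VSL terms, the returned transform would indicate where the pre-action (pick) or post-action (place) context lies in the new world observation, from which the corresponding primitive action can be reproduced.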

Keywords

Scale Invariant Feature Transform · Reproduction Phase · Primitive Action · Human Tutor · Goal Configuration

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. iCub Facility, Istituto Italiano di Tecnologia, Genoa, Italy
  2. Dyson School of Design Engineering, Imperial College London, London, UK
