
Towards Learning to Handle Deviations Using User Preferences in a Human Robot Collaboration Scenario

  • Conference paper
  • In: Intelligent Human Computer Interaction (IHCI 2016)

Abstract

In a human-robot collaboration scenario, where robot and human coordinate and cooperate to achieve a common task, the system may encounter deviations. We propose an approach based on Interactive Reinforcement Learning that learns to handle deviations with the help of user interaction. These interactions can be used to capture the preferences of the user and to help the robotic system handle deviations accordingly. Each user might have a different solution for the same deviation in the assembly process. The approach exploits the problem-solving skills of each user and learns a different solution for each deviation that could occur in an assembly process. The experimental evaluation shows that the robotic system can handle deviations in an assembly process while taking different user preferences into consideration. In this way, the robotic system both benefits from interaction with users by learning to handle deviations and operates in a fashion preferred by each user.
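The core idea of the abstract — standard Q-learning augmented with per-user corrective feedback at deviation states — can be illustrated with a minimal sketch. This is not the authors' implementation; the class, state, and action names are hypothetical, and user feedback is folded in as an additional shaping reward, one common way of realizing interactive reinforcement learning:

```python
import random
from collections import defaultdict

class InteractiveQLearner:
    """Minimal interactive Q-learning sketch: the user's approval or
    override at a deviation state is treated as an extra reward term,
    so one learner per user converges to that user's preferred
    recovery action."""

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy selection over current Q-values
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, user_feedback=0.0):
        # user_feedback > 0 when the user approves the proposed action,
        # < 0 when the user overrides it (interactive shaping signal)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + user_feedback + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

Keeping one such learner per user is one way the same deviation (e.g. a missing part) can map to different recovery actions for different users.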



Acknowledgment

This research is funded by the projects KoMoProd (Austrian Ministry for Transport, Innovation and Technology), and CompleteMe (FFG, 849441).

Author information

Correspondence to Sharath Chandra Akkaladevi.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Akkaladevi, S.C., Plasch, M., Eitzinger, C., Maddukuri, S.C., Rinner, B. (2017). Towards Learning to Handle Deviations Using User Preferences in a Human Robot Collaboration Scenario. In: Basu, A., Das, S., Horain, P., Bhattacharya, S. (eds) Intelligent Human Computer Interaction. IHCI 2016. Lecture Notes in Computer Science, vol 10127. Springer, Cham. https://doi.org/10.1007/978-3-319-52503-7_1


  • DOI: https://doi.org/10.1007/978-3-319-52503-7_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-52502-0

  • Online ISBN: 978-3-319-52503-7

  • eBook Packages: Computer Science (R0)
