Evaluation of Human-Robot Interaction Quality: A Toolkit for Workplace Design

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 824)

Abstract

The working world is undergoing constant change. New technologies are emerging that enable new forms of human-system interaction. Autonomous robots in service industries and in manufacturing settings, in particular, create novel forms of human-robot interaction. Researchers, system integrators, and practitioners alike are confronted with the question of how to analyze, evaluate, and ultimately design these new working systems in a human-centered way. In this paper we present evaluation criteria, together with a toolkit of concrete measures, to enable a holistic evaluation of the cognitive aspects of human-robot interaction in work-related scenarios. The evaluation criteria comprise both technology-related and human-related parameters. The paper further presents a first empirical validation of the evaluation criteria and their measurements. The validation study uses a manual assembly task accomplished with a lightweight robot. The results indicate that the evaluation criteria can be used to describe the quality of human-robot interaction.
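An empirical validation of questionnaire-based measurements, as described in the abstract, typically includes checking the internal consistency of each scale. The following is an illustrative sketch only, not the authors' actual analysis: it computes Cronbach's alpha for a small, hypothetical three-item scale rated by four participants.

```python
# Illustrative sketch: Cronbach's alpha, a standard internal-consistency
# statistic for questionnaire scales. Data below are hypothetical.

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item,
    each of equal length (one entry per participant)."""
    k = len(items)            # number of items in the scale
    n = len(items[0])         # number of participants

    def variance(xs):         # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(it) for it in items)
    # total score per participant across all items
    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-point ratings from 4 participants on a 3-item scale:
scores = [
    [4, 5, 3, 4],   # item 1
    [4, 4, 3, 5],   # item 2
    [5, 5, 2, 4],   # item 3
]
alpha = cronbach_alpha(scores)   # ≈ 0.82 for this hypothetical data
```

Values of alpha around 0.7 or above are conventionally taken to indicate acceptable internal consistency, though this threshold depends on the purpose of the scale.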



Acknowledgments

Parts of this work were developed within the research project Hybr-iT. The research project Hybr-iT is funded by the Federal Ministry of Education and Research and is administered by the DLR Project Management Agency (reference no. 01IS16026H).

Author information

Correspondence to Patricia H. Rosen.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Rosen, P.H., Sommer, S., Wischniwski, S. (2019). Evaluation of Human-Robot Interaction Quality: A Toolkit for Workplace Design. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds) Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). IEA 2018. Advances in Intelligent Systems and Computing, vol 824. Springer, Cham. https://doi.org/10.1007/978-3-319-96071-5_169
