
Trust in Sensing Technologies and Human Wingmen: Analogies for Human-Machine Teams

  • Joseph B. Lyons
  • Nhut T. Ho
  • Lauren C. Hoffmann
  • Garrett G. Sadler
  • Anna Lee Van Abel
  • Mark Wilkins
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10915)

Abstract

The true value of a human-machine team (HMT) consisting of a capable human and an automated or autonomous system will depend, in part, on the richness and dynamic nature of the interactions and on the degree of shared awareness between the human and the technology. Contemporary views of HMTs emphasize the notion of bidirectional transparency, one type of which is Robot-of-Human (RoH) transparency. Technologies capable of RoH transparency may have awareness of human physiological and cognitive states and may adapt their behavior based on these states, thus augmenting operators. Yet despite the burgeoning presence of health monitoring devices, little is known about how humans feel about an automated system using sensing capabilities to augment them in a work environment. The current study provides preliminary data on user acceptance of sensing capabilities on automated systems. It examines the Perfect Automation Schema, an emerging individual-difference predictor of trust in automation, as a predictor of trust in these sensing capabilities. Additionally, the study examines trust in a human wingman as an analogy for examining trust within the context of an HMT. The findings suggest that the Perfect Automation Schema is related to some facets of sensing technology acceptance. Further, trust in a human wingman is contingent on familiarity and experience.

Keywords

Trust in automation · Autonomy · Human-machine teaming · Military


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Joseph B. Lyons (1)
  • Nhut T. Ho (2)
  • Lauren C. Hoffmann (2)
  • Garrett G. Sadler (2)
  • Anna Lee Van Abel (1)
  • Mark Wilkins (3)

  1. Air Force Research Laboratory, WPAFB, USA
  2. NVH Human Systems Integration, LLC, Los Angeles, USA
  3. Office of the Secretary of Defense, Arlington, USA
