Quantifying Human Decision-Making: Implications for Bidirectional Communication in Human-Robot Teams

  • Kristin E. Schaefer
  • Brandon S. Perelman
  • Ralph W. Brewer
  • Julia L. Wright
  • Nicholas Roy
  • Derya Aksaray
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10909)

Abstract

A goal for future robotic technologies is to advance autonomy capabilities for independent and collaborative decision-making with human team members during complex operations. However, if human behavior does not match the robots' models or expectations, trust can degrade in ways that impede team performance and may only be mitigated through explicit communication. The effectiveness of the team is therefore contingent on the accuracy of its models of human behavior; these models can be informed by transparent bidirectional communication, which is needed to develop common ground and a shared understanding. In this work, we characterize human decision-making, particularly its variability, with the eventual goal of incorporating the resulting model within a bidirectional communication system. Thirty participants completed an online game in which they controlled a human avatar through a 14 × 14 grid room in order to move boxes to their target locations. Each level of the game increased in environmental complexity through the number of boxes. Two trials were completed to compare path planning under conditions of known versus unknown information. Path analysis techniques were used to quantify human decision-making and to draw implications for bidirectional communication.
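The abstract references path analysis techniques but does not specify them. Purely as a minimal sketch, assuming hypothetical start, goal, and route coordinates and a simple Manhattan-distance divergence score of our own devising (not the paper's actual metric), the following Python fragment compares a participant-style route on the 14 × 14 grid against a breadth-first-search shortest path:

from collections import deque

GRID = 14  # the paper's 14 x 14 grid room

def shortest_path(start, goal, blocked=frozenset()):
    """Breadth-first search for one shortest 4-connected path on the grid."""
    frontier = deque([(start, (start,))])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return list(path)
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = step
            if 0 <= nx < GRID and 0 <= ny < GRID and step not in seen and step not in blocked:
                seen.add(step)
                frontier.append((step, path + (step,)))
    return None  # goal unreachable, e.g. walled off by boxes

def divergence(human_path, reference_path):
    """Mean Manhattan distance from each visited cell to the nearest
    reference-path cell -- an illustrative score, not the paper's metric."""
    def nearest(cell):
        return min(abs(cell[0] - q[0]) + abs(cell[1] - q[1]) for q in reference_path)
    return sum(nearest(c) for c in human_path) / len(human_path)

# Hypothetical participant route that detours on the way to (5, 5).
human = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (2, 2),
         (3, 2), (4, 2), (4, 3), (4, 4), (4, 5), (5, 5)]
optimal = shortest_path((0, 0), (5, 5))
print(f"{len(human) - 1} moves taken vs. {len(optimal) - 1} optimal")
print(f"mean divergence: {divergence(human, optimal):.2f} cells")

In this spirit, a larger move surplus or divergence score would flag the kind of decision-making variability the study set out to quantify.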

Keywords

Human-robot teaming · Bidirectional communication · Decision-making

Notes

Acknowledgment

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-2-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.


Copyright information

© 2018 This is a U.S. government work and its text is not subject to copyright protection in the United States; however, its text may be subject to foreign copyright protection.

Authors and Affiliations

  • Kristin E. Schaefer (1)
  • Brandon S. Perelman (1)
  • Ralph W. Brewer (1)
  • Julia L. Wright (1)
  • Nicholas Roy (2)
  • Derya Aksaray (3)
  1. US Army Research Laboratory, Adelphi, USA
  2. MIT, Cambridge, USA
  3. University of Minnesota, Minneapolis, USA
