
Objective-Analytical Measures of Workload – the Third Pillar of Workload Triangulation?

  • Christina Rusnock
  • Brett Borghetti
  • Ian McQuaid
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9183)

Abstract

The ability to assess operator workload is important for dynamically allocating tasks in a way that allows efficient and effective goal completion. For over fifty years, human factors professionals have relied on self-reported measures of workload. These subjective-empirical measures have limited use for real-time applications, however, because they are typically collected only after the activity is complete. In contrast, objective-empirical measurements of workload, such as physiological data, can be recorded continuously and provide frequently updated information over the course of a trial. Linking the low-sample-rate subjective-empirical measurements to the high-sample-rate objective-empirical measurements poses a significant challenge. While the series of objective-empirical measurements could be down-sampled or averaged over a longer time period to match the subjective-empirical sample rate, this process discards potentially relevant information and may produce meaningless values for certain types of physiological data. This paper demonstrates a technique that uses an objective-analytical measurement, produced by mathematical models of workload, to bridge the gap between subjective-empirical and objective-empirical measures. As a proof of concept, we predicted operator workload from physiological data using VACP, an objective-analytical measure that was itself validated against NASA-TLX scores. Strong predictive results pave the way for using objective-empirical measures in real-time augmentation, such as dynamic task allocation, to improve operator performance.
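
To make the bridging step concrete, the sketch below shows one way such a pipeline could look. It is a minimal illustration, not the authors' code: the arrays and synthetic data are hypothetical, the per-second VACP timeline stands in for values produced by an IMPRINT task-network model, and a scikit-learn decision-tree regressor stands in for the model-tree learners cited in the references [17, 24].

```python
# Illustrative sketch (not the authors' code). Aligns high-rate physiological
# features with an objective-analytical VACP workload timeline, then trains a
# regressor to predict VACP from physiology. All data here are synthetic and
# all names hypothetical; a decision tree stands in for model-tree learners.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical 10-minute trial sampled at 1 Hz. Each row holds physiological
# features (e.g., heart rate, pupil diameter, EEG band power).
n_seconds, n_features = 600, 6
physio = rng.normal(size=(n_seconds, n_features))

# Objective-analytical target: a per-second VACP workload value, as would be
# produced by replaying the task timeline through an IMPRINT-style model.
# Simulated here as a noisy linear function of the physiology.
weights = rng.normal(size=n_features)
vacp = physio @ weights + 0.5 * rng.normal(size=n_seconds)

# Both sides of the regression share the same high sample rate, so nothing
# is averaged away to match a once-per-trial subjective rating.
X_train, X_test, y_train, y_test = train_test_split(
    physio, vacp, test_size=0.3, random_state=0)

model = DecisionTreeRegressor(max_depth=5, random_state=0)
model.fit(X_train, y_train)

r, _ = pearsonr(model.predict(X_test), y_test)
print(f"Held-out correlation with modeled VACP: r = {r:.2f}")
```

Because the regressor emits a workload estimate at the physiological sample rate, trial-level validation against a post-trial NASA-TLX rating can then be done by aggregating predictions per trial, rather than by discarding the high-rate signal before modeling.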

Keywords

Workload measurement · Machine learning · VACP · IMPRINT

References

  1. Aldrich, T.B., Szabo, S.M.: A methodology for predicting crew workload in new weapon systems. In: Proceedings of the Human Factors Society 30th Annual Meeting (1986)
  2. Bailey, N.R., Scerbo, M.W., Freeman, F.G., Mikulka, P.J., Scott, L.A.: Comparison of a brain-based adaptive system and a manual adaptable system for invoking automation. Hum. Factors 48(4), 693–709 (2006)
  3. Bierbaum, C.R., Szabo, S.M., Aldrich, T.B.: Task analysis of the UH-60 mission and decision rules for developing a UH-60 workload prediction model (No. AD-A210763). U.S. Army Research Institute for the Behavioral and Social Sciences, Alexandria (1989)
  4. Boles, D.B., Adair, L.P.: The multiple resources questionnaire (MRQ). In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 1790–1794 (2001)
  5. Cooper, G.E., Harper, R.P.: The use of pilot ratings in the evaluation of aircraft handling qualities (NASA Technical Report TN-D-5153). NASA Ames Research Center, Moffett Field (1969)
  6. De Visser, E.J., Parasuraman, R.: Adaptive aiding of human-robot teaming: effects of imperfect automation on performance, trust, and workload. J. Cogn. Eng. Decis. Making 5(2), 209–231 (2011)
  7. Fong, A., Sibley, C., Cole, A., Baldwin, C., Coyne, J.: A comparison of artificial neural networks, logistic regressions, and classification trees for modeling mental workload in real-time. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 54, pp. 1709–1712 (2010)
  8. Franzblau, A.N.: A Primer of Statistics for Non-Statisticians. Harcourt, Brace & World, New York (1958)
  9. Hart, S.G., Staveland, L.E.: Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock, P.A., Meshkati, N. (eds.) Human Mental Workload, pp. 139–183. Elsevier, Amsterdam (1988)
  10. Jung, H.S.: Establishment of overall workload assessment technique for various tasks and workplaces. Int. J. Ind. Ergon. 28(6), 341–353 (2001)
  11. Kaber, D.B., Kim, S.H.: Understanding cognitive strategy with adaptive automation in dual-task performance using computational cognitive models. J. Cogn. Eng. Decis. Making 5(3), 309–331 (2011)
  12. Parasuraman, R., Barnes, M., Cosenzo, K.: Adaptive automation for human-robot teaming in future command and control systems. Int. C2 J. 1(2), 43–68 (2007)
  13. Parasuraman, R., Cosenzo, K.A., De Visser, E.J.: Adaptive automation for human supervision of multiple uninhabited vehicles: effects on change detection, situation awareness, and mental workload. Mil. Psychol. 21(2), 270–297 (2009)
  14. Parasuraman, R., Wilson, G.F.: Putting the brain to work: neuroergonomics past, present, and future. Hum. Factors 50(3), 468–474 (2008)
  15. Parks, D.L., Boucek Jr., G.P.: Workload prediction, diagnosis, and continuing challenges. In: McMillan, G.R., Beevis, D., Salas, E., Strub, M.H., Sutton, R., Van Breda, L. (eds.) Applications of Human Performance Models to System Design, pp. 47–64. Plenum, New York (1989)
  16. North, R.A., Riley, V.A.: W/Index: a predictive model of operator workload. In: McMillan, G.R., Beevis, D., Salas, E., Strub, M.H., Sutton, R., Van Breda, L. (eds.) Applications of Human Performance Models to System Design, pp. 81–90. Plenum, New York (1989)
  17. Quinlan, J.R.: Learning with continuous classes. In: Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, pp. 343–348 (1992)
  18. Reid, G.B., Nygren, T.E.: The subjective workload assessment technique: a scaling procedure for measuring mental workload. In: Hancock, P.A., Meshkati, N. (eds.) Human Mental Workload, pp. 185–218. Elsevier, Amsterdam (1988)
  19. Scerbo, M.W.: Adaptive automation. In: Parasuraman, R., Rizzo, M. (eds.) Neuroergonomics: The Brain at Work, pp. 238–252. Oxford University Press, New York (2007)
  20. Sheridan, T.B.: Adaptive automation, level of automation, allocation authority, supervisory control, and adaptive control: distinctions and modes of adaptation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 41(4), 662–667 (2011)
  21. Taylor, G., Reinerman-Jones, L., Szalma, J., Mouloua, M., Hancock, P.: What to automate: addressing the multidimensionality of cognitive resources through system design. J. Cogn. Eng. Decis. Making 7(4), 311–329 (2013)
  22. Teigen, K.H.: Yerkes-Dodson: a law for all seasons. Theory Psychol. 4(4), 525–547 (1994)
  23. Tsang, P.S., Velazquez, V.L.: Diagnosticity and multidimensional subjective workload ratings. Ergonomics 39(3), 358–381 (1996)
  24. Wang, Y., Witten, I.H.: Induction of model trees for predicting continuous classes. In: Poster Papers of the 9th European Conference on Machine Learning (1997)
  25. Warm, J.S., Parasuraman, R.: Cerebral hemodynamics and vigilance. In: Parasuraman, R., Rizzo, M. (eds.) Neuroergonomics: The Brain at Work, pp. 146–158. Oxford University Press, New York (2007)
  26. Wickens, C.D.: Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 3(2), 159–177 (2002)
  27. Wilson, G.F., Russell, C.A.: Psychophysiologically determined adaptive aiding in a simulated UCAV task. In: Human Performance, Situation Awareness, and Automation: Current Research and Trends, pp. 200–204. Erlbaum Associates, Mahwah (2004)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Christina Rusnock (1) (email author)
  • Brett Borghetti (1)
  • Ian McQuaid (1)

  1. Air Force Institute of Technology, Wright-Patterson AFB, Fairborn, USA
