Human-Machine Interactions: Velocity Considerations

  • Joseph Cottam
  • Leslie M. Blaha
  • Kris Cook
  • Mark Whiting
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10916)

Abstract

Measuring change is increasingly a computational task, but understanding change and its implications remains a fundamentally human challenge. Successful human/machine teams for streaming data analysis effectively balance data velocity against people's capacity to ingest, reason about, and act upon the data. Computational support is critical to helping humans find what is needed when it is needed; this is particularly evident in supporting complex sensemaking, situation awareness, and decision making in streaming contexts. Herein, we conceptualize human/machine teams as interacting streams of data generated by the interactions at the core of the team's activity. These streams capture the relative velocities of the human and machine activities, which allows the machine to balance the capabilities of the two halves of the system. We review the known challenges in handling interacting streams that have been distilled in computational systems, and we use this perspective to examine open challenges in designing effective human/machine systems that support the disparate velocities of humans and machines.
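
To make the interacting-streams framing concrete, the following is a minimal Python sketch (our illustration, not a method from the paper): a fast machine stream feeds a bounded backlog that a slow human stream drains, and the machine half summarizes whenever the backlog exceeds what the human half can plausibly absorb. The names, rates, and summarization policy are all illustrative assumptions.

    # Minimal sketch (assumptions, not the paper's method): two interacting
    # streams with disparate velocities. The machine stream produces items far
    # faster than the human stream can consume them; a bounded backlog triggers
    # summarization so the machine half adapts to the human half's pace.
    # MACHINE_RATE, HUMAN_RATE, and BACKLOG_LIMIT are illustrative values.

    import collections
    import itertools

    MACHINE_RATE = 50    # items the machine emits per tick (assumed)
    HUMAN_RATE = 3       # items a person can review per tick (assumed)
    BACKLOG_LIMIT = 100  # pending items tolerated before aggregation

    def machine_stream():
        """Endless synthetic stream of machine-detected events."""
        for i in itertools.count():
            yield f"event-{i}"

    def run(ticks=5):
        backlog = collections.deque()
        source = machine_stream()
        for tick in range(ticks):
            # Machine half: ingest at machine velocity.
            backlog.extend(itertools.islice(source, MACHINE_RATE))
            # Balancing step: if the backlog outgrows what a person could
            # plausibly review, collapse it into one summary item.
            if len(backlog) > BACKLOG_LIMIT:
                summary = f"summary-of-{len(backlog)}-events"
                backlog.clear()
                backlog.append(summary)
            # Human half: consume at human velocity.
            reviewed = [backlog.popleft()
                        for _ in range(min(HUMAN_RATE, len(backlog)))]
            print(f"tick {tick}: reviewed {reviewed}; backlog {len(backlog)}")

    if __name__ == "__main__":
        run()

Running the sketch for a few ticks shows the backlog growing until the summarization step collapses it to a single item, i.e., the machine half adapting its output to the human half's velocity.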

Keywords

Big data · Human-machine interaction · Interactive streaming analytics · Visual analytics

Acknowledgments

This effort was sponsored by the Analysis in Motion Initiative at the Pacific Northwest National Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Joseph Cottam ¹
  • Leslie M. Blaha ¹
  • Kris Cook ¹
  • Mark Whiting ¹

  1. Pacific Northwest National Laboratory, Richland, USA
