Artificial Intelligence for Advanced Human-Machine Symbiosis

  • Scott S. Grigsby
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10915)

Abstract

Human capabilities such as memory, attention, sensory bandwidth, comprehension, and visualization are critically important, but all have innate limitations. These human abilities can, however, benefit from rapidly growing computational capabilities. We can apply computational power to support and augment cognitive skills, bolstering limited human cognitive resources and providing new capabilities through this symbiosis. We now have the ability to design human-computer interaction in which the computer anticipates, predicts, and augments the performance of the user, and the human supports, aids, and enhances the learning and performance of the computer. Augmented cognition seeks to advance this human-machine symbiosis through both machine understanding of the human (e.g., physical state sensing, cognitive state sensing, psychophysiology, emotion detection, and intent projection) and human understanding of the machine (e.g., explainable AI, shared situation awareness, trust enhancement, and advanced UX). The ultimate result is a truly interactive symbiosis in which humans and computers are tightly coupled in productive partnerships that merge the best of the human with the best of the machine. As advances in artificial intelligence (AI) accelerate across a myriad of applications, we seek to understand the current state of the art of AI and how it may best be applied to advancing human-machine symbiosis.
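The adaptive allocation described above, in which the machine takes on work when the human is overloaded, can be illustrated with a minimal sketch. Everything here is hypothetical (the `Task` type, the `allocate` function, and the scalar `workload` score, which stands in for a fused estimate from physiological sensing); it is not an implementation from the paper, only one simple policy for dynamic task sharing.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Task:
    name: str
    automatable: bool  # can the machine take this task over?

def allocate(tasks: List[Task], workload: float,
             high: float = 0.7) -> Tuple[List[Task], List[Task]]:
    """Return (human_tasks, machine_tasks) for one adaptation cycle.

    `workload` is a normalized cognitive-load estimate in [0, 1].
    When it exceeds `high`, every automatable task is offloaded to
    the machine; otherwise the human keeps all tasks.
    """
    if workload > high:
        machine = [t for t in tasks if t.automatable]
        human = [t for t in tasks if not t.automatable]
    else:
        human, machine = list(tasks), []
    return human, machine
```

In practice such a policy would run in a closed loop, re-estimating workload and reallocating tasks continuously, with hysteresis to avoid thrashing near the threshold.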

Keywords

Artificial intelligence · Human-machine teaming · Augmented cognition · Human-machine symbiosis · Situation awareness


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Soar Technologies, Inc., Ann Arbor, USA
