
Trustworthy Human-Centered Automation Through Explainable AI and High-Fidelity Simulation

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1206)

Abstract

As we become more competent developers of artificially intelligent systems, the extent of their deployment, and the implicit trust we place in them, will grow accordingly. While this is an attractive prospect, with an already-demonstrated capacity to positively disrupt industries around the world, it remains a dangerous premise that demands attention and deliberate resource allocation to ensure that these systems' behaviors match our expectations. Until we develop explainable AI techniques or high-fidelity simulators that let us examine a model's underlying logic in the situations where we intend to deploy it, it is irresponsible to trust these systems to act on our behalf. In this work we describe, and provide guidelines for, ongoing efforts to use novel explainable AI techniques and high-fidelity simulation to establish shared expectations between autonomous systems and the humans who interact with them, with discussion of the collaborative robotics and cybersecurity domains.
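
To make the premise concrete, the sketch below shows one minimal way such vetting might look in practice: running a black-box controller through many simulated situations and flagging those where its decisions diverge from the human-readable rule its operators believe it follows. This is an illustrative Python sketch only; the environment, policy names, and thresholds are hypothetical assumptions for exposition, not the authors' implementation.

    # Minimal sketch (all names and thresholds hypothetical): vet a black-box
    # policy in simulation by comparing it against a human-readable surrogate
    # rule, so mismatches surface before deployment rather than in the field.
    import random

    def black_box_policy(distance: float) -> str:
        """Stand-in for an opaque learned controller."""
        return "brake" if distance < 4.8 else "cruise"

    def human_expectation(distance: float) -> str:
        """The interpretable rule operators believe the system follows."""
        return "brake" if distance < 5.0 else "cruise"

    def simulate_episodes(n_episodes: int = 1000, seed: int = 0) -> list:
        """Sample simulated situations; log where policy and expectation diverge."""
        rng = random.Random(seed)
        mismatches = []
        for _ in range(n_episodes):
            distance = rng.uniform(0.0, 10.0)  # simulated obstacle distance (m)
            if black_box_policy(distance) != human_expectation(distance):
                mismatches.append(distance)
        return mismatches

    if __name__ == "__main__":
        diverging = simulate_episodes()
        print(f"{len(diverging)} of 1000 simulated situations violated expectations")
        if diverging:
            print(f"e.g., at distance {min(diverging):.2f} m the policy did not brake")

In this toy setup the learned controller brakes slightly later than operators expect, and simulation surfaces exactly the band of situations (obstacle distances between 4.8 m and 5.0 m) where that mismatch would otherwise remain hidden until deployment.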

Keywords

Explainable AI · Interpretable machine learning · Cyber range · Cybersecurity · Gamification · Human-robot interaction

Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Department of Computer Science, University of Colorado Boulder, Boulder, USA
  2. Circadence Corporation, Boulder, USA
