An Initial Generic Assessment Framework for the Consideration of Risk in the Implementation of Autonomous Systems

  • K. Tara Smith
  • Lynne Coventry
  • Robert GreenSmith
Conference paper
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 544)


This paper considers some of the issues surrounding autonomous systems and the different types of risk involved in their implementation. These risks act both as barriers to the successful implementation of an autonomous system and as consequences of the use of such systems. The different levels of automation, and the different approaches to categorizing these levels presented in a variety of frameworks, are summarized and discussed.

The paper presents an initial generic assessment structure, with the aim of providing a useful construct for the design and development of acceptable autonomous systems that are intended to replace elements of the human cognitive process, specifically in situations involving decision-making. It introduces the concept of the “logos chasm”: the gap between achievable autonomous systems and those which currently only exist in the realm of science fiction; and discusses possible reasons for its existence.


Keywords: Autonomous systems · Automation risks · Automation frameworks



Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • K. Tara Smith (1)
  • Lynne Coventry (2)
  • Robert GreenSmith (1)

  1. Human Factors Engineering Solutions Ltd, Dunfermline, Scotland
  2. Northumbria University, Newcastle upon Tyne, UK
