
Knowledge-to-Information Translation Training (KITT): An Adaptive Approach to Explainable Artificial Intelligence

  • Conference paper
  • Published in: Adaptive Instructional Systems (HCII 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12214)

Abstract

Modern black-box artificial intelligence algorithms are computationally powerful yet fallible in unpredictable ways. While much research has gone into developing techniques to interpret these algorithms, less work has integrated the requirement to understand an algorithm as a function of its training data. In addition, few have examined the human requirements for explainability, so that these interpretations provide the right quantity and quality of information to each user. We argue that Explainable Artificial Intelligence (XAI) frameworks need to account for the expertise and goals of the user in order to gain widespread adoption. We describe the Knowledge-to-Information Translation Training (KITT) framework, an approach to XAI that considers a number of possible explanatory models that can be used to facilitate users’ understanding of artificial intelligence. Following a review of algorithms, we provide a taxonomy of explanation types and outline how adaptive instructional systems can facilitate knowledge translation between developers and users. Finally, we describe the limitations of our approach and opportunities for future research.


Notes

  1. Decision trees get more complex when the decisions are probabilistic and become less explainable even though they are still technically interpretable (see the first sketch following these notes).

  2. For the sake of analogy, we will refer to the black-box processes and output, such as the hidden layers in a neural network leading to the output layer, as an algorithm’s cognition (see the second sketch following these notes).

  3. We acknowledge that there is potential overlap between some explanation types.
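To make the first note concrete, here is a minimal sketch (ours, not from the paper; it assumes scikit-learn and its bundled iris dataset) of the distinction between interpretability and explainability in decision trees:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree: small enough to read as an explicit rule set.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Interpretable: the whole model prints as human-readable if-then rules.
print(export_text(tree, feature_names=iris.feature_names))

# Less explainable: the class probabilities come from training-sample
# frequencies at a leaf, which is harder to convey to a user than a
# single deterministic rule.
print(tree.predict_proba(iris.data[:1]))
```

The printed rule set can be read off directly, whereas explaining why a leaf assigns, say, 90% probability to one class requires appealing to the frequencies of the training data that reached that leaf.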
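For the second note, a toy forward pass (again ours, using NumPy) shows the hidden activations the analogy labels an algorithm’s “cognition”: they fully determine the output, yet they are not directly meaningful to a user:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))  # toy weights

def forward(x):
    hidden = np.tanh(x @ W1)  # the black-box "cognition": internal state
                              # with no user-facing meaning
    output = hidden @ W2      # the observable behavior to be explained
    return hidden, output

hidden, output = forward(rng.normal(size=4))
print(hidden, output)
```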


Acknowledgements

Research was sponsored by the Army Research Laboratory and was accomplished under the Cooperative Agreement Number W911NF-19-2-0223. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Author information


Corresponding authors

Correspondence to Robert Thomson or Jordan Richard Schoenherr.



Copyright information

© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this paper


Cite this paper

Thomson, R., Schoenherr, J.R. (2020). Knowledge-to-Information Translation Training (KITT): An Adaptive Approach to Explainable Artificial Intelligence. In: Sottilare, R.A., Schwarz, J. (eds) Adaptive Instructional Systems. HCII 2020. Lecture Notes in Computer Science, vol 12214. Springer, Cham. https://doi.org/10.1007/978-3-030-50788-6_14


  • DOI: https://doi.org/10.1007/978-3-030-50788-6_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50787-9

  • Online ISBN: 978-3-030-50788-6

  • eBook Packages: Computer Science (R0)
