Abstract
Modern black-box artificial intelligence algorithms are computationally powerful yet fallible in unpredictable ways. While much research has gone into developing techniques to interpret these algorithms, less has integrated the requirement to understand an algorithm as a function of its training data. In addition, few studies have examined the human requirements for explainability, i.e., whether these interpretations provide the right quantity and quality of information to each user. We argue that Explainable Artificial Intelligence (XAI) frameworks need to account for the expertise and goals of the user in order to gain widespread adoption. We describe the Knowledge-to-Information Translation Training (KITT) framework, an approach to XAI that considers a number of possible explanatory models that can be used to facilitate users’ understanding of artificial intelligence. Following a review of algorithms, we provide a taxonomy of explanation types and outline how adaptive instructional systems can facilitate knowledge translation between developers and users. Finally, we describe the limitations of our approach and opportunities for future research.
Notes
- 1.
Decision trees become more complex when their decisions are probabilistic, and they become less explainable even though they remain technically interpretable (the code sketch following these notes illustrates this distinction).
- 2.
For the sake of analogy, we will refer to the black-box processes and output, such as the hidden layers in a neural network leading to the output layer, as an algorithm’s cognition.
- 3.
We acknowledge that there is potential overlap between some explanation types.
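The following minimal sketch (not part of the original chapter; it assumes Python with scikit-learn) illustrates note 1: a shallow decision tree can be read directly as IF-THEN rules, yet its probabilistic leaf outputs resist a simple rule-based explanation.

```python
# Minimal sketch (assumes scikit-learn is installed): a shallow decision
# tree is interpretable as explicit threshold rules, but its probabilistic
# outputs are harder to explain to an end user (see note 1).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# Interpretable: the entire model prints as a handful of IF-THEN rules.
print(export_text(clf, feature_names=iris.feature_names))

# Less explainable: each leaf emits class probabilities, and "why 0.93
# rather than 0.97?" has no equally compact rule-based answer.
print(clf.predict_proba(iris.data[:1]))
```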
Acknowledgements
Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-19-2-0223. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Copyright information
© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply
About this paper
Cite this paper
Thomson, R., Schoenherr, J.R. (2020). Knowledge-to-Information Translation Training (KITT): An Adaptive Approach to Explainable Artificial Intelligence. In: Sottilare, R.A., Schwarz, J. (eds) Adaptive Instructional Systems. HCII 2020. Lecture Notes in Computer Science, vol 12214. Springer, Cham. https://doi.org/10.1007/978-3-030-50788-6_14
DOI: https://doi.org/10.1007/978-3-030-50788-6_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-50787-9
Online ISBN: 978-3-030-50788-6