Interactive Collaborative Learning with Explainable Artificial Intelligence

  • Conference paper
  • First Online:
Learning in the Age of Digital and Green Transition (ICL 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 633)

Abstract

In the summer term of 2021, computer science students developed and implemented several variants of an Artificial Intelligence that learns string patterns from examples. Each of these AIs can answer questions about its own behavior and is thus an Explainable AI (XAI). In the summer term of 2022, one such XAI was deployed in higher education. Students are encouraged to experiment with the XAI collaboratively. The learning goal is to find out what the XAI is doing and why it acts the way it does, without any intervention by a human teacher. Students learn collaboratively by interacting with the XAI and by chatting with the system about how it does its job. In a sense, the XAI is a domain expert that introduces students to its work and shares its knowledge of the subject on request. The recent deployment of the XAI demonstrates the effectiveness of this approach.
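
To give a rough idea of what an AI that learns string patterns from examples and explains itself might look like, the following Python sketch infers a simple pattern (constants and repeated variables) from positive example strings and reports its hypothesis together with the supporting evidence on request. It is a minimal illustration only: the class ToyPatternLearner, its methods tell and ask, and the position-wise comparison heuristic are assumptions made for this sketch and are not the XAI developed and deployed by the authors.

    from collections import OrderedDict

    class ToyPatternLearner:
        """Learns a pattern such as 'a x1 b x1' from positive example strings
        and answers a simple question about its current hypothesis."""

        def __init__(self):
            self.examples = []
            self.pattern = []            # sequence of constants and variable names

        def tell(self, example: str) -> None:
            """Present a positive example and recompute the hypothesis."""
            self.examples.append(example)
            shortest = min(len(e) for e in self.examples)
            base = [e for e in self.examples if len(e) == shortest]
            var_of_column = OrderedDict()
            self.pattern = []
            for column in zip(*base):        # compare shortest examples position-wise
                if len(set(column)) == 1:    # all examples agree: keep the constant
                    self.pattern.append(column[0])
                else:                        # disagreement: introduce or reuse a variable
                    var_of_column.setdefault(column, f"x{len(var_of_column) + 1}")
                    self.pattern.append(var_of_column[column])

        def ask(self) -> str:
            """A minimal 'explanation': report the hypothesis and the evidence for it."""
            return (f"My hypothesis is the pattern '{' '.join(self.pattern)}'. "
                    f"Positions on which all shortest examples agree are constants; "
                    f"the remaining positions are variables. Evidence: {self.examples}")

    if __name__ == "__main__":
        learner = ToyPatternLearner()
        for word in ["aXbX", "aYbY", "aZbZ"]:
            learner.tell(word)
        print(learner.ask())                 # hypothesis: a x1 b x1

With the three example words above, the sketch hypothesizes the pattern a x1 b x1. A genuine learner of string patterns would, among other things, have to handle examples of different lengths and variable substitutions longer than a single symbol.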

Acknowledgement

The German Federal Ministry of Labour and Social Affairs supported this work with an award for the authors’ concept of “Hypothesizing Explainable AI”.

The authors gratefully acknowledge the inspiring and productive exchange of ideas with Leonhard Bollmann, Hannes Dröse, Justin Kraft, Pascal Pflügner, Johannes Veith, Markus Weißflog, and Stefan Woyde. All of them contributed to the XAI endeavor within the framework of our Learning Systems module in 2021, and each implemented an XAI of their own, more or less similar to the one demonstrated in the present paper. In doing so, they provided abundant evidence that it is possible to build Explainable AI to learn with and to learn from.

Author information

Corresponding author

Correspondence to Klaus P. Jantke.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Arnold, O., Golchert, S., Rennert, M., Jantke, K.P. (2023). Interactive Collaborative Learning with Explainable Artificial Intelligence. In: Auer, M.E., Pachatz, W., Rüütmann, T. (eds) Learning in the Age of Digital and Green Transition. ICL 2022. Lecture Notes in Networks and Systems, vol 633. Springer, Cham. https://doi.org/10.1007/978-3-031-26876-2_2
