
Accountability in Human and Artificial Intelligence Decision-Making as the Basis for Diversity and Educational Inclusion

  • Kaśka Porayska-Pomsta
  • Gnanathusharan Rajendran
Chapter
Part of the Perspectives on Rethinking and Reforming Education book series (PRRE)

Abstract

Accountability is an important dimension of decision-making in both human and artificial intelligence (AI). We argue that it is fundamental to the inclusion, diversity and fairness of AI-based and human-controlled interactions, and of any human-facing interventions that aim to change human development, behaviour and learning. Less debated, however, are the nature and role of the biases that emerge from the theoretical or empirical models underpinning AI algorithms and the interventions those algorithms drive. Such biases also affect human-controlled educational systems and interventions. The key mitigating difference between AI and human decision-making, however, is that human decisions involve individual flexibility, context-relevant judgement, empathy and complex moral reasoning, all of which are missing from AI. In this chapter, we argue that our fascination with AI, which predates the current craze by centuries, resides in its ability to act as a ‘mirror’ reflecting our current understandings of human intelligence. Those understandings inevitably encapsulate biases stemming from our intellectual and empirical limitations. We make the case for diversity as a means of preventing biases from becoming built into both human and machine systems, and, with reference to specific examples, we outline one compelling future for inclusive and accountable AI and educational research and practice.
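To make concrete what it means for biases in an empirical model to be reproduced by an AI-driven system, the short sketch below (ours, not the chapter authors'; the groups, figures and the naive "majority rule" learner are invented purely for illustration) shows a decision rule fitted to skewed historical outcomes. The skew in the data simply becomes the rule, and without an accountability mechanism nothing in the pipeline flags it.

```python
# Hypothetical sketch: bias present in training data is reproduced by the
# decision rule learned from it. All names and numbers are invented.
import random

random.seed(0)

# Synthetic "historical" decisions: group A was approved ~80% of the time,
# group B only ~40% of the time, irrespective of any genuine merit signal.
data = ([("A", 1) if random.random() < 0.8 else ("A", 0) for _ in range(500)]
        + [("B", 1) if random.random() < 0.4 else ("B", 0) for _ in range(500)])

def fit_majority_rule(records):
    """A naive 'model': predict the majority outcome observed for each group."""
    approvals, counts = {}, {}
    for group, outcome in records:
        approvals[group] = approvals.get(group, 0) + outcome
        counts[group] = counts.get(group, 0) + 1
    return {g: int(approvals[g] / counts[g] >= 0.5) for g in counts}

model = fit_majority_rule(data)
print(model)  # e.g. {'A': 1, 'B': 0} -- the historical skew becomes the rule
```

The point of the sketch is not the (deliberately trivial) learner but the pipeline: any model fitted to data that encodes past inequities will, by default, carry those inequities forward unless accountability checks intervene.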

Keywords

Accountability · AI agents · Autism spectrum · Bias · Decision-making · Neurodiversity


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Kaśka Porayska-Pomsta (1)
  • Gnanathusharan Rajendran (2)
  1. UCL Knowledge Lab, UCL Institute of Education, University College London, London, UK
  2. Edinburgh Centre for Robotics, Department of Psychology, Heriot-Watt University, Edinburgh, UK
