Accountability in Human and Artificial Intelligence Decision-Making as the Basis for Diversity and Educational Inclusion
Accountability is a key dimension of decision-making in both human and artificial intelligence (AI). We argue that it is of fundamental importance to the inclusion, diversity and fairness of AI-based and human-controlled interactions, and of any human-facing interventions that aim to change human development, behaviour and learning. Less debated, however, is the nature and role of the biases that emerge from the theoretical or empirical models underpinning AI algorithms and the interventions those algorithms drive. Such biases also affect human-controlled educational systems and interventions. The key mitigating difference between AI and human decision-making, however, is that human decisions involve individual flexibility, context-relevant judgement and empathy, as well as complex moral judgements, all of which are missing from AI. In this chapter, we argue that our fascination with AI, which predates the current craze by centuries, resides in its ability to act as a ‘mirror’ reflecting our current understandings of human intelligence. Those understandings inevitably encapsulate the biases that stem from our intellectual and empirical limitations. We make a case for diversity as a means of preventing biases from becoming built into human and machine systems alike, and, with reference to specific examples, we outline one compelling future for inclusive and accountable AI and for educational research and practice.
Keywords: Accountability · AI agents · Autism spectrum · Bias · Decision-making · Neurodiversity