Between the Ethics and Epistemology of Explainable AI (XAI)

This Topical Collection investigates the epistemic and normative aspects of explainable AI (XAI), and in particular their interaction. While the primary goal of XAI is epistemic – providing knowledge or understanding of the inner workings of AI models – it is also widely regarded as an important element of the responsible use of AI: normative questions about transparency, responsibility, and accountability frequently involve and interact with XAI. In this respect, this Topical Collection aims at a synergy between epistemological concerns and non-epistemological ones (e.g., ethical, political, economic, societal).

On the one hand, the epistemic status of XAI tools can inform their role as a solution to non-epistemological, normative problems. If current XAI tools fail to provide understanding of the inner workings of AI models – yielding, for example, only limited knowledge of the importance of input features – what role can they play in facilitating trust and meaningful human control? To what extent can they support human agency and clarify questions of accountability? Greater clarity about the epistemic status of users can yield more fine-grained answers to these philosophical questions.

On the other hand, normative questions can in turn inform what the appropriate epistemic goals are for XAI tools, including those not yet developed. To what extent is there a normative requirement that decisions be explainable? Are specific epistemic states required to adhere to norms of practical reasoning? By considering the ultimate goals of explanations, it becomes clearer what the requirements on such explanations are. When, then, is a model explainable?

We thus welcome contributions that further discuss the explainability of AI models, and in particular those that focus on the interaction between the epistemic and normative aspects of explainable AI.
