Collection

The Culture of Trustworthy AI. Public debate, Education and Practical Learning

In recent years, the European Union has committed to responsible and sustainable Artificial Intelligence research, development and innovation. In 2019, the High-Level Expert Group on AI (AI HLEG) delivered the Ethics Guidelines for Trustworthy AI, and in 2021 the Commission proposed a regulatory framework to address different AI risk levels, known as the AI Act. Beyond rules and principles, building a Trustworthy AI culture poses several challenges to the whole AI ecosystem, such as:

1) how to create meaningful and constructive debates involving experts with multidisciplinary backgrounds, but also citizens and people who might be directly or indirectly affected by AI systems;

2) the cultural equipment needed to help future AI experts cope with the complexity of societal and ethical changes generated by AI and data-intensive applications;

3) how to translate these cultural resources into working experience with a view to creating a mutual and beneficial interaction between the theory and the practice of Trustworthy AI.

This topical collection aims to explore how we can get closer to a Trustworthy AI culture by sharing investigations and good practices along the trajectories suggested by the AI HLEG guidelines: public debate, education and practical learning. This topical collection calls for research papers, project reports, or position papers addressing, but not limited to, the following topics:

- Experiences of multidisciplinary perspectives and methodologies that contribute to building a Trustworthy AI culture;

- Critical and constructive analysis of ideas and strategies aimed at building an ecosystem of trust;

- Contributions to the identification of disciplinary gaps (conceptual, language, skills and social diversity) and how to address them;

- Analysis of methodologies or approaches that can help AI experts address tensions and trade-offs among ethical principles in play;

- Approaches to the definition of educational strategies, content and skills to be included in courses dealing with Trustworthy AI;

- Approaches that can contribute to a better integration of the humanities into AI research and development;

- Methods to apply Trustworthy AI concepts and requirements into practice and processes to validate and verify them;

- Proposals of participatory methods that involve all stakeholders of the AI system life-cycle, including developers, researchers, policy-makers, governments, the private and public sectors, and society at large.

Editors

  • Teresa Scantamburlo

Teresa Scantamburlo is an Assistant Professor at the Department of Environmental Sciences, Informatics and Statistics at Ca’ Foscari University of Venice (Italy). Her research focuses on the ethical and social impact of AI and Big Data. She contributed to the AI4EU project and now works on the IRIS project, a collaborative research effort aimed at understanding the ‘infodemic’ and promoting healthy information systems. Previously, she worked as a postdoctoral researcher at the European Centre for Living Technology in Venice and at the University of Bristol (UK).

  • Atia Cortés

Atia Cortés is a recognised researcher in the Social Link Analytics unit of the Barcelona Supercomputing Center, and also a board member of the Bioinfo4Women programme. She is a computer science engineer with an MSc and PhD in Artificial Intelligence (Universitat Politècnica de Catalunya). She has conducted research in EU- and nationally funded projects applying AI and robotics to healthcare. Her current focus is on designing tools to implement responsible AI and studying its social impact. She was the co-director of the AI4EU Observatory on Society and AI.

  • Andrea Aler Tubella

Andrea Aler Tubella is a senior research engineer in the Responsible AI group at Umeå University (UMU), Sweden. Her research revolves around logical formalizations for responsible AI. She holds degrees in both mathematics (University of Barcelona, University of Cambridge) and computer science (University of Bath). She has taught mathematics and computer science subjects at undergraduate and master’s level, courses in logic and computing at the ESSLLI summer school, and doctoral training on the Ethical, Legal, Social and Cultural impacts of AI.

  • Cristian Barrué

Cristian Barrué is a senior researcher at the Institute of Robotics and Industrial Informatics (CSIC-UPC) and a visiting researcher in the Department of Computer Science at the UPC. He is a member of the Knowledge Engineering and Machine Learning group (KEMLg) and the Centre for Research in Intelligent Data Science and Artificial Intelligence (IDEAI). He has been an active member of the Observatory on Society and Artificial Intelligence launched by the European AI on Demand Platform.

  • Francesca Foffano

Francesca Foffano is a Ph.D. student at the University of York within the IGGI (Intelligent Games and Game Intelligence) programme. Her research focuses on HCI and user perception, including ethical and social concerns in the relationship between AI and users. She previously worked on AI and ethics for the European project AI4EU at the ECLT (Ca' Foscari University of Venice) and on players’ perception of adaptive videogames at Reykjavik University.

Articles (5 in this collection)