
Privacy-Aware Explanations for Team Formation

  • Conference paper
PRIMA 2022: Principles and Practice of Multi-Agent Systems (PRIMA 2022)

Abstract

In recent years there has been a growing move towards explainable AI (XAI). The widespread use of AI systems in a large variety of applications that support humans' decisions makes it imperative to provide explanations of an AI system's functionality; such explanations are necessary for earning users' trust in AI systems. At the same time, recent legislation on data privacy, such as the GDPR, requires that any attempt at explainability must not disclose private data and information to third parties. In this work we focus on providing privacy-aware explanations in the realm of team formation scenarios. We propose the means to analyse whether an explainability algorithm incurs privacy breaches when computing an explanation for a user.
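The abstract's core idea, checking whether an explanation discloses private information about agents other than the explainee, can be illustrated with a minimal sketch. This is not the paper's algorithm; the fact representation, the `PRIVATE` attribute set, and the function name are all hypothetical, chosen only to make the privacy-breach check concrete.

```python
# Hypothetical illustration of a privacy-breach check on an explanation.
# An explanation is modelled as a set of (agent, attribute, value) facts
# that a team-formation explainer would reveal to the explainee.

PRIVATE = {"salary", "health_status"}  # attributes agents keep private (assumed)

def breaches_privacy(explanation_facts, explainee):
    """Return the (agent, attribute) pairs that would leak private
    information about an agent other than the explainee."""
    return [
        (agent, attr)
        for (agent, attr, _value) in explanation_facts
        if agent != explainee and attr in PRIVATE
    ]

facts = [
    ("alice", "skill", "python"),   # public fact, fine to disclose
    ("bob", "salary", 90000),       # private fact about another agent
]
print(breaches_privacy(facts, explainee="alice"))  # [('bob', 'salary')]
```

Under this toy model, an explainability algorithm would filter or withhold any candidate explanation for which the check returns a non-empty list.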

Research supported by projects AI4EU (H2020-825619), TAILOR (H2020-952215), 2019DI17, Humane-AI-Net (H2020-952026), Crowd4SDG (H2020-872944), and grant PID2019-104156GB-I00 funded by MCIN/AEI/10.13039/501100011033.



Author information


Corresponding author

Correspondence to Athina Georgara.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Georgara, A., Rodríguez-Aguilar, J.A., Sierra, C. (2023). Privacy-Aware Explanations for Team Formation. In: Aydoğan, R., Criado, N., Lang, J., Sanchez-Anguix, V., Serramia, M. (eds) PRIMA 2022: Principles and Practice of Multi-Agent Systems. PRIMA 2022. Lecture Notes in Computer Science(), vol 13753. Springer, Cham. https://doi.org/10.1007/978-3-031-21203-1_32


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21202-4

  • Online ISBN: 978-3-031-21203-1

  • eBook Packages: Computer Science (R0)
