Personalising Explainable Recommendations: Literature and Conceptualisation

  • Mohammad Naiseh (corresponding author)
  • Nan Jiang
  • Jianbing Ma
  • Raian Ali
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1160)

Abstract

Explanations in intelligent systems aim to enhance users' understanding of the system's reasoning process and its resulting decisions and recommendations. Explanations typically increase trust, user acceptance and retention. The need for explanations is on the rise due to increasing public concern about AI and the emergence of new laws, such as the General Data Protection Regulation (GDPR) in Europe. However, users differ in their needs for explanations, and such needs can depend on their dynamic context. Explanations also risk being perceived as information overload, which makes personalisation all the more necessary. In this paper, we review the literature on personalising explanations in intelligent systems. We synthesise a conceptualisation that brings together the various aspects considered important for personalisation needs and implementation. Moreover, we identify several challenges that need further research, including the frequency of explanations and their evolution in tandem with the ongoing user experience.
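To make the abstract's core argument concrete, consider how a system might gate and tailor explanations to a user model and their dynamic context. The following is a minimal, hypothetical Python sketch: the UserModel fields, the thresholds, and the pick_explanation function are illustrative assumptions of ours, not a design proposed in the paper.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UserModel:
        expertise: float        # 0.0 (novice) .. 1.0 (expert), learned over time
        explanations_seen: int  # explanations this user has already received
        task_pressure: float    # 0.0 (relaxed) .. 1.0 (time-critical context)

    def pick_explanation(user: UserModel, why: str, how: str) -> Optional[str]:
        """Return a short 'why' justification, a detailed 'how' explanation,
        or None when showing either would likely be perceived as overload."""
        # Frequency: stop repeating explanations an experienced user has absorbed.
        if user.explanations_seen > 10 and user.expertise > 0.7:
            return None
        # Dynamic context: under time pressure, prefer the short justification.
        if user.task_pressure > 0.5:
            return why
        # Experts get reasoning-process detail; novices get the outcome rationale.
        return how if user.expertise > 0.5 else why

    # Example: a novice in a relaxed context receives the short justification.
    novice = UserModel(expertise=0.2, explanations_seen=1, task_pressure=0.1)
    print(pick_explanation(
        novice,
        why="Recommended because you liked similar items.",
        how="A nearest-neighbour model matched 12 users with similar histories."))

In this sketch, frequency and context gate whether an explanation appears at all before its level of detail is chosen, mirroring the open challenges the paper raises around explanation frequency and evolution with the ongoing user experience.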

Keywords

Explanations · Personalisation · Human-computer interaction · Intelligent systems

Notes

Acknowledgments

This work is partially funded by iQ HealthTech and the Bournemouth University PGR development fund.


Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Mohammad Naiseh (1), corresponding author
  • Nan Jiang (1)
  • Jianbing Ma (2)
  • Raian Ali (3)
  1. Faculty of Science and Technology, Bournemouth University, Poole, UK
  2. Chengdu University of Information Technology, Chengdu, China
  3. Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar