FaiRecSys: mitigating algorithmic bias in recommender systems

  • Bora Edizel
  • Francesco Bonchi
  • Sara Hajian
  • André Panisson
  • Tamir Tassa
Regular Paper


Recommendation and personalization are technologies that increasingly influence our daily decisions. However, as we show empirically in this paper, the bias that exists in the real world, and which is reflected in the training data, can be modeled and amplified by recommender systems and ultimately returned to users as biased recommendations. This feedback process creates a self-perpetuating loop which progressively strengthens the filter bubbles we live in. Biased recommendations can also reinforce stereotypes, such as those based on gender or ethnicity, possibly resulting in disparate impact. In this paper we address the problem of algorithmic bias in recommender systems. In particular, we highlight the connection between the predictability of sensitive features and bias in recommendation results, and we offer a theoretically founded bound on recommendation bias based on that connection. We then formalize a fairness constraint and the price that one has to pay, in terms of alterations to the recommendation matrix, in order to achieve fair recommendations. Finally, we propose FaiRecSys—an algorithm that mitigates algorithmic bias by post-processing the recommendation matrix with minimum impact on the utility of the recommendations provided to end-users.
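The connection between bias and the predictability of sensitive features can be illustrated with a small sketch: if a classifier can recover a user's sensitive attribute (e.g., gender) from that user's row of the recommendation matrix better than chance, the recommendations leak that attribute. The snippet below is an illustrative toy, not the paper's algorithm: it uses a simple nearest-centroid predictor evaluated on its own training data, synthetic inputs `R` and `s`, and measures predictability as 1/2 minus the balanced error rate (BER), so 0 means unpredictable and 0.5 means perfectly predictable.

```python
import numpy as np

def balanced_error_rate(y_true, y_pred):
    """Average of the per-group error rates (robust to class imbalance)."""
    err0 = np.mean(y_pred[y_true == 0] != 0)
    err1 = np.mean(y_pred[y_true == 1] != 1)
    return (err0 + err1) / 2.0

def predictability_gap(R, s):
    """1/2 minus the BER of a nearest-centroid predictor of s from rows of R.
    0   => the sensitive attribute is unpredictable from the recommendations;
    0.5 => it is perfectly predictable (maximal leakage)."""
    c0 = R[s == 0].mean(axis=0)
    c1 = R[s == 1].mean(axis=0)
    pred = (np.linalg.norm(R - c1, axis=1)
            < np.linalg.norm(R - c0, axis=1)).astype(int)
    return 0.5 - balanced_error_rate(s, pred)

rng = np.random.default_rng(0)
s = np.repeat([0, 1], 50)  # synthetic binary sensitive attribute

# Strongly biased recommendations: each group gets a disjoint set of items.
R_biased = np.zeros((100, 20))
R_biased[:50, :10] = 1
R_biased[50:, 10:] = 1

# Recommendations assigned independently of the group.
R_random = rng.integers(0, 2, size=(100, 20)).astype(float)

print(predictability_gap(R_biased, s))  # close to 0.5: heavy leakage
print(predictability_gap(R_random, s))  # much smaller: little leakage
```

In the spirit of the abstract, a fair post-processing step would then alter the recommendation matrix as little as possible while driving such a predictability gap below an acceptable threshold.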


Keywords: Algorithmic bias · Recommender systems · Fairness · Privacy




Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Pompeu Fabra University, Barcelona, Spain
  2. ISI Foundation, Torino, Italy
  3. Eurecat, Barcelona, Spain
  4. The Open University, Ra’anana, Israel
