
Achieving Fair Treatment in Algorithmic Classification

  • Andrew Morgan
  • Rafael Pass
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11239)

Abstract

Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today's classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following the work of Hardt et al. (NIPS'16), we consider and formalize the task of sanitizing an unfair classifier \(\mathcal {C}\) into a classifier \(\mathcal {C}'\) satisfying an approximate notion of "equalized odds", which we call fair treatment. Our main result shows how to take any (possibly unfair) classifier \(\mathcal {C}\) over a finite outcome space and transform it into a classifier \(\mathcal {C}'\) that satisfies fair treatment. The transformation simply perturbs the output of \(\mathcal {C}\) according to a distribution learned with only black-box access to samples of labeled, previously classified data. We additionally show that our derived classifier is near-optimal in terms of accuracy, and we experimentally evaluate the performance of our method.
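To illustrate the flavor of such a black-box sanitizer, the following is a minimal sketch (not the authors' exact construction) of equalized-odds post-processing in the spirit of Hardt et al. (NIPS'16), restricted for simplicity to a binary classifier and two groups. Using only labeled, previously classified samples, it learns perturbation probabilities p[a][ŷ] = Pr[output 1 | group a, raw prediction ŷ] that maximize accuracy subject to equal true- and false-positive rates across the two groups. The function name fit_fair_postprocessing and the binary, two-group restriction are illustrative assumptions; the paper's result covers any finite outcome space.

```python
# A hedged sketch of equalized-odds post-processing for a binary
# classifier over two groups: learn, from labeled and previously
# classified samples, the probability of outputting 1 for each
# (group, prediction) pair, maximizing accuracy subject to equal
# true/false positive rates across groups.

import numpy as np
from scipy.optimize import linprog

def fit_fair_postprocessing(group, y_true, y_pred):
    """Return p[a][yhat] = Pr[output 1 | group a, classifier output yhat]."""
    group, y_true, y_pred = map(np.asarray, (group, y_true, y_pred))

    # Empirical rates per group, estimated from black-box samples only.
    # (Assumes each group contains both positive and negative examples.)
    stats = {}
    for a in (0, 1):
        mask = group == a
        tpr = y_pred[mask & (y_true == 1)].mean()  # Pr[yhat=1 | y=1, a]
        fpr = y_pred[mask & (y_true == 0)].mean()  # Pr[yhat=1 | y=0, a]
        w = mask.mean()                            # Pr[A = a]
        base = y_true[mask].mean()                 # Pr[Y = 1 | A = a]
        stats[a] = (tpr, fpr, w, base)

    # Variables x = [p_{0,0}, p_{0,1}, p_{1,0}, p_{1,1}].  The derived
    # classifier's rates are linear in x:
    #   TPR'_a = p_{a,1} * tpr_a + p_{a,0} * (1 - tpr_a)
    #   FPR'_a = p_{a,1} * fpr_a + p_{a,0} * (1 - fpr_a)
    def rate_coeffs(a, r):
        c = np.zeros(4)
        c[2 * a] = 1 - r      # coefficient on p_{a,0}
        c[2 * a + 1] = r      # coefficient on p_{a,1}
        return c

    tpr0, fpr0, w0, b0 = stats[0]
    tpr1, fpr1, w1, b1 = stats[1]

    # Fair treatment (equalized odds): TPR'_0 = TPR'_1 and FPR'_0 = FPR'_1.
    A_eq = np.vstack([
        rate_coeffs(0, tpr0) - rate_coeffs(1, tpr1),
        rate_coeffs(0, fpr0) - rate_coeffs(1, fpr1),
    ])
    b_eq = np.zeros(2)

    # Accuracy = sum_a w_a * [b_a * TPR'_a + (1 - b_a) * (1 - FPR'_a)];
    # dropping the constant term and negating for minimization:
    obj = -(w0 * (b0 * rate_coeffs(0, tpr0) - (1 - b0) * rate_coeffs(0, fpr0))
            + w1 * (b1 * rate_coeffs(1, tpr1) - (1 - b1) * rate_coeffs(1, fpr1)))

    res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
    p = res.x
    return {a: {yhat: p[2 * a + yhat] for yhat in (0, 1)} for a in (0, 1)}
```

To classify a new point from group a whose raw prediction is ŷ, one would output 1 with probability p[a][ŷ]. The equality constraints force the derived classifier's conditional error rates to be identical across groups, which is precisely the fair-treatment condition; the linear program is always feasible (e.g., any constant p satisfies the constraints), so the solver returns the accuracy-maximizing fair perturbation.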

References

  1. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: How we analyzed the COMPAS recidivism algorithm. ProPublica (2016). https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
  2. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine bias: risk assessments in criminal sentencing. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  3. Chouldechova, A.: Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. In: FATML (2016)
  4. Dwork, C.: Differential privacy. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 1–12. Springer, Heidelberg (2006). https://doi.org/10.1007/11787006_1
  5. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS 2012, pp. 214–226. ACM, New York (2012)
  6. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: NIPS (2016)
  7. Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent trade-offs in the fair determination of risk scores. In: ITCS (2017)
  8. Pearl, J.: Direct and indirect effects. In: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, UAI 2001, pp. 411–420. Morgan Kaufmann Publishers Inc., San Francisco (2001)
  9. Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment and disparate impact: learning classification without disparate mistreatment (2016). https://arxiv.org/abs/1610.08452

Copyright information

© International Association for Cryptologic Research 2018

Authors and Affiliations

  1. Cornell University, Ithaca, USA
  2. Cornell Tech, New York City, USA
