Artificial Intelligence and Discrimination: Discriminating Against Discriminatory Systems

  • Alexander Tischbirek
Chapter

Abstract

AI promises to provide fast, consistent, and rational assessments. Nevertheless, algorithmic decision-making, too, has proven to be potentially discriminatory. EU antidiscrimination law is equipped with an appropriate doctrinal toolkit to face this new phenomenon. This is particularly true in view of the legal recognition of indirect discrimination, which no longer requires strict proof of causality but focuses instead on conspicuous correlations. As a result, antidiscrimination law depends heavily on knowledge about vulnerable groups, on both a conceptual and a factual level. This chapter therefore recommends a partial realignment of the law towards a paradigm of knowledge creation when it is faced with potentially discriminatory AI.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Faculty of Law, Humboldt-Universität zu Berlin, Berlin, Germany