Abstract
This paper explores the implications of big data and artificial intelligence (AI) for discrimination. First, we analyse the technical implications of AI, algorithms and machine learning for the protection of privacy and equality. The second part of the paper is devoted to an analysis of the EU legal framework, with the aim of setting out a minimum guide for protection against discrimination. Finally, we conclude by proposing a compliance programme for preventing discrimination and privacy violations arising from the use of AI.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Ancos, H. (2020). AI and Discrimination. A Proposal for a Compliance System for Protecting Privacy and Equality. In: Pons, J. (ed.) Inclusive Robotics for a Better Society. INBOTS 2018. Biosystems & Biorobotics, vol. 25. Springer, Cham. https://doi.org/10.1007/978-3-030-24074-5_19
Print ISBN: 978-3-030-24073-8
Online ISBN: 978-3-030-24074-5
eBook Packages: Intelligent Technologies and Robotics (R0)