
AI and Discrimination. A Proposal for a Compliance System for Protecting Privacy and Equality

  • Conference paper
  • First Online:
Inclusive Robotics for a Better Society (INBOTS 2018)

Part of the book series: Biosystems & Biorobotics ((BIOSYSROB,volume 25))


Abstract

This paper explores the implications of big data and AI for discrimination. First, we will analyse the technical implications of artificial intelligence, algorithms and machine learning for the protection of privacy and equality. The second part of the paper will be devoted to an analysis of the EU legal framework, in order to establish a minimum guide for protection against discrimination. Finally, we will conclude by proposing a compliance programme for preventing discrimination and privacy violations arising from the use of AI.



Corresponding author

Correspondence to Helena Ancos.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ancos, H. (2020). AI and Discrimination. A Proposal for a Compliance System for Protecting Privacy and Equality. In: Pons, J. (ed.) Inclusive Robotics for a Better Society. INBOTS 2018. Biosystems & Biorobotics, vol. 25. Springer, Cham. https://doi.org/10.1007/978-3-030-24074-5_19
