Ethics and Bias in Machine Learning: A Technical Study of What Makes Us “Good”

Abstract

The field of machine ethics is growing in recognition and energy, but to date bias in machine learning algorithms has outpaced it. Bias is a complicated term, carrying both good and bad connotations in the field of algorithmic prediction. Especially in circumstances with legal and ethical consequences, we must scrutinize the decisions these machines produce to ensure fairness. This chapter addresses ethics at the algorithmic level of autonomous machines. There is no single solution to machine bias; the right approach depends on the context of the given system and on the most reasonable way to avoid biased decisions while maintaining the highest algorithmic functionality. To assist in determining the best solution, we turn to machine ethics.
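The kind of fairness audit the abstract describes can be made concrete with a minimal sketch. The example below (not from the chapter; the metric choice, group labels, and numbers are illustrative assumptions) checks one simple fairness criterion, demographic parity: whether a model grants favorable outcomes to two groups at similar rates.

```python
# Minimal sketch (illustrative, not the chapter's method): auditing a
# classifier's decisions for demographic parity between two groups.

def positive_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in favorable-decision rates between two groups.
    A gap near 0 suggests parity; a large gap flags possible bias."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical model outputs for two demographic groups:
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3/8 favorable

gap = demographic_parity_gap(group_a, group_b)
print(round(gap, 3))  # 0.375
```

Demographic parity is only one of several competing fairness criteria (equalized odds and counterfactual fairness are others), and which criterion is appropriate depends on the context of the system being audited.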

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Shadowen, N. (2019). Ethics and Bias in Machine Learning: A Technical Study of What Makes Us “Good”. In: Lee, N. (eds) The Transhumanism Handbook. Springer, Cham. https://doi.org/10.1007/978-3-030-16920-6_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-16920-6_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16919-0

  • Online ISBN: 978-3-030-16920-6

  • eBook Packages: Computer Science, Computer Science (R0)
