
Ethics and Bias in Machine Learning: A Technical Study of What Makes Us “Good”

  • Nicole Shadowen
Chapter

Abstract

The field of machine ethics is gaining recognition and momentum, but to date bias in machine learning algorithms has outpaced it. Bias is a complicated term, carrying both positive and negative connotations in the field of algorithmic prediction. Especially where decisions carry legal and ethical consequences, we must examine the outputs of these systems to ensure fairness. This paper addresses ethics at the algorithmic level of autonomous machines. There is no single solution to machine bias; the appropriate remedy depends on the context of the given system and on the most reasonable way to avoid biased decisions while preserving the greatest possible algorithmic functionality. To assist in determining the best solution, we turn to machine ethics.
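
As an illustrative sketch only (not taken from the chapter), the snippet below shows one simple way a fairness check on a system's outputs might look: it computes the gap in positive-decision rates between groups, a basic demographic-parity measure. The function name, the example data, and the interpretation of the gap are assumptions made for demonstration.

    # Illustrative sketch (not from the chapter): one simple notion of group
    # fairness, demographic parity, computed on hypothetical model decisions.
    # All names and data below are assumptions made for demonstration only.

    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest gap in positive-decision rates between groups,
        along with the per-group rates.

        predictions: iterable of 0/1 model decisions
        groups: iterable of group labels, same length as predictions
        """
        totals = defaultdict(int)
        positives = defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical example: two groups receiving noticeably different decisions.
    preds  = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)  # {'a': 0.6, 'b': 0.2}
    print(gap)    # 0.4 -> a large gap suggests the decisions warrant closer scrutiny

A metric like this is only one possible check; which measure is appropriate, and how large a gap is acceptable, depends on the context of the system, which is the point the abstract makes.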

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Nicole Shadowen
  1. John Jay College of Criminal Justice, New York City, USA
