Abstract
Machine ethics is gaining recognition and momentum, yet bias in machine learning algorithms continues to outpace it. In algorithmic prediction, "bias" is a complicated term that carries both technical and pejorative meanings. Especially where decisions have legal or ethical consequences, we must scrutinize the outputs of these systems to ensure fairness. This paper addresses ethics at the algorithmic level of autonomous machines. There is no single solution to machine bias: the appropriate remedy depends on the context of the given system and on the most reasonable way to avoid biased decisions while preserving the greatest algorithmic functionality. To help determine the best solution in each case, we turn to machine ethics.
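The abstract's call to "study the results of these machines to ensure fairness" can be made concrete with a simple group-outcome audit. The sketch below is illustrative only: the data is synthetic, the group labels are hypothetical, and the demographic-parity gap it computes is just one of several fairness metrics discussed in the bias literature.

```python
# Illustrative fairness audit: compare favorable-outcome rates across two groups.
# All data here is synthetic; real audits would use a model's actual decisions.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Synthetic binary decisions (1 = favorable) for hypothetical groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group A rate 0.75 vs group B rate 0.25
```

A gap of zero would mean both groups receive favorable decisions at the same rate; how large a gap is tolerable, and whether demographic parity is even the right criterion, is exactly the context-dependent judgment the paper argues machine ethics must inform.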
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Shadowen, N. (2019). Ethics and Bias in Machine Learning: A Technical Study of What Makes Us “Good”. In: Lee, N. (eds) The Transhumanism Handbook. Springer, Cham. https://doi.org/10.1007/978-3-030-16920-6_12
Print ISBN: 978-3-030-16919-0
Online ISBN: 978-3-030-16920-6