Feminist AI: Can We Expect Our AI Systems to Become Feminist?

  • Galit Wellner
  • Tiran Rothman
Research Article

Abstract

The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It has become evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted: in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender bias in AI technologies as a multi-faceted phenomenon, and the linguistic explanation as too narrow. The analysis then moves from the linguistic aspects to the relational ones, drawing on postphenomenology. One of the analytical tools of this theory is the "I-technology-world" formula, which models our relations with technologies and, through them, with the world. Because AI technologies give rise to new types of relations in which the technology has an "enhanced technological intentionality," a new formula is suggested: "I-algorithm-dataset." In the third part of the article, four types of solutions to the gender bias in AI are reviewed: ignoring any reference to gender, revealing the considerations that led the algorithm to its decision, designing algorithms that are not biased, or, lastly, involving humans in the process. In order to avoid gender bias, we can recall a basic feminist insight: visibility matters. Users and developers should be aware of the possibility of gender and racial biases, and try to avoid them, bypass them, or eliminate them altogether.
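
To make the first of these remedies concrete, the following is a minimal, purely illustrative Python sketch, not taken from the article. It measures a gap in selection rates between gender groups (a simple demographic-parity check) and then applies the "ignore any reference to gender" remedy by dropping the gender field. All data, field names, and the parity measure are illustrative assumptions.

```python
# Illustrative sketch (not from the article): measuring a gender gap in an
# AI system's screening decisions, then applying the first remedy the
# article reviews: omitting any reference to gender. All data and field
# names below are hypothetical.

# Hypothetical screening outcomes: 1 = shortlisted, 0 = rejected.
records = [
    {"gender": "F", "years_experience": 6, "shortlisted": 0},
    {"gender": "F", "years_experience": 8, "shortlisted": 1},
    {"gender": "M", "years_experience": 6, "shortlisted": 1},
    {"gender": "M", "years_experience": 8, "shortlisted": 1},
]

def selection_rate(rows, gender):
    """Share of applicants of the given gender who were shortlisted."""
    group = [r for r in rows if r["gender"] == gender]
    return sum(r["shortlisted"] for r in group) / len(group)

# A simple demographic-parity check: compare selection rates across groups.
gap = selection_rate(records, "M") - selection_rate(records, "F")
print(f"Selection-rate gap (M - F): {gap:.2f}")

# Remedy 1, "ignoring any reference to gender": drop the field before
# training. Note that proxy features (e.g., word choices in a resume) can
# still encode gender, which is one reason the article treats this as only
# one of four partial solutions rather than a complete fix.
blind_records = [{k: v for k, v in r.items() if k != "gender"} for r in records]
```

As the closing comment notes, removing the explicit gender field does not guarantee gender-blind outcomes, since correlated features can act as proxies; this is why the article pairs this remedy with transparency, debiased design, and human involvement.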

Keywords

Feminist philosophy of technology · Postphenomenology · Behavioral economics · Bias · Dataset

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. The NB School of Design, Tel Aviv University, Haifa, Israel
