Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or even better, and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that, since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
Keywords: Algorithmic decision-making · Transparency · Explainable AI · Intentional stance
The authors wish to thank the participants of two roundtables, one held in Oxford, November 23–24, 2017, in partnership with the Uehiro Centre for Practical Ethics, University of Oxford, and the other in Dunedin, December 11–12, at the University of Otago.
This research was supported by a New Zealand Law Foundation grant (2016/ILP/10).
Compliance with Ethical Standards
Conflict of Interest
AK works for Soul Machines Ltd under contract. JZ, JM, and CG have no other disclosures or relevant affiliations apart from the appointments above.
- Allport, G. W. (1954). The nature of prejudice. Cambridge: Addison-Wesley.
- Aronson, & Dyer. (2013). Judicial review of administrative action (5th ed.). Sydney: Lawbook Co.
- Baker, J. H. (2002). An introduction to English legal history (4th ed.). New York: Oxford University Press.
- Barocas, S., & Selbst, A. D. (2015). Big data’s disparate impact. California Law Review, 104, 671–732.
- Begby, E. (2013). The epistemology of prejudice. Thought, 2(2), 90–99.
- Bezrukova, K., Spell, C. S., Perry, J. L., & Jehn, K. A. (2016). A meta-analytical integration of over 40 years of research on diversity training evaluation. Available at: http://scholarship.sha.cornell.edu/articles/974.
- Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). “It’s reducing a human being to a percentage”: perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York: ACM.
- Cane, P. (2011). Administrative law (5th ed.). New York: Oxford University Press.
- Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78, 67–90.
- Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. Proceedings of KDD’17. Available at: https://arxiv.org/pdf/1701.08230.pdf.
- Corbett-Davies, S., Pierson, E., Feller, A., & Goel, S. (2016). A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Washington Post.
- Crawford, K. (2016). Artificial intelligence’s white guy problem. New York Times.
- Damasio, A. R. (1994). Descartes’ error: emotion, reason, and the human brain. New York: Putnam’s Sons.
- Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., Felzmann, H., Haklay, M., Khoo, S., Morison, J., Murphy, M. H., O’Brolchain, N., Schafer, B., & Shankar, K. (2017). Algorithmic governance: developing a research agenda through the power of collective intelligence. Big Data and Society, 1–21.
- Dennett, D. (1987). The intentional stance. Cambridge: MIT Press.
- Dennett, D. (1995). Darwin’s dangerous idea: evolution and the meanings of life. New York: Simon & Schuster.
- Dutta, S. (2017). Do computers make better bank managers than humans? The Conversation.
- Dworkin, R. (1977). Taking rights seriously. London: Duckworth.
- Dworkin, R. (1986). Law’s empire. London: Fontana Books.
- Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a “right to an explanation” is probably not the remedy you are looking for. Duke Law and Technology Review, 16(1), 18–84.
- Edwards, L., & Veale, M. (2018). Enslaving the algorithm: From a “right to an explanation” to a “right to better decisions”? IEEE Security & Privacy.
- Erdélyi, O. J., & Goldsmith, J. (2018). Regulating artificial intelligence: proposal for a global solution. AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Available at: http://www.aiesconference.com/wpcontent/papers/main/AIES_2018_paper_13.pdf.
- Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St Martin’s Press.
- Fodor, J. A. (1981). Three cheers for propositional attitudes. In J. A. Fodor (Ed.), RePresentations: philosophical essays on the foundations of cognitive science. Cambridge: MIT Press.
- Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics 3: speech acts (pp. 41–58). New York: Academic Press.
- Griffiths, J. (2016). New Zealand passport robot thinks this Asian man’s eyes are closed. CNN.com, December 9, 2016.
- Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. 30th Conference on Neural Information Processing Systems (NIPS 2016). Available at: https://arxiv.org/pdf/1610.02413v1.pdf.
- Heald, D. (2006). Transparency as an instrumental value. In C. Hood & D. Heald (Eds.), Transparency: the key to better governance? (pp. 59–73). Oxford: Oxford University Press.
- Johnson, J. A. (2006). Technology and pragmatism: from value neutrality to value criticality. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2154654.
- Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. 8th Conference on Innovations in Theoretical Computer Science (ITCS 2017). Available at: https://arxiv.org/pdf/1609.05807.pdf.
- Klingele, C. (2016). The promises and perils of evidence-based corrections. Notre Dame Law Review, 91(2), 537–584.
- Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica.org, May 23, 2016.
- Levendowski, A. (2017). How copyright law can fix artificial intelligence’s implicit bias problem. Washington Law Review (forthcoming). Available at: https://ssrn.com/abstract=3024938.
- Lum, K., & Isaac, W. (2016). To predict and serve? Bias in police-recorded data. Significance, 14–19.
- Miller, T. (2017). Explanation in artificial intelligence: insights from the social sciences. Available at: https://arxiv.org/pdf/1706.07269.pdf.
- Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data and Society, 1–21.
- Muehlhauser, L. (2013). Transparency in safety-critical systems. Intelligence.org, August 25, 2013. Available at: https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/.
- Nusser, S. (2009). Robust learning in safety-related domains: machine learning methods for solving safety-related application problems. Doctoral dissertation, Otto-von-Guericke-Universität Magdeburg. Available at: https://pdfs.semanticscholar.org/48c2/e5641101a4e5250ad903828c02025d269a1a.pdf.
- Oliver, W. M., & Batra, R. (2015). Standards of legitimacy in criminal negotiations. Harvard Negotiation Law Review, 20, 61–120.
- Oswald, M., & Grace, J. (2016). Intelligence, policing and the use of algorithmic analysis: A freedom of information-based study. Journal of Information Rights, Policy and Practice, 1(1). Available at: https://journals.winchesteruniversitypress.org/index.php/jirpp/article/view/16.
- Pasquale, F. (2014). The black box society: the secret algorithms that control money and information. Cambridge: Harvard University Press.
- Piattelli-Palmarini, M. (1995). La réforme du jugement ou comment ne plus se tromper. Paris: Odile Jacob.
- Plous, S. (2003a). The psychology of prejudice, stereotyping, and discrimination. In S. Plous (Ed.), Understanding prejudice and discrimination (pp. 3–48). New York: McGraw-Hill.
- Plous, S. (2003b). Understanding prejudice and discrimination. New York: McGraw-Hill.
- Pohl, J. (2008). Cognitive elements of human decision making. In G. Phillips-Wren, N. Ichalkaranje, & L. C. Jain (Eds.), Intelligent decision making: an AI-based approach (pp. 3–40). Berlin: Springer.
- Pomerol, J.-C., & Adam, F. (2008). Understanding human decision making: a fundamental step towards effective intelligent decision support. In G. Phillips-Wren, N. Ichalkaranje, & L. C. Jain (Eds.), Intelligent decision making: an AI-based approach (pp. 41–76). Berlin: Springer.
- Prat, A. (2006). The more closely we are watched, the better we behave? In C. Hood & D. Heald (Eds.), Transparency: the key to better governance? (pp. 91–103). Oxford: Oxford University Press.
- Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 27–48). Hillsdale: Lawrence Erlbaum Associates.
- Schwab, K. (2016). The fourth industrial revolution. Geneva: Crown.
- Stich, S. (1983). From folk psychology to cognitive science. Cambridge: MIT Press.
- Tatman, R. (2016). Google’s speech recognition has a gender bias. Making Noise and Hearing Things.
- Van Otterlo, M. (2013). A machine learning view on profiling. In M. Hildebrandt & K. de Vries (Eds.), Privacy, due process and the computational turn: philosophers of law meet philosophers of technology (pp. 41–64). Abingdon: Routledge.
- Wachter, S., Mittelstadt, B. D., & Floridi, L. (2017a). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6).