
Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

  • Research Article
  • Published in Philosophy & Technology

Abstract

We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.


Notes

  1. Here, we have in mind certain professions that stand to lose out to automation, e.g. conveyancing, accountancy, and the like.

  2. Traditional algorithms, like expert systems, could be inscrutable after the fact: even simple rules can generate complex and inscrutable emergent properties. But these effects were not baked in. We are grateful to an anonymous reviewer for pointing this out to us.

  3. See, e.g. <https://standards.ieee.org/develop/project/7001.html>.

  4. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, p. 1.

  5. Strictly speaking, this “good practice” recommendation (Annex 1) pertains to Article 15, not Article 22, of the GDPR. Article 15(1)(h) requires the disclosure of “meaningful information about the logic involved” in certain kinds of fully automated decisions.

  6. The merits of various pragmatic theories of truth are not especially relevant to us here. Another way we could put our point is that utility in the service of one aim is not utility in the service of another.

  7. Actually, many private, purely personal decisions (regarding, e.g. what to study, which career to pursue, whether to rent or purchase) are also frequently made in consultation with friends, family, mentors, career advisers, and so on.

  8. Our citing Damasio (1994) might seem odd, for we are suggesting that the effects of emotions may be reason-distorting, whereas for Damasio this is not the main point. Damasio sees emotions as an essential component of rational thought (and we agree). Nevertheless, he does see emotions as engendering biases in some cases. For instance, he says: “I will not deny that uncontrolled or misdirected emotion can be a major source of irrational behavior. Nor will I deny that seemingly normal reason can be disturbed by subtle biases rooted in emotion” (1994, pp. 52–53, our emphasis). (He goes on to say: “Nonetheless, (…) [r]eduction in emotion may constitute an equally important source of irrational behavior.” But the key point is that he does see emotions as a potential source of bias in some contexts.)

  9. Copyright law is not the only culprit here. Other factors impeding access include privacy and income disparities.

  10. House v. The King (1936) 55 CLR 499 (High Court of Australia).

  11. Devries v. Australian National Railways Commission (1993) 177 CLR 472 (High Court of Australia); Abalos v. Australian Postal Commission (1990) 171 CLR 167 (High Court of Australia); cf. Fox v. Percy (2003) 214 CLR 118 (High Court of Australia).

  12. See, e.g. Supreme Court Act, s. 101(2) (New South Wales).

  13. This classification is not to be confused with the more traditional one found in the standards literature, e.g. Coglianese and Lazer (2003).

  14. We are grateful to an anonymous reviewer for bringing these to our attention.

References

  • Allport, G. W. (1954). The nature of prejudice. Cambridge: Addison-Wesley.


  • Angie, A. D., Connelly, S., Waples, E. P., & Kligyte, V. (2011). The influence of discrete emotions on judgement and decision-making: a meta-analytic review. Cognition and Emotion, 25(8), 1393–1422.


  • Aronson, M., & Dyer, B. (2013). Judicial review of administrative action (5th ed.). Sydney: Lawbook Co.


  • Baker, J. H. (2002). An introduction to English legal history (4th ed.). New York: Oxford University Press.


  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671–732.


  • Begby, E. (2013). The epistemology of prejudice. Thought, 2(2), 90–99.


  • Bezrukova, K., Spell, C. S., Perry, J. L., & Jehn, K. A. (2016) A meta-analytical integration of over 40 years of research on diversity training evaluation. Available at: http://scholarship.sha.cornell.edu/articles/974.

  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J. & Shadbolt, N. (2018) “It’s reducing a human being to a percentage”: perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York: ACM.

  • Burrell, J. (2016). How the machine “thinks”: understanding opacity in machine learning algorithms. Big Data and Society, 3(1), 1–12.


  • Cane, P. (2011). Administrative law (5th ed.). New York: Oxford University Press.


  • Chopra, S., & White, L. F. (2011). A legal theory for autonomous artificial agents. Ann Arbor: University of Michigan Press.


  • Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78, 67–90.


  • Coglianese, C., & Lazer, D. (2003). Management-based regulation: prescribing private management to achieve public goals. Law and Society Review, 37(4), 691–730.


  • Corbett-Davies, S., Pierson, E., Feller, A., Goel, S. & Huq, A. (2016) Algorithmic decision making and the cost of fairness. Proceedings of KDD’17. Available at: https://arxiv.org/pdf/1701.08230.pdf.

  • Corbett-Davies, S., Pierson, E., Feller, A. & Goel, S. (2017) A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Washington Post.

  • Crawford, K. (2016) Artificial intelligence’s white guy problem. New York Times.

  • Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538, 311–313.


  • Damasio, A. R. (1994). Descartes’ error: emotion, reason, and the human brain. New York: Putnam’s Sons.


  • Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., Felzmann, H., Haklay, M., Khoo, S., Morison, J., Murphy, M. H., O’Brolchain, N., Schafer, B., & Shankar, K. (2017). Algorithmic governance: developing a research agenda through the power of collective intelligence. Big Data and Society, 4(2), 1–21.

  • Dennett, D. (1987). The intentional stance. Cambridge: MIT Press.


  • Dennett, D. (1991). Real patterns. Journal of Philosophy, 88, 27–51.


  • Dennett, D. (1995). Darwin’s dangerous idea: evolution and the meanings of life. New York: Simon & Schuster.


  • Diakopoulos, N. (2015). Algorithmic accountability: journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415.


  • Dutta, S. (2017) Do computers make better bank managers than humans? The Conversation.

  • Dworkin, R. (1977). Taking rights seriously. London: Duckworth.


  • Dworkin, R. (1986). Law’s empire. London: Fontana Books.


  • Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a “right to an explanation” is probably not the remedy you are looking for. Duke Law and Technology Review, 16(1), 18–84.


  • Edwards, L. & Veale, M. (2018) Enslaving the algorithm: From a “right to an explanation” to a “right to better decisions”? IEEE Security & Privacy.

  • Erdélyi, O.J. & Goldsmith, J. (2018) Regulating artificial intelligence: proposal for a global solution. AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. Available at: http://www.aiesconference.com/wpcontent/papers/main/AIES_2018_paper_13.pdf.

  • Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St Martin’s Press.


  • Fodor, J. A. (1981). Three cheers for propositional attitudes. In J. A. Fodor (Ed.), RePresentations: philosophical essays on the foundations of cognitive science. Cambridge: MIT Press.


  • Forssbæck, J., & Oxelheim, L. (2014). The multifaceted concept of transparency. In J. Forssbæck & L. Oxelheim (Eds.), The Oxford handbook of economic and institutional transparency (pp. 3–31). New York: Oxford University Press.


  • Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347.


  • Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics 3: speech acts (pp. 41–58). New York: Academic Press.


  • Griffiths, J. (2016) New Zealand passport robot thinks this Asian man’s eyes are closed. CNN.com December 9, 2016.

  • Hardt, M., Price, E. & Srebro, N. (2016) Equality of opportunity in supervised learning. 30th Conference on Neural Information Processing Systems (NIPS 2016). Available at: https://arxiv.org/pdf/1610.02413v1.pdf.

  • Heald, D. (2006). Transparency as an instrumental value. In C. Hood & D. Heald (Eds.), Transparency: the key to better governance? (pp. 59–73). Oxford: Oxford University Press.


  • Hilton, D. J. (1990). Conversational processes and causal explanation. Psychological Bulletin, 107(1), 65–81.


  • Johnson, J.A. (2006). Technology and pragmatism: from value neutrality to value criticality. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network. Available at: http://papers.ssrn.com/abstract=2154654.

  • Kleinberg, J., Mullainathan, S. & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. 8th Conference on Innovations in Theoretical Computer Science (ITCS 2017). Available at: https://arxiv.org/pdf/1609.05807.pdf.

  • Klingele, C. (2016). The promises and perils of evidence-based corrections. Notre Dame Law Review, 91(2), 537–584.


  • Langer, E., Blank, A. E., & Chanowitz, B. (1978). The mindlessness of ostensibly thoughtful action: the role of “placebic” information in interpersonal interaction. Journal of Personality and Social Psychology, 36(6), 635–642.


  • Larson, J., Mattu, S., Kirchner, L. & Angwin, J. (2016) How we analyzed the COMPAS recidivism algorithm. ProPublica.org May 23, 2016.

  • Leslie, S.-J. (2017). The original sin of cognition: fear, prejudice and generalization. Journal of Philosophy, 114(8), 393–421.


  • Levendowski, A. (2017) How copyright law can fix artificial intelligence’s implicit bias problem. Washington Law Review (forthcoming). Available at: https://ssrn.com/abstract=3024938.

  • Lombrozo, T. (2011). The instrumental value of explanations. Philosophy Compass, 6(8), 539–551.


  • Lum, K. & Isaac, W. (2016) To predict and serve? Bias in police-recorded data. Significance, 13(5), 14–19.

  • McEwen, R., Eldridge, J., & Caruso, D. (2018). Differential or deferential to media? The effect of prejudicial publicity on judge or jury. International Journal of Evidence and Proof, 22(2), 124–143.


  • Miller, T. (2017) Explanation in artificial intelligence: insights from the social sciences. Available at: https://arxiv.org/pdf/1706.07269.pdf.

  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data and Society, 3(2), 1–21.


  • Montavon, G., Bach, S., Binder, A., Samek, W., & Müller, K.-R. (2017). Explaining nonlinear classification decisions with Deep Taylor decomposition. Pattern Recognition, 65, 211–222.


  • Muehlhauser, L. (2013) Transparency in safety-critical systems. Intelligence.org August 25, 2013. Available at: https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/.

  • Nusser, S. (2009). Robust learning in safety-related domains: machine learning methods for solving safety-related application problems. Doctoral dissertation, Otto-von-Guericke-Universität Magdeburg. Available at: https://pdfs.semanticscholar.org/48c2/e5641101a4e5250ad903828c02025d269a1a.pdf.

  • Oliver, W. M., & Batra, R. (2015). Standards of legitimacy in criminal negotiations. Harvard Negotiation Law Review, 20, 61–120.


  • Oswald, M. & Grace, J. (2016). Intelligence, policing and the use of algorithmic analysis: A freedom of information-based study. Journal of Information Rights, Policy and Practice, 1(1). Available at: https://journals.winchesteruniversitypress.org/index.php/jirpp/article/view/16.

  • Pasquale, F. (2015). The black box society: the secret algorithms that control money and information. Cambridge: Harvard University Press.


  • Piattelli-Palmarini, M. (1995). La réforme du jugement ou comment ne plus se tromper. Paris: Odile Jacob.


  • Plous, S. (2003a). The psychology of prejudice, stereotyping, and discrimination. In S. Plous (Ed.), Understanding prejudice and discrimination (pp. 3–48). New York: McGraw-Hill.


  • Plous, S. (2003b). Understanding prejudice and discrimination. New York: McGraw-Hill.


  • Pohl, J. (2008). Cognitive elements of human decision making. In G. Phillips-Wren, N. Ichalkaranje, & L. C. Jain (Eds.), Intelligent decision making: an AI-based approach (pp. 3–40). Berlin: Springer.


  • Pomerol, J.-C., & Adam, F. (2008). Understanding human decision making: a fundamental step towards effective intelligent decision support. In G. Phillips-Wren, N. Ichalkaranje, & L. C. Jain (Eds.), Intelligent decision making: an AI-based approach (pp. 41–76). Berlin: Springer.


  • Prat, A. (2006). The more closely we are watched, the better we behave? In C. Hood & D. Heald (Eds.), Transparency: the key to better governance? (pp. 91–103). Oxford: Oxford University Press.


  • Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 27–48). Hillsdale: Lawrence Erlbaum Associates.


  • Schwab, K. (2016). The fourth industrial revolution. Geneva: Crown.


  • Stephan, W. G., & Finlay, K. (1999). The role of empathy in improving intergroup relations. Journal of Social Issues, 55(4), 729–743.


  • Stich, S. (1983). From folk psychology to cognitive science. Cambridge: MIT Press.


  • Tatman, R. (2016) Google’s speech recognition has a gender bias. Making Noise and Hearing Things.

  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science, 185, 1124–1131.


  • Van Otterlo, M. (2013). A machine learning view on profiling. In M. Hildebrandt & K. de Vries (Eds.), Privacy, due process and the computational turn: philosophers of law meet philosophers of technology (pp. 41–64). Abingdon: Routledge.


  • Veale, M., & Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law and Security Review, 34, 398–404.


  • Wachter, S., Mittelstadt, B. D., & Floridi, L. (2017a). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6), eaan6080.

  • Wachter, S., Mittelstadt, B. D., & Floridi, L. (2017b). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99.


  • Waldron, J. (1990). The law. London: Routledge.



Acknowledgments

The authors wish to thank the participants of two roundtables, one held in Oxford, November 23–24, 2017, in partnership with the Uehiro Centre for Practical Ethics, University of Oxford, and the other in Dunedin, December 11–12, at the University of Otago.

Funding

This research was supported by a New Zealand Law Foundation grant (2016/ILP/10).

Author information

Corresponding author

Correspondence to John Zerilli.

Ethics declarations

Conflict of Interest

AK works for Soul Machines Ltd under contract. JZ, JM, and CG have no disclosures or relevant affiliations beyond their academic appointments.

About this article

Cite this article

Zerilli, J., Knott, A., Maclaurin, J. et al. Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philos. Technol. 32, 661–683 (2019). https://doi.org/10.1007/s13347-018-0330-6
