
Artificial Intelligence and Transparency: Opening the Black Box

Chapter in: Regulating Artificial Intelligence

Abstract

The alleged opacity of AI has become a major political issue over the past few years. Opening the black box, so it is argued, is indispensable to identify encroachments on user privacy, to detect biases and to prevent other potential harms. What is less clear, however, is how the call for AI transparency can be translated into reasonable regulation. This chapter argues that designing AI transparency regulation is less difficult than often assumed. Regulators benefit from the fact that the legal system has already gained considerable experience with the question of how to shed light on partially opaque decision-making systems: human decisions. This experience provides lawyers with a realistic perspective on the functions of potential AI transparency legislation as well as with a set of legal instruments which can be employed to this end.


Notes

  1.

    Cf. only National Science and Technology Council Committee on Technology (2016), OECD Global Science Forum (2016), Asilomar Conference (2017), European Parliament (2017), Harhoff et al. (2018), Singapore Personal Data Protection Commission (2018), Agency for Digital Italy (2018), House of Lords Select Committee on Artificial Intelligence (2018), Villani (2018), European Commission (2018) and Datenethikkommission (2018).

  2.

    Bundesanstalt für Finanzdienstleistungsaufsicht (2018), pp. 144–145. See also Hermstrüwer, para 3; Hennemann, para 37.

  3.

    Important contributions to this debate include Mayer-Schönberger and Cukier (2013), pp. 176 et seq.; Zarsky (2013); Pasquale (2015); Burrell (2016); Diakopoulos (2016); Zweig (2016); Ananny and Crawford (2018).

  4.

    The initial proposal (Int. 1696–2017) would have added the text cited above to Section 23-502 of the Administrative Code of the City of New York. However, the law that was finally passed only established a task force designated to study how city agencies currently use algorithms. For a detailed account of the legislative process see legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0.

  5.

    Konferenz der Informationsfreiheitsbeauftragten (2018), p. 4.

  6.

    Arbeitsgruppe “Digitaler Neustart” (2018), p. 7.

  7.

    COM/2018/238 final—2018/0112 (COD).

  8.

    Rundfunkkommission der Länder (2018), pp. 25–26.

  9.

    As of January 2018, investment service providers in Germany are subject to notification requirements if they engage in algorithmic trading within the meaning of Section 80(2) of the Wertpapierhandelsgesetz (WpHG—Securities Trading Act). This provision implements the EU Markets in Financial Instruments Directive II (MiFID II).

  10.

    Cf. Mittelstadt et al. (2016), p. 6: ‘transparency is often naïvely treated as a panacea for ethical issues arising from new technologies.’ Similarly, Neyland (2016), pp. 50 et seq.; Crawford (2016), pp. 77 et seq.

  11.

    Burrell (2016), p. 1.

  12.

    Cf. Ananny and Crawford (2018), p. 983. For an extensive discussion of this point see Tsoukas (1997); Heald (2006), pp. 25–43; Costas and Grey (2016), p. 52; Fenster (2017).

  13.

    For references see Selbst and Barocas (2018), pp. 1089–1090.

  14.

    Article 29 Data Protection Working Party (2018), p. 14.

  15.

    Bundesverfassungsgericht 2 BvR 2134, 2159/92 ‘Maastricht’ (12 October 1993), BVerfGE 89, p. 185; 2 BvR 1877/97 and 50/98 ‘Euro’ (31 March 1998), BVerfGE 97, p. 369. On the values of transparency cf. Scherzberg (2000), pp. 291 et seq., 320 et seq., 336 et seq.; Gusy (2012), § 23 paras 18 et seq.; Scherzberg (2013), § 49 paras 13 et seq.

  16.

    Cf. CJEU C-92/11 ‘RWE Vertrieb AG v Verbraucherzentrale Nordrhein-Westfalen eV’ (21 March 2013), ECLI:EU:C:2013:180; CJEU C-26/13 ‘Árpád Kásler and Hajnalka Káslerné Rábai v OTP Jelzálogbank Zrt’ (30 April 2014), ECLI:EU:C:2014:282. See also Busch (2016). For a critical perspective on disclosure obligations in U.S. private law and in privacy law see Ben-Shahar and Schneider (2011) and Ben-Shahar and Chilton (2016).

  17.

    Merton (1968), pp. 71–72.

  18.

    Cf. Fassbender (2006), § 76 para 2; Hood and Heald (2006); Florini (2007).

  19.

    Sometimes, transparency regulation is identified with granting individual rights to information. However, such individual rights are only one element within a larger regulatory structure that governs the flow of information in society, see Part 4.

  20.

    On the difficulty of identifying the ‘boundaries’ of AI-based systems, which further complicates the quest for transparency, see Kaye (2018), para 3; Ananny and Crawford (2018), p. 983.

  21.

    Outside the specific context of AI, error detection and avoidance has become the subject of sophisticated research activities and extensive technical standardization efforts, cf. the international standard ISO/IEC 25000 ‘Software engineering—Software product Quality Requirements and Evaluation (SQuaRE)’ (created by ISO/IEC JTC 1/SC 07 Software and systems engineering). The German version ‘DIN ISO/IEC 25000 Software-Engineering – Quality Criteria and Evaluation of Software Products (SQuaRE) – Guideline for SQuaRE’ is maintained by NA 043 Information Technology and Applications Standards Committee (NIA) of the German Institute for Standardization.

  22.

    A much cited paper in this context is Sandvig et al. (2014), pp. 1 et seq.

  23.

    Recently, Munich Re and the German Research Centre for Artificial Intelligence (DFKI) have collaborated in auditing the technology behind a startup which uses AI to detect fraudulent online payments. The audit comprised a check of the company's data, its underlying algorithms, its statistical models and its IT infrastructure; after it was completed successfully, the startup was offered an insurance policy.

  24.

    Tutt (2017), pp. 89–90, describes several instances in which forensic experts from IBM and Tesla were unable to reconstruct ex post the reasons for a malfunction of their systems.

  25.

    See supra note 2.

  26.

    For the following see only Hildebrandt (2011), pp. 375 et seq.; van Otterlo (2013), pp. 41 et seq.; Leese (2014), pp. 494 et seq.; Burrell (2016), pp. 1 et seq.; Tutt (2017), pp. 83 et seq.

  27.

    For a detailed discussion see Bundesanstalt für Finanzdienstleistungsaufsicht (2018), pp. 188 et seq.

  28.

    Burrell (2016), p. 2.

  29.

    Even many conventional algorithms are constantly updated, which makes an ex-post evaluation difficult, cf. Schwartz (2015).

  30.

    Cf. IBM (2018).

  31.

    Ananny and Crawford (2018), p. 982. See also Diakopoulos (2016), p. 59.

  32.

    Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique. As further laid out in Article R311-3-1-2, created through Article 1 of the Décret n° 2017-330 du 14 mars 2017 relatif aux droits des personnes faisant l’objet de décisions individuelles prises sur le fondement d’un traitement algorithmique (available at www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000034194929&categorieLien=cid), the administration needs to provide information about (1) the degree and the way in which algorithmic processing contributes to the decision making, (2) which data are processed and where they come from, (3) according to which variables the data are treated and, where appropriate, how they are weighted, and (4) which operations are carried out by the system. All of this needs to be presented, however, “sous une forme intelligible et sous réserve de ne pas porter atteinte à des secrets protégés par la loi” (in an intelligible form and provided that no secrets protected by law are infringed). There is also a national security exception to the law. For more details see Edwards and Veale (2017).

  33.

    For details cf. von Lewinski (2018), paras 7 et seq.; Martini (2018), paras 16 et seq., 25 et seq.

  34.

    Cf. Martini and Nink (2017), pp. 3 and 7–8; Wachter et al. (2017), pp. 88, 92; Buchner (2018), para 16; Martini (2018), paras 16 et seq.; von Lewinski (2018), paras 16 et seq., 23 et seq., 26 et seq.

  35.

    For Article 12a DPD see CJEU, C-141/12 and C-372/12, paras 50 et seq. With regard to the German Data Protection Act, which contained (and still contains) a special section on credit scoring, the Bundesgerichtshof decided that brief statements on the design of credit scoring systems are sufficient and that system operators need only explain the abstract relationship between a high credit score and the probability of securing credit. Detailed information on the individual decision or on the system itself was not deemed necessary, cf. Bundesgerichtshof VI ZR 156/13 (28 January 2014), BGHZ 200, 38, paras 25 et seq. In the literature, there are numerous diverging views on exactly what information has to be disclosed in order to adequately inform about the logic involved. For an overview of the different positions see Wischmeyer (2018a), pp. 50 et seq.

  36.

    Cf. Hoffmann-Riem (2017), pp. 32–33. On the ‘risk of strategic countermeasures’ see Hermstrüwer, paras 65–69.

  37.

    See only Leese (2014), pp. 495 et seq.; Mittelstadt et al. (2016), p. 6.

  38.

    While the rule of law and the principle of democratic governance commit public actors to transparency and therefore limit administrative secrecy (see para 4), transparency requirements for private actors need to be justified in light of their fundamental rights. However, considering the public interests at stake and the risky nature of the new technology, the interests of private system operators will hardly ever prevail in toto. Moreover, the legislator also needs to protect the fundamental rights of those negatively affected by AI-based systems, which typically means that parliament must enact laws that guarantee effective control of the technology. Lawmakers nevertheless have considerable discretion in this regard. For certain companies which operate privatized public spaces (‘public fora’) or have otherwise assumed a state-like position of power, the horizontal effect of the fundamental rights of the data subjects will demand more robust transparency regulation.

  39.

    For a theory of ‘legal secrets’ see Scheppele (1988), Jestaedt (2001) and Wischmeyer (2018b).

  40.

    See Braun Binder, paras 12 et seq. See also Martini and Nink (2017), p. 10.

  41.

    This problem concerns all forms of transparency regulation, see Holznagel (2012), § 24 para 74; von Lewinski (2014), pp. 8 et seq.

  42.

    Wischmeyer (2018b), pp. 403–409.

  43.

    This has been discussed for cases where sensitive private proprietary technology is deployed for criminal justice or law enforcement purposes, cf. Roth (2017), Imwinkelried (2017) and Wexler (2018). For this reason, the police in North Rhine-Westphalia have developed a predictive policing system which does not use neural networks, but decision tree algorithms, cf. Knobloch (2018), p. 19.

  44.

    See, however, on the (potentially prohibitive) costs of expertise Tischbirek, para 41.

  45.

    Datta et al. (2017), pp. 71 et seq.; Tene and Polonetsky (2013), pp. 269–270.

  46.

    For a nuanced account of the strengths and weaknesses of access to information regulation see Fenster (2017).

  47.

    Ananny and Crawford (2018), p. 983.

  48.

    Ananny and Crawford (2018), p. 982.

  49.

    The purpose of Article 22 GDPR is frequently defined as preventing the degradation of ‘the individual to a mere object of a governmental act of processing without any regard for the personhood of the affected party or the individuality of the concrete case’ (Martini and Nink (2017), p. 3) (translation T.W.). Similarly, von Lewinski (2014), p. 16.

  50.

    See supra note 13. See also Doshi-Velez and Kortz (2017), p. 6: ‘[E]xplanation is distinct from transparency. Explanation does not require knowing the flow of bits through an AI system, no more than explanation from humans requires knowing the flow of signals through neurons.’

  51.

    Wachter et al. (2018), in particular, draw on the work of Lewis, especially Lewis (1973a, b).

  52.

    While there exists ‘considerable disagreement among philosophers about whether all explanations in science and in ordinary life are causal and also disagreement about what the distinction (if any) between causal and non-causal explanations consists in […], virtually everyone […] agrees that many scientific explanations cite information about causes’ (Woodward 2017). See also Doshi-Velez and Kortz (2017), p. 3.

  53.

    Cf. Russell et al. (2015); Datta et al. (2017), pp. 71 et seq.; Doshi-Velez and Kim (2017); Fong and Vedaldi (2018). Despite recent progress, research in this field is still in its infancy. In 2017, a DARPA project on Explainable AI was initiated, see www.darpa.mil/program/explainable-artificial-intelligence.

  54.

    Goodman and Flaxman (2016). For additional references see Wachter et al. (2017), pp. 76–77.

  55.

    On the narrow scope of the provision cf. supra note 33.

  56.

    For a detailed analysis of the legislative process see Wachter et al. (2017), p. 81; Wischmeyer (2018a) pp. 49–52.

  57.

    For example, scholars recently proposed a mechanism for establishing a relation of order between classifiers in a deep neural network used for image classification, a significant step towards offering a causal model for the technology: Palacio et al. (2018). Cf. also Montavon et al. (2018).

  58.

    Cf. Hermstrüwer, paras 70–74.

  59.

    Ribeiro et al. (2016), sec. 2: ‘if hundreds or thousands of features significantly contribute to a prediction, it is not reasonable to expect any user to comprehend why the prediction was made, even if individual weights can be inspected.’

  60.

    Wachter et al. (2018), p. 851.

  61.

    On this trade-off see Lakkaraju et al. (2013), sec. 3; Ribeiro et al. (2016), sec. 3.2. Wachter et al. (2018), p. 851, even speak of a ‘three-way trade-off between the quality of the approximation versus the ease of understanding the function and the size of the domain for which the approximation is valid.’
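
    To make this trade-off concrete, the following is a minimal, purely illustrative sketch in Python (using scikit-learn): a simple linear surrogate is fitted to a hypothetical black_box model around a single instance. The black_box function, the sampling radius and the kernel width are assumptions for illustration only, not part of any method cited above; widening the sampling radius enlarges the domain for which the approximation is valid, but typically lowers its fidelity, i.e. the quality of the approximation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box scorer standing in for an opaque model (illustrative only).
def black_box(X):
    return 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] ** 2)))

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0, -0.5])   # the instance whose prediction we want to explain

# Sample perturbations around x0 and query the black box; the sampling radius
# defines the domain for which the local approximation is meant to be valid.
X_local = x0 + rng.normal(scale=0.3, size=(500, 3))
y_local = black_box(X_local)

# Weight samples by proximity to x0, then fit an easy-to-understand linear surrogate.
weights = np.exp(-np.sum((X_local - x0) ** 2, axis=1) / 0.5)
surrogate = Ridge(alpha=1.0).fit(X_local, y_local, sample_weight=weights)

print("local feature weights:", surrogate.coef_)        # ease of understanding
print("fidelity (weighted R^2):",                       # quality of the approximation
      surrogate.score(X_local, y_local, sample_weight=weights))
```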

  62.

    Bundesverfassungsgericht 2 BvR 1444/00 (20 February 2001), BVerfGE 103, pp. 159–160.

  63.

    Luhmann (2017), p. 96.

  64.

    Wischmeyer (2015), pp. 957 et seq.

  65.

    Tutt (2017), p. 103. Cf. Lem (2013), pp. 98–99: ‘Every human being is thus an excellent example of a device that can be used without knowing its algorithm. Our own brain is one of the “devices” that is “closest to us” in the whole Universe: we have it in our heads. Yet even today, we still do not know how the brain works exactly. As demonstrated by the history of psychology, the examination of its mechanics via introspection is highly fallible and leads one astray, to some most fallacious hypotheses.’

  66.

    That humans can interact with each other even if they do not know exactly what causes the decisions of other persons may have an evolutionary component: Yudkowsky (2008), pp. 308 et seq.

  67.

    Citron and Pasquale (2014).

  68.

    Wachter et al. (2018). See also Doshi-Velez and Kortz (2017), p. 7. For philosophical foundations see Lewis (1973a, b) and Salmon (1994).

  69.

    Wachter et al. (2018), p. 843.

  70.

    Wachter et al. (2018), p. 881.

  71.

    Cf. Wachter et al. (2018), p. 883: ‘As a minimal form of explanation, counterfactuals are not appropriate in all scenarios. In particular, where it is important to understand system functionality, or the rationale of an automated decision, counterfactuals may be insufficient in themselves. Further, counterfactuals do not provide the statistical evidence needed to assess algorithms for fairness or racial bias.’

  72.

    Similarly Hermstrüwer, paras 45–48.

  73.

    Wachter et al. (2018), p. 851: ‘The downside to this is that individual counterfactuals may be overly restrictive. A single counterfactual may show how a decision is based on certain data that is both correct and unable to be altered by the data subject before future decisions, even if other data exist that could be amended for a favourable outcome. This problem could be resolved by offering multiple diverse counterfactual explanations to the data subject.’
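
    To illustrate what offering ‘multiple diverse counterfactual explanations’ could look like computationally, the following minimal Python sketch returns all single-feature changes of minimal size that flip a decision. The credit model, its features and the grid search are hypothetical assumptions for illustration only, not the method proposed by Wachter et al. (2018).

```python
import numpy as np

# Hypothetical credit model: approves (returns 1) if a weighted score crosses a threshold.
# The weights, features and threshold are illustrative assumptions only.
def model(x):                      # x = [income, debts, years_employed]
    return int(0.03 * x[0] - 0.4 * x[1] + 0.5 * x[2] > 1.0)

def diverse_counterfactuals(x, step=0.25, max_radius=20.0):
    """Return all single-feature changes of minimal size that flip the model's decision."""
    original = model(x)
    radius = step
    while radius <= max_radius:
        flips = []
        for i in range(len(x)):                 # try each feature separately ...
            for delta in (+radius, -radius):    # ... in both directions
                candidate = x.copy()
                candidate[i] += delta
                if model(candidate) != original:
                    flips.append(candidate)
        if flips:                               # smallest radius with at least one flip
            return flips
        radius += step
    return []

x = np.array([40.0, 3.0, 1.0])
print("decision:", model(x))
for cf in diverse_counterfactuals(x):
    print("counterfactual input:", cf, "-> decision:", model(cf))
```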

  74.

    (Causal) explanations and (semantic) reasons are not identical. However, both explanations and reason-giving require an institutional framework in order to be effective. Reason-giving requirements exist primarily for public authorities, but the functions of reason-giving as described by courts and scholars are applicable to private parties, too. On the following, see in more detail Kischel (2003), pp. 88 et seq.; Wischmeyer (2018a), pp. 54 et seq. For a comparative analysis cf. Saurer (2009), pp. 382–383.

  75.

    Bundesverwaltungsgericht 2 C 42.79 (7 May 1981), DVBl 1982, pp. 198–199.

  76.

    Stelkens (2018), § 39 VwVfG, paras 41, 43.

  77.

    Luhmann (1983), p. 215 (translation T.W.).

  78.

    For a comprehensive discussion of AI’s accountability problem, of which transparency is one dimension, cf. Busch (2018).

  79.

    This last aspect is particularly prominent in EU and US constitutional law, cf. Saurer (2009), pp. 365 and 385.

  80.

    Mittelstadt et al. (2016), p. 7.

  81.

    See supra note 34. For a similar appeal see Wachter et al. (2018), p. 881.

  82.

    Martini (2017), p. 1020; Busch (2018), pp. 58–59.

  83.

    Cf. supra notes 4 to 9.

  84.

    Reisman et al. (2018), p. 9.

  85.

    Cf. Bundesverfassungsgericht 1 BvR 256, 263, 586/08 ‘Vorratsdatenspeicherung’ (2 March 2010), BVerfGE 125, pp. 336–337.

  86.

    Cf. Busch (2018), pp. 59–60; Sachverständigenrat für Verbraucherfragen (2018), pp. 122, 162–163; Zweig (2019).

  87.

    Cf. supra note 38.

  88.

    According to some scholars, such a requirement was introduced in France through the Digital Republic Law (Loi n°2016-1321 pour une République numérique), which amended the definition of the administrative document in Article 300-2 of the Code des relations entre le public et l’administration by adding the words ‘codes sources’. For details see Jean and Kassem (2018), p. 15.

  89.

    Cf. Tutt (2017).

  90.

    Data controllers need to make the documented data available to individuals or supervisory authorities in ‘a structured, commonly used and machine-readable format’ (cf. Article 20 GDPR). For this requirement in a different context see Bundesverfassungsgericht 1 BvR 1215/07 ‘Antiterrordatei’ (24 April 2013), BVerfGE 133, p. 370 para 215.

  91.

    Cf. Kaushal and Nolan (2015); Scherer (2016), pp. 353 et seq.; Martini and Nink (2017), p. 12; Tutt (2017), pp. 83 et seq. On government knowledge in general see Hoffmann-Riem (2014), pp. 135 et seq.


Author information

Correspondence to Thomas Wischmeyer.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Wischmeyer, T. (2020). Artificial Intelligence and Transparency: Opening the Black Box. In: Wischmeyer, T., Rademacher, T. (eds) Regulating Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-32361-5_4


  • DOI: https://doi.org/10.1007/978-3-030-32361-5_4

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32360-8

  • Online ISBN: 978-3-030-32361-5

  • eBook Packages: Law and Criminology (R0)
