Abstract
The alleged opacity of AI has become a major political issue over the past few years. Opening the black box, so it is argued, is indispensable for identifying encroachments on user privacy, detecting biases and preventing other potential harms. What is less clear, however, is how the call for AI transparency can be translated into reasonable regulation. This Chapter argues that designing AI transparency regulation is less difficult than often assumed. Regulators benefit from the fact that the legal system has already gained considerable experience with the question of how to shed light on another class of partially opaque decision-making systems: human decisions. This experience provides lawyers with a realistic perspective on the functions of potential AI transparency legislation as well as with a set of legal instruments which can be employed to this end.
Notes
- 1.
Cf. only National Science and Technology Council Committee on Technology (2016), OECD Global Science Forum (2016), Asilomar Conference (2017), European Parliament (2017), Harhoff et al. (2018), Singapore Personal Data Protection Commission (2018), Agency for Digital Italy (2018), House of Lords Select Committee on Artificial Intelligence (2018), Villani (2018), European Commission (2018) and Datenethikkommission (2018).
- 2.
Bundesanstalt für Finanzdienstleistungsaufsicht (2018), pp. 144–145. See also Hermstrüwer, para 3; Hennemann, para 37.
- 3.
- 4.
The initial proposal (Int. 1696–2017) would have added the text cited above to Section 23-502 of the Administrative Code of the City of New York. However, the law that was finally passed only established a task force to study how city agencies currently use algorithms. For a detailed account of the legislative process see legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0.
- 5.
Konferenz der Informationsfreiheitsbeauftragten (2018), p. 4.
- 6.
Arbeitsgruppe “Digitaler Neustart” (2018), p. 7.
- 7.
COM/2018/238 final—2018/0112 (COD).
- 8.
Rundfunkkommission der Länder (2018), pp. 25–26.
- 9.
As of January 2018, investment service providers in Germany are subject to notification requirements if they engage in algorithmic trading within the meaning of Section 80(2) of the Wertpapierhandelsgesetz (WpHG – Securities Trading Act). This provision implements the EU Markets in Financial Instruments Directive II (MiFID II).
- 10.
- 11.
Burrell (2016), p. 1.
- 12.
- 13.
For references see Selbst and Barocas (2018), pp. 1089–1090.
- 14.
Article 29 Data Protection Working Party (2018), p. 14.
- 15.
Bundesverfassungsgericht 2 BvR 2134, 2159/92 ‘Maastricht’ (12 October 1993), BVerfGE 89, p. 185; 2 BvR 1877/97 and 50/98 ‘Euro’ (31 March 1998), BVerfGE 97, p. 369. On the values of transparency cf. Scherzberg (2000), pp. 291 et seq., 320 et seq., 336 et seq.; Gusy (2012), § 23 paras 18 et seq.; Scherzberg (2013), § 49 paras 13 et seq.
- 16.
Cf. CJEU C-92/11 ‘RWE Vertrieb AG v Verbraucherzentrale Nordrhein-Westfalen eV’ (21 March 2013), ECLI:EU:C:2013:180; CJEU C-26/13 ‘Árpád Kásler and Hajnalka Káslerné Rábai v OTP Jelzálogbank Zrt’ (30 April 2014), ECLI:EU:C:2014:282. See also Busch (2016). For a critical perspective on disclosure obligations in U.S. private law and in privacy law see Ben-Shahar and Schneider (2011) and Ben-Shahar and Chilton (2016).
- 17.
Merton (1968), pp. 71–72.
- 18.
- 19.
Sometimes, transparency regulation is identified with granting individual rights to information. However, such individual rights are only one element within a larger regulatory structure that governs the flow of information in society, see Part 4.
- 20.
- 21.
Outside the specific context of AI, error detection and avoidance has become the subject of sophisticated research activities and extensive technical standardization efforts, cf. the international standard ISO/IEC 25000 ‘Software engineering—Software product Quality Requirements and Evaluation (SQuaRE)’ (created by ISO/IEC JTC 1/SC 7 Software and systems engineering). The German version ‘DIN ISO/IEC 25000 Software-Engineering – Quality Criteria and Evaluation of Software Products (SQuaRE) – Guideline for SQuaRE’ is maintained by the NA 043 Information Technology and Applications Standards Committee (NIA) of the German Institute for Standardization.
- 22.
A much-cited paper in this context is Sandvig et al. (2014), pp. 1 et seq.
- 23.
Recently, Munich Re and the German Research Centre for Artificial Intelligence (DFKI) collaborated in auditing the technology of a startup which uses AI to detect fraudulent online payments. After the audit, which covered the company’s data, underlying algorithms, statistical models and IT infrastructure, had been completed successfully, the startup was offered insurance coverage.
- 24.
Tutt (2017), pp. 89–90, describes several instances in which forensic experts from IBM and Tesla were unable to reconstruct ex post the reasons for a malfunction of their systems.
- 25.
See supra note 2.
- 26.
- 27.
For a detailed discussion see Bundesanstalt für Finanzdienstleistungsaufsicht (2018), pp. 188 et seq.
- 28.
Burrell (2016), p. 2.
- 29.
Even many conventional algorithms are constantly updated, which makes an ex-post evaluation difficult, cf. Schwartz (2015).
- 30.
Cf. IBM (2018).
- 31.
- 32.
Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique. As further laid out in Article R311-3-1-2, created through Article 1 of the Décret n° 2017-330 du 14 mars 2017 relatif aux droits des personnes faisant l’objet de décisions individuelles prises sur le fondement d’un traitement algorithmique (available at www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000034194929&categorieLien=cid), the administration needs to provide information about (1) the degree to and the way in which algorithmic processing contributes to the decision-making, (2) which data are processed and where they come from, (3) according to which parameters the data are treated and, where appropriate, how they are weighted, and (4) which operations are carried out by the system. All of this needs to be presented, however, “sous une forme intelligible et sous réserve de ne pas porter atteinte à des secrets protégés par la loi” (“in an intelligible form and provided that no secrets protected by law are infringed”). There is also a national security exception to the law. For more details see Edwards and Veale (2017).
- 33.
- 34.
- 35.
For Article 12a DPD see CJEU, C-141/12 and C-372/12, paras 50 et seq. For the German Data Protection Act, which contained (and still contains) a special section on credit scoring, the Bundesgerichtshof decided that brief statements on the design of credit scoring systems are sufficient and that system operators need only explain the abstract relationship between a high credit score and the probability of securing credit. Detailed information on the individual decision or on the system was not deemed necessary, cf. Bundesgerichtshof VI ZR 156/13 (28.1.2014), BGHZ 200, 38, paras 25 et seq. In the literature, views diverge widely on exactly what information has to be disclosed in order to adequately inform about the logic involved. For an overview of the different positions see Wischmeyer (2018a), pp. 50 et seq.
- 36.
Cf. Hoffmann-Riem (2017), pp. 32–33. On the ‘risk of strategic countermeasures’ see Hermstrüwer, paras 65–69.
- 37.
- 38.
While the rule of law and the principle of democratic governance commit public actors to transparency and therefore limit administrative secrecy (see para 4), transparency requirements for private actors need to be justified in light of their fundamental rights. However, considering the public interests at stake and the risky nature of the new technology, the interests of private system operators will hardly ever prevail in toto. Moreover, the legislature must also protect the fundamental rights of those negatively affected by AI-based systems, which typically means that parliament must enact laws which guarantee effective control of the technology. However, lawmakers have considerable discretion in this regard. For certain companies which operate privatized public spaces (‘public fora’) or have otherwise assumed a quasi-governmental position of power, the horizontal effect of the fundamental rights of the data subjects will demand more robust transparency regulation.
- 39.
- 40.
See Braun Binder, paras 12 et seq. See also Martini and Nink (2017), p. 10.
- 41.
- 42.
Wischmeyer (2018b), pp. 403–409.
- 43.
This has been discussed for cases where sensitive private proprietary technology is deployed for criminal justice or law enforcement purposes, cf. Roth (2017), Imwinkelried (2017) and Wexler (2018). For this reason, the police in North Rhine-Westphalia have developed a predictive policing system which uses decision tree algorithms rather than neural networks, cf. Knobloch (2018), p. 19.
- 44.
See, however, on the (potentially prohibitive) costs of expertise Tischbirek, para 41.
- 45.
- 46.
For a nuanced account of the strengths and weaknesses of access to information regulation see Fenster (2017).
- 47.
Ananny and Crawford (2018), p. 983.
- 48.
Ananny and Crawford (2018), p. 982.
- 49.
The purpose of Article 22 GDPR is frequently defined as preventing the degradation of ‘the individual to a mere object of a governmental act of processing without any regard for the personhood of the affected party or the individuality of the concrete case’ (Martini and Nink (2017), p. 3; translation T.W.). Similarly, von Lewinski (2014), p. 16.
- 50.
See supra note 13. See also Doshi-Velez and Kortz (2017), p. 6: ‘[E]xplanation is distinct from transparency. Explanation does not require knowing the flow of bits through an AI system, no more than explanation from humans requires knowing the flow of signals through neurons.’
- 51.
- 52.
While there exists ‘considerable disagreement among philosophers about whether all explanations in science and in ordinary life are causal and also disagreement about what the distinction (if any) between causal and non-causal explanations consists in […], virtually everyone […] agrees that many scientific explanations cite information about causes’ (Woodward 2017). See also Doshi-Velez and Kortz (2017), p. 3.
- 53.
Cf. Russell et al. (2015); Datta et al. (2017), pp. 71 et seq.; Doshi-Velez and Kim (2017); Fong and Vedaldi (2018). Despite recent progress, research in this field is still in its infancy. In 2017, a DARPA project on Explainable AI was initiated, see www.darpa.mil/program/explainable-artificial-intelligence.
- 54.
- 55.
On the narrow scope of the provision cf. supra note 33.
- 56.
- 57.
For example, scholars recently proposed a mechanism to establish a relation of order between classifiers in a deep neural network used for image classification, thus making a significant step toward a causal model of the technology: Palacio et al. (2018). Cf. also Montavon et al. (2018).
- 58.
Cf. Hermstrüwer, paras 70–74.
- 59.
Ribeiro et al. (2016), sec. 2: ‘if hundreds or thousands of features significantly contribute to a prediction, it is not reasonable to expect any user to comprehend why the prediction was made, even if individual weights can be inspected.’
- 60.
Wachter et al. (2018), p. 851.
- 61.
On this trade-off see Lakkaraju et al. (2013), sec. 3; Ribeiro et al. (2016), sec. 3.2. Wachter et al. (2018), p. 851, even speak of a ‘three-way trade-off between the quality of the approximation versus the ease of understanding the function and the size of the domain for which the approximation is valid.’ A minimal code sketch of such an interpretable approximation follows these notes.
- 62.
Bundesverfassungsgericht 2 BvR 1444/00 (20 February 2001), BVerfGE 103, pp. 159–160.
- 63.
Luhmann (2017), p. 96.
- 64.
Wischmeyer (2015), pp. 957 et seq.
- 65.
Tutt (2017), p. 103. Cf. Lem (2013), pp. 98–99: ‘Every human being is thus an excellent example of a device that can be used without knowing its algorithm. Our own brain is one of the “devices” that is “closest to us” in the whole Universe: we have it in our heads. Yet even today, we still do not know how the brain works exactly. As demonstrated by the history of psychology, the examination of its mechanics via introspection is highly fallible and leads one astray, to some most fallacious hypotheses.’
- 66.
That humans can interact with each other even if they do not know exactly what is causing the decisions of other persons may have an evolutionary component: Yudkowsky (2008), pp. 308 et seq.
- 67.
Citron and Pasquale (2014).
- 68.
- 69.
Wachter et al. (2018), p. 843.
- 70.
Wachter et al. (2018), p. 881.
- 71.
Cf. Wachter et al. (2018), p. 883: ‘As a minimal form of explanation, counterfactuals are not appropriate in all scenarios. In particular, where it is important to understand system functionality, or the rationale of an automated decision, counterfactuals may be insufficient in themselves. Further, counterfactuals do not provide the statistical evidence needed to assess algorithms for fairness or racial bias.’
- 72.
Similarly Hermstrüwer, paras 45–48.
- 73.
Wachter et al. (2018), p. 851: ‘The downside to this is that individual counterfactuals may be overly restrictive. A single counterfactual may show how a decision is based on certain data that is both correct and unable to be altered by the data subject before future decisions, even if other data exist that could be amended for a favourable outcome. This problem could be resolved by offering multiple diverse counterfactual explanations to the data subject.’ A minimal code sketch of such a counterfactual search also follows these notes.
- 74.
(Causal) explanations and (semantic) reasons are not identical. However, both explanations and reason-giving require an institutional framework in order to be effective. Reason-giving requirements exist primarily for public authorities. However, the functions of reason-giving as described by courts and scholars are applicable to private parties, too. On the following in more detail Kischel (2003), pp. 88 et seq.; Wischmeyer (2018a), pp. 54 et seq. For a comparative analysis cf. Saurer (2009), pp. 382–383.
- 75.
Bundesverwaltungsgericht 2 C 42.79 (7 May 1981), DVBl 1982, pp. 198–199.
- 76.
Stelkens (2018), § 39 VwVfG, paras 41, 43.
- 77.
Luhmann (1983), p. 215 (translation T.W.).
- 78.
For a comprehensive discussion of AI’s accountability problem, of which transparency is one dimension, cf. Busch (2018).
- 79.
This last aspect is particularly prominent in EU and US constitutional law, cf. Saurer (2009), pp. 365 and 385.
- 80.
Mittelstadt et al. (2016), p. 7.
- 81.
See supra note 34. For a similar appeal see Wachter et al. (2018), p. 881.
- 82.
- 83.
Cf. supra notes 4 to 9.
- 84.
Reisman et al. (2018), p. 9.
- 85.
Cf. Bundesverfassungsgericht 1 BvR 256, 263, 586/08 ‘Vorratsdatenspeicherung’ (2 March 2010), BVerfGE 125, pp. 336–337.
- 86.
- 87.
Cf. supra note 38.
- 88.
According to some scholars, such a requirement was introduced in France through the Digital Republic Law (Loi n°2016-1321 pour une République numérique), which has amended the definition of the administrative document in Article L300-2 of the Code des relations entre le public et l’administration by the words ‘codes sources’. For details see Jean and Kassem (2018), p. 15.
- 89.
Cf. Tutt (2017).
- 90.
Data controllers need to make the documented data available to individuals or supervisory authorities in ‘a structured, commonly used and machine-readable format’ (cf. Article 20 GDPR). For this requirement in a different context see Bundesverfassungsgericht 1 BvR 1215/07 ‘Antiterrordatei’ (24 April 2013), BVerfGE 133, p. 370 para 215.
- 91.
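To illustrate the interpretable approximations discussed in notes 59 and 61, the following is a minimal sketch in the spirit of Ribeiro et al. (2016): a sparse linear surrogate is fitted to a black-box model’s behaviour around a single instance, so that only a few inspectable weights remain. The black_box function and all parameters are invented for illustration; this is not the authors’ implementation.

```python
# Hypothetical sketch of a local surrogate explanation in the spirit of
# Ribeiro et al. (2016): approximate a black box near one instance with a
# sparse linear model whose few weights a human can inspect.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model; returns a score in [0, 1].
    logits = 2 * X[:, 0] - 3 * X[:, 1] + 0.1 * X[:, 2:].sum(axis=1)
    return 1 / (1 + np.exp(-logits))

x = rng.normal(size=20)                            # instance to explain
Z = x + rng.normal(scale=0.3, size=(500, 20))      # perturbations near x
proximity = np.exp(-np.sum((Z - x) ** 2, axis=1))  # weight nearby samples more

# The Lasso penalty drives most weights to zero: the few survivors
# are the 'explanation' a human can actually read.
surrogate = Lasso(alpha=0.01)
surrogate.fit(Z, black_box(Z), sample_weight=proximity)

for i, w in enumerate(surrogate.coef_):
    if abs(w) > 1e-3:
        print(f"feature {i}: weight {w:+.3f}")
```

The sparsity penalty is what enforces the trade-off note 61 describes: a smaller alpha tracks the black box more faithfully but leaves more non-zero weights to read.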
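Similarly, the counterfactual explanations discussed in notes 69 to 73 can be sketched in a few lines. The sketch follows the general form of the optimisation in Wachter et al. (2018), minimising a prediction gap plus the distance to the original input, but the credit score function, all numbers and the fixed trade-off weight lam (in place of the paper’s maximisation over lambda) are invented for illustration.

```python
# Hypothetical sketch of a counterfactual search in the spirit of
# Wachter et al. (2018): find a nearby input x' that would have led to the
# desired outcome. The 'score' model and the loan example are invented.
import numpy as np

WEIGHTS = np.array([0.6, 0.3, 0.1])  # invented model: income, savings, age

def score(x):
    # Stand-in for the black-box decision model (approve if score >= 0.5).
    return float(WEIGHTS @ x)

def objective(xp, x, target, lam):
    # lam * (prediction gap)^2 + L1 distance to the original input.
    return lam * (score(xp) - target) ** 2 + np.abs(xp - x).sum()

def counterfactual(x, target=0.5, lam=10.0, steps=2000, lr=0.01, h=1e-4):
    # Crude finite-difference descent; any optimiser would do for the sketch.
    xp = x.astype(float)
    for _ in range(steps):
        grad = np.zeros_like(xp)
        for i in range(xp.size):
            e = np.zeros_like(xp)
            e[i] = h
            grad[i] = (objective(xp + e, x, target, lam)
                       - objective(xp - e, x, target, lam)) / (2 * h)
        xp = xp - lr * grad
    return xp

x = np.array([0.2, 0.1, 0.4])  # rejected applicant: score(x) = 0.19
xp = counterfactual(x)
print(f"score before: {score(x):.2f}, after: {score(xp):.2f}")
print("required changes:", np.round(xp - x, 3))
```

The printed difference xp − x is the counterfactual statement: ‘had these features taken these values, the decision would have been favourable.’ Offering multiple diverse counterfactuals, as note 73 suggests, would amount to restarting the search from different initialisations or with different distance weightings.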
References
Agency for Digital Italy (2018) White Paper on artificial intelligence at the service of citizens. www.agid.gov.it/en/agenzia/stampa-e-comunicazione/notizie/2018/04/19/english-version-white-paper-artificial-intelligence-service-citizen-its-now-online
Ananny M, Crawford K (2018) Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc 20(3):973–989
Arbeitsgruppe “Digitaler Neustart” (2018) Zwischenbericht der Arbeitsgruppe “Digitaler Neustart” zur Frühjahrskonferenz der Justizministerinnen und Justizminister am 6. und 7. Juni 2018 in Eisenach. www.justiz.nrw.de/JM/schwerpunkte/digitaler_neustart/zt_fortsetzung_arbeitsgruppe_teil_2/2018-04-23-Zwischenbericht-F-Jumiko-2018%2D%2D-final.pdf
Article 29 Data Protection Working Party (2018) Guidelines on automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01). ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053
Asilomar Conference (2017) Asilomar AI principles. futureoflife.org/ai-principles
Ben-Shahar O, Chilton A (2016) Simplification of privacy disclosures: an experimental test. J Legal Stud 45:S41–S67
Ben-Shahar O, Schneider C (2011) The failure of mandated disclosure. Univ Pa Law Rev 159:647–749
Buchner B (2018) Artikel 22 DSGVO. In: Kühling J, Buchner B (eds) DS-GVO. BDSG, 2nd edn. C.H. Beck, München
Bundesanstalt für Finanzdienstleistungsaufsicht (2018) Big Data trifft auf künstliche Intelligenz. Herausforderungen und Implikationen für Aufsicht und Regulierung von Finanzdienstleistungen. www.bafin.de/SharedDocs/Downloads/DE/dl_bdai_studie.html
Burrell J (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc 3:205395171562251. https://doi.org/10.1177/2053951715622512
Busch C (2016) The future of pre-contractual information duties: from behavioural insights to big data. In: Twigg-Flesner C (ed) Research handbook on EU consumer and contract law. Edward Elgar, Cheltenham, pp 221–240
Busch C (2018) Algorithmic Accountability. Gutachten im Auftrag von ABIDA, 2018. http://www.abida.de/sites/default/files/ABIDA%20Gutachten%20Algorithmic%20Accountability.pdf
Citron D, Pasquale F (2014) The scored society: due process for automated predictions. Washington Law Rev 89:1–33
Costas J, Grey C (2016) Secrecy at work. The hidden architecture of organizational life. Stanford Business Books, Stanford
Crawford K (2016) Can an algorithm be agonistic? Ten scenes from life in calculated publics. Sci Technol Hum Values 41(1):77–92
Datenethikkommission (2018) Empfehlungen der Datenethikkommission für die Strategie Künstliche Intelligenz der Bundesregierung. www.bmi.bund.de/SharedDocs/downloads/DE/veroeffentlichungen/2018/empfehlungen-datenethikkommission.pdf?__blob=publicationFile&v=1
Datta A, Sen S, Zick Y (2017) Algorithmic transparency via quantitative input influence. In: Cerquitelli T, Quercia D, Pasquale F (eds) Transparent data mining for big and small data. Springer, Cham, pp 71–94
Diakopoulos N (2016) Accountability in algorithmic decision making. Commun ACM 59(2):56–62
Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. Working Paper, March 2, 2017
Doshi-Velez F, Kortz M (2017) Accountability of AI under the law: the role of explanation. Working Paper, November 21, 2017
Edwards L, Veale M (2017) Slave to the algorithm? Why a ‘Right to an Explanation’ is probably not the remedy you are looking for. Duke Law Technol Rev 16(1):18–84
European Commission (2018) Artificial intelligence for Europe. COM(2018) 237 final
European Parliament (2017) Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. 2015/2103(INL)
Fassbender B (2006) Wissen als Grundlage staatlichen Handelns. In: Isensee J, Kirchhof P (eds) Handbuch des Staatsrechts, vol IV, 3rd edn. C.F. Müller, Heidelberg, § 76
Fenster M (2017) The transparency fix. Secrets, leaks, and uncontrollable Government information. Stanford University Press, Stanford
Florini A (2007) The right to know: transparency for an open world. Columbia University Press, New York
Fong R, Vedaldi A (2018) Interpretable explanations of Black Boxes by meaningful perturbation, last revised 10 Jan 2018. arxiv.org/abs/1704.03296
Goodman B, Flaxman S (2016) European Union regulations on algorithmic decision-making and a “right to explanation”. arxiv.org/pdf/1606.08813.pdf
Gusy C (2012) Informationsbeziehungen zwischen Staat und Bürger. In: Hoffmann-Riem W, Schmidt-Aßmann E, Voßkuhle A (eds) Grundlagen des Verwaltungsrechts, vol 2, 2nd edn. C.H. Beck, München, § 23
Harhoff D, Heumann S, Jentzsch N, Lorenz P (2018) Eckpunkte einer nationalen Strategie für Künstliche Intelligenz. www.stiftung-nv.de/de/publikation/eckpunkte-einer-nationalen-strategie-fuer-kuenstliche-intelligenz
Heald D (2006) Varieties of transparency. Proc Br Acad 135:25–43
Hildebrandt M (2011) Who needs stories if you can get the data? Philos Technol 24:371–390
Hoffmann-Riem W (2014) Regulierungswissen in der Regulierung. In: Bora A, Reinhardt C, Henkel A (eds) Wissensregulierung und Regulierungswissen. Velbrück Wissenschaft, Weilerswist, pp 135–156
Hoffmann-Riem W (2017) Verhaltenssteuerung durch Algorithmen – Eine Herausforderung für das Recht. Archiv des öffentlichen Rechts 142:1–42
Holznagel B (2012) Informationsbeziehungen in und zwischen Behörden. In: Hoffmann-Riem W, Schmidt-Aßmann E, Voßkuhle A (eds) Grundlagen des Verwaltungsrechts, vol 2, 2nd edn. C.H. Beck, München, § 24
Hood C, Heald D (2006) Transparency. The key to better Governance? Oxford University Press, Oxford
House of Lords Select Committee on Artificial Intelligence (2018) AI in the UK – Ready, willing and able? publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
IBM (2018) Continuous relevancy training. console.bluemix.net/docs/services/discovery/continuous-training.html#crt
Imwinkelried E (2017) Computer source code. DePaul Law Rev 66:97–132
Jean B, Kassem L (2018) L’ouverture des données dans les Universités. openaccess.parisnanterre.fr/medias/fichier/e-tude-open-data-inno3_1519834765367-pdf
Jestaedt M (2001) Das Geheimnis im Staat der Öffentlichkeit. Was darf der Verfassungsstaat verbergen? Archiv des öffentlichen Rechts 126:204–243
Kaushal M, Nolan S (2015) Understanding artificial intelligence. Brookings Institute, Washington, D.C. www.brookings.edu/blogs/techtank/posts/2015/04/14-understanding-artificial-intelligence
Kaye D (2018) Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, 29 August 2018. United Nations A/73/348
Kischel U (2003) Die Begründung. Mohr Siebeck, Tübingen
Knobloch T (2018) Vor die Lage kommen: Predictive Policing in Deutschland, Stiftung Neue Verantwortung. www.stiftung-nv.de/sites/default/files/predictive.policing.pdf (19 Jan 2019)
Konferenz der Informationsfreiheitsbeauftragten (2018) Positionspapier. www.datenschutzzentrum.de/uploads/informationsfreiheit/2018_Positionspapier-Transparenz-von-Algorithmen.pdf
Lakkaraju H, Caruana R, Kamar E, Leskovec J (2013) Interpretable & explorable approximations of black box models. arxiv.org/pdf/1707.01154.pdf
Leese M (2014) The new profiling: algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union. Secur Dialogue 45(5):494–511
Lem S (2013) Summa technologiae. University of Minnesota Press, Minneapolis
Lewis D (1973a) Counterfactuals. Harvard University Press, Cambridge
Lewis D (1973b) Causation. J Philos 70:556–567
Luhmann N (1983) Legitimation durch Verfahren. Suhrkamp, Frankfurt am Main
Luhmann N (2017) Die Kontrolle von Intransparenz. Suhrkamp, Berlin
Martini M (2017) Algorithmen als Herausforderung für die Rechtsordnung. JuristenZeitung 72:1017–1025
Martini M (2018) Artikel 22 DSGVO. In: Paal B, Pauly D (eds) Datenschutz-Grundverordnung Bundesdatenschutzgesetz, 2nd edn. C.H. Beck, München
Martini M, Nink D (2017) Wenn Maschinen entscheiden… – vollautomatisierte Verwaltungsverfahren und der Persönlichkeitsschutz. Neue Zeitschrift für Verwaltungsrecht Extra 36:1–14
Mayer-Schönberger V, Cukier K (2013) Big data. Houghton Mifflin Harcourt, Boston
Merton R (1968) Social theory and social structure. Macmillan, New York
Mittelstadt B, Allo P, Taddeo M, Wachter S, Floridi L (2016) The ethics of algorithms. Big Data Soc 3(2):1–21
Montavon G, Samek W, Müller K (2018) Methods for interpreting and understanding deep neural networks. Digital Signal Process 73:1–15
National Science and Technology Council Committee on Technology (2016) Preparing for the future of artificial intelligence. obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
Neyland D (2016) Bearing accountable witness to the ethical algorithmic system. Sci Technol Hum Values 41(1):50–76
OECD Global Science Forum (2016) Research ethics and new forms of data for social and economic research. www.oecd.org/sti/inno/globalscienceforumreports.htm
Palacio S, Folz J, Hees J, Raue F, Borth D, Dengel A (2018) What do deep networks like to see? arxiv.org/abs/1803.08337
Pasquale F (2015) The Black Box Society: the secret algorithms that control money and information. Harvard University Press, Cambridge
Reisman D, Schultz J, Crawford K, Whittaker M (2018) Algorithmic impact assessments: a practical framework for public agency accountability. ainowinstitute.org/aiareport2018.pdf
Ribeiro M, Singh S, Guestrin C (2016) “Why Should I Trust You?” Explaining the predictions of any classifier. arxiv.org/pdf/1602.04938.pdf
Roth A (2017) Machine testimony. Yale Law J 126:1972–2259
Rundfunkkommission der Länder (2018) Diskussionsentwurf zu den Bereichen Rundfunkbegriff, Plattformregulierung und Intermediäre. www.rlp.de/fileadmin/rlp-stk/pdf-Dateien/Medienpolitik/04_MStV_Online_2018_Fristverlaengerung.pdf
Russell S, Dewey D, Tegmark M (2015) Research priorities for robust and beneficial artificial intelligence. arxiv.org/abs/1602.03506
Sachverständigenrat für Verbraucherfragen (2018) Technische und rechtliche Betrachtungen algorithmischer Entscheidungsverfahren. Gutachten der Fachgruppe Rechtsinformatik der Gesellschaft für Informatik e.V. http://www.svr-verbraucherfragen.de/wp-content/uploads/GI_Studie_Algorithmenregulierung.pdf
Salmon W (1994) Causality without counterfactuals. Philos Sci 61:297–312
Sandvig C, Hamilton K, Karahalios K, Langbort C (2014) Auditing algorithms: research methods for detecting discrimination on internet platforms. www.personal.umich.edu/~csandvig/research/Auditing%20Algorithms%20%2D%2D%20Sandvig%20%2D%2D%20ICA%202014%20Data%20and%20Discrimination%20Preconference.pdf
Saurer J (2009) Die Begründung im deutschen, europäischen und US-amerikanischen Verwaltungsverfahrensrecht. Verwaltungsarchiv 100:364–388
Scheppele K (1988) Legal secrets. University of Chicago Press, Chicago
Scherer M (2016) Regulating artificial intelligence systems. Harv J Law Technol 29:353–400
Scherzberg A (2000) Die Öffentlichkeit der Verwaltung. Nomos, Baden-Baden
Scherzberg A (2013) Öffentlichkeitskontrolle. In: Hoffmann-Riem W, Schmidt-Aßmann E, Voßkuhle A (eds) Grundlagen des Verwaltungsrechts, vol 3, 2nd edn. C.H. Beck, München, § 49
Schwartz B (2015) Google: we make thousands of updates to search algorithms each year. www.seroundtable.com/google-updates-thousands-20403.html
Selbst A, Barocas S (2018) The intuitive appeal of explainable machines. Fordham Law Rev 87:1085–1139
Singapore Personal Data Protection Commission (2018) Discussion paper on artificial intelligence and personal data. www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/Discussion-Paper-on-AI-and-PD%2D%2D-050618.pdf
Stelkens U (2018) § 39 VwVfG. In: Stelkens P, Bonk H, Sachs M (eds) Verwaltungsverfahrensgesetz, 9th edn. C.H. Beck, München
Tene O, Polonetsky J (2013) Big data for all: privacy and user control in the age of analytics. Northwest J Technol Intellect Prop 11:239–273
Tsoukas H (1997) The tyranny of light. The temptations and paradoxes of the information society. Futures 29:827–843
Tutt A (2017) An FDA for algorithms. Adm Law Rev 69:83–123
van Otterlo M (2013) A machine learning view on profiling. In: Hildebrandt M, de Vries K (eds) Privacy, due process and the computational turn. Routledge, Abingdon-on-Thames, pp 41–64
Villani C (2018) For a meaningful artificial intelligence – towards a French and European Strategy. www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
von Lewinski K (2014) Überwachung, Datenschutz und die Zukunft des Informationsrechts. In: Telemedicus (ed) Überwachung und Recht. epubli GmbH, Berlin, pp 1–30
von Lewinski K (2018) Artikel 22 DSGVO. In: Wolff H, Brink S (eds) Beck’scher Online-Kommentar Datenschutzrecht. C.H. Beck, München
Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int Data Priv Law 7:76–99
Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the Black Box: automated decisions and the GDPR. Harv J Law Technol 31:841–887
Wexler R (2018) Life, liberty, and trade secrets: intellectual property in the criminal justice system. Stanf Law Rev 70:1343–1429
Wischmeyer T (2015) Der »Wille des Gesetzgebers«. Zur Rolle der Gesetzesmaterialien in der Rechtsanwendung. JuristenZeitung 70:957–966
Wischmeyer T (2018a) Regulierung intelligenter Systeme. Archiv des öffentlichen Rechts 143:1–66
Wischmeyer T (2018b) Formen und Funktionen des exekutiven Geheimnisschutzes. Die Verwaltung 51:393–426
Woodward J (2017) Scientific explanation. In: Zalta E (ed) The Stanford encyclopedia of philosophy. Stanford University, Stanford. plato.stanford.edu/archives/fall2017/entries/scientific-explanation
Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. In: Bostrom N, Ćirkovic M (eds) Global catastrophic risks. Oxford University Press, New York, pp 308–345
Zarsky T (2013) Transparent Predictions. Univ Ill Law Rev 4:1503–1570
Zweig K (2016) 2. Arbeitspapier: Überprüfbarkeit von Algorithmen. algorithmwatch.org/de/zweites-arbeitspapier-ueberpruefbarkeit-algorithmen
Zweig K (2019) Algorithmische Entscheidungen: Transparenz und Kontrolle, Analysen & Argumente, Digitale Gesellschaft, Januar 2019. https://www.kas.de/c/document_library/get_file?uuid=533ef913-e567-987d-54c3-1906395cdb81&groupId=252038