Artificial Intelligence and Administrative Decisions Under Uncertainty

Abstract

How should artificial intelligence guide administrative decisions under risk and uncertainty? I argue that artificial intelligence, specifically machine learning, lifts the veil on many of the biases and cognitive errors ingrained in administrative decisions. Machine learning has the potential to make administrative agencies smarter, fairer and more effective. However, this potential can only be exploited if administrative law addresses the implicit normative choices made in the design of machine learning algorithms. These choices pertain to the generalizability of machine-based outcomes, counterfactual reasoning, error weighting, the proportionality principle, the risk of gaming and decisions under complex constraints.

Notes

  1.

    I would like to thank Christoph Engel, Timo Rademacher and Thomas Wischmeyer for valuable comments on an earlier draft of this chapter.

  2.

    Simon (1997), pp. 99–100; for an example, see Joh (2017), pp. 290 et seq.

  3.

    While risk usually refers to situations where the probability of an outcome is known, uncertainty is usually assumed when probabilities cannot be quantified. See Knight (1921) and Vermeule (2015).

  4.

    Also see Rademacher, para 31 and Wischmeyer, para 6. For an account of these concerns, see Pasquale (2015); Citron and Pasquale (2014); Hogan-Doran (2017), pp. 32–39.

  5.

    Simon (1955, 1997); for a recent account of biases among judges, see Spamann and Klöhn (2016).

  6.

    For a similar approach, see Lehr and Ohm (2017); see also Rademacher, paras 36–38.

  7.

    Dworkin (1965), p. 682. A note of precision is warranted: the use of machine learning algorithms by administrative agencies will of course not entail a formal modification of existing legal rules. Rather, it will alter the factual classifications and predictions required to apply existing legal rules. To the extent that machine learning reduces factual errors in applying the law, the law itself is likely to become somewhat more predictable as well.

  8.

    For an analysis of the virtues of legal uncertainty, see Baker et al. (2004).

  9.

    For an analysis in the context of common law, see Sunstein (2001).

  10.

    For an overview, see Athey and Imbens (2017), pp. 22–27.

  11.

    Also see Buchanan and Headrick (1970), pp. 47 et seq.

  12.

    See § 24(1) VwVfG—Verwaltungsverfahrensgesetz.

  13.

    See § 24(2) VwVfG—Verwaltungsverfahrensgesetz.

  14.

    Note that machine learning algorithms also quantify the confidence one may have in the prediction or classification, which mitigates the concerns raised here.
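
    A minimal sketch of what such a confidence measure looks like in practice (assuming Python with scikit-learn; the data are simulated for illustration only):

    ```python
    # Most classifiers report a probability for each class,
    # not just the predicted class label itself.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = LogisticRegression().fit(X, y)
    print(clf.predict(X[:1]))        # predicted class label
    print(clf.predict_proba(X[:1]))  # probability attached to each class
    ```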

  15.

    However, there is a grey area: Consider the likely case that machine learning yields better predictions for some groups than for others (e.g. a better prediction of statutory non-compliance). In that case, personalized investigations will impose higher burdens on the group for which a better prediction is available, since administrative agencies will be more likely to target that group and apply the respective administrative rules to it even though these rules may be formally abstract and general. Abstract and general legal rules can therefore have similar effects as formal personalized law if the machine-based predictions entail a different (e.g. more stringent) application of the law for groups about which more is known (de facto personalized law). For an overview of personalized law approaches, see Pasquale (2018), pp. 7–12.

  16.

    See Coglianese and Lehr (2017), pp. 1160 et seq.; Cuéllar (2016), pp. 10 et seq.

  17.

    While the rule is specific to German administrative law, similar ideas can be found in US law and in cost-benefit analysis applied to law. For the foundations of expected utility theory, see von Neumann and Morgenstern (1944).

  18.

    US law is somewhat different in that respect, see Sunstein (2018), pp. 3 et seq.

  19.

    For an informal application of probability theory to police law, see Poscher (2008), pp. 352 et seq.

  20.

    See also Buchholtz, paras 13 et seq.

  21.

    § 35(1) GewO—Gewerbeordnung.

  22.

    Marcks (2018), paras 35–62. Similar rules can be found in German restaurant law, see §§ 15(1), (2), 4(1) Nr. 1 GastG—Gaststättengesetz. For an overview, see Ehlers (2012), paras 21 et seq.

  23.

    If machine-based proxies are better at predicting the reliability of business persons than proxies based on human judgement, there is no obvious reason for sticking to the latter. In fact, business persons who are disadvantaged by human-made proxies may try to invoke equal protection rights and argue that sticking to human-made proxies constitutes an unjustified discrimination. All this depends on the interpretation of what ‘reliability’ exactly means under administrative law (a dummy for existing human-made proxies or a concept that is open to new interpretations).

  24.

    § 39 VwVfG; for an analysis, see Coglianese and Lehr (2017), pp. 1205 et seq.; Wischmeyer (2018), pp. 56–59.

  25.

    See Wischmeyer, paras 9 et seq.

  26.

    Cuéllar (2016); for a computer science perspective, see Parkes and Wellman (2015).

  27.

    § 35a VwVfG—Verwaltungsverfahrensgesetz, see Djeffal, paras 20 et seq.

  28.

    Alarie et al. (2017); Alarie et al. (2018), pp. 117–124.

  29.

    Coglianese and Lehr (2017), pp. 1177–1184; Cuéllar (2016). A similar doctrine is known as Wesentlichkeitslehre in German constitutional law; see also Rademacher, paras 14, 18.

  30.

    It is difficult to interpret the results of unsupervised machine learning without any parameter for normative weight. This problem is particularly acute in the analysis of legal texts, see Talley (2018). Most legal applications are based on supervised machine learning, see Lehr and Ohm (2017), p. 676.

  31.

    Technically, the output variable is the dependent variable, while the input variable is the independent variable.

  32.

    Berk (2017), pp. 159 et seq.

  33.

    For a related argument, see Barocas and Selbst (2016), pp. 677 et seq.

  34.

    Lehr and Ohm (2017), pp. 681–683; Joh (2017), pp. 290 et seq.

  35.

    For a formal description, see Appendix A.1.

  36.

    For a formal description, see Appendix A.2.

  37.

    For a formal description, see Appendix A.3. Also see Ramasubramanian and Singh (2017), p. 489.

  38.

    For a formal description, see Appendix A.4. Also see Ramasubramanian and Singh (2017), p. 489.

  39.

    Domingos (2012), p. 81.

  40.

    Note, however, that we should be careful with general statements about the properties of machine learning algorithms. Some machine learning algorithms have low bias, but high variance (decision trees, k-nearest neighbors, support vector machines). Other models used in machine learning generate outcomes with high bias, but low variance (linear regression, logistic regression, linear discriminant analysis).
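
    The contrast can be made concrete with a small simulation (a sketch assuming Python with numpy and scikit-learn; the data-generating process is invented for illustration): retraining both model types on fresh samples from the same nonlinear process shows the tree's higher variance and the linear model's higher bias.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    x_test = np.array([[0.5]])
    tree_preds, ols_preds = [], []

    for _ in range(200):  # retrain on 200 fresh training sets
        X = rng.uniform(-1, 1, size=(100, 1))
        y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, size=100)
        tree_preds.append(DecisionTreeRegressor().fit(X, y).predict(x_test)[0])
        ols_preds.append(LinearRegression().fit(X, y).predict(x_test)[0])

    truth = np.sin(3 * 0.5)
    for name, p in [("tree", np.array(tree_preds)), ("ols", np.array(ols_preds))]:
        print(name, "bias:", round(p.mean() - truth, 3), "variance:", round(p.var(), 3))
    ```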

  41.

    Athey (2018), p. 4; Ramasubramanian and Singh (2017), pp. 488–492; Domingos (2012), pp. 80–81; Lehr and Ohm (2017), p. 697.

  42.

    Camerer (2018), p. 18, states more generally that ‘people do not like to explicitly throw away information.’

  43.

    See Tischbirek.

  44.

    For a similar argument, see Lehr and Ohm (2017), p. 714.

  45.

    These methods include: cross-validation, see Berk (2017), pp. 33 et seq.; pruning in classification and regression trees (CART), see Berk (2017), pp. 157 et seq.; and Random Forests, a method that relies on bagging, see Lehr and Ohm (2017), pp. 699–701. The latter technique builds on bootstrapping, a resampling procedure also used in more conventional econometric analyses.
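
    A minimal sketch of two of these devices (assuming Python with scikit-learn; the data are simulated for illustration): five-fold cross-validation of a Random Forest, whose bagging step averages trees grown on bootstrap resamples of the training data.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)  # bagged trees
    scores = cross_val_score(forest, X, y, cv=5)  # accuracy on 5 held-out folds
    print(np.round(scores, 3), round(scores.mean(), 3))
    ```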

  46.

    Berk (2017), pp. 205 et seq.

  47.

    Lehr and Ohm (2017), pp. 693–694; Berk (2017), pp. 187 et seq., 195–196.

  48.

    Alpaydin (2014), pp. 109 et seq.; in the legal context, see Barocas and Selbst (2016), pp. 688–692.

  49.

    Lehr and Ohm (2017), pp. 700–701.

  50.

    Thanks to Krishna Gummadi for his insights on the definition of objective functions. Hildebrandt (2018), p. 30, argues that lawyers should ‘speak law to the power of statistics.’

  51.

    For a related argument, see Lehr and Ohm (2017), p. 675.

  52.

    Kleinberg et al. (2018).

  53.

    See Kleinberg et al. (2018), pp. 255 et seq.; for a similar method, see Amaranto et al. (2018).

  54.

    Pearl and Mackenzie (2018), pp. 362 et seq.; Bottou et al. (2013).

  55.

    Cowgill and Tucker (2017), p. 2.

  56.

    § 35(1) GewO—Gewerbeordnung.

  57.

    Selbst and Barocas (2018), pp. 1099 et seq.; Doshi-Velez and Kim (2017).

  58.

    Wachter et al. (2017, 2018); for a related approach, see Doshi-Velez and Kortz (2017), pp. 6–9.

  59.

    Doshi-Velez and Kortz (2017), p. 7.

  60.

    This is also acknowledged by Wachter et al. (2018), p. 845.

  61.

    For a related argument, see Cowgill and Tucker (2017), p. 2.

  62.

    Cowgill (2017).

  63.

    For a related argument, see Alarie et al. (2017).

  64.

    For an example, see Talley (2018), pp. 198–199.

  65.

    Athey (2018), p. 9.

  66.

    Berk (2017), pp. 147 et seq., pp. 274 et seq.

  67.

    See Witten et al. (2016), pp. 179–183; see also Lehr and Ohm (2017), p. 692.

  68.

    For an application to predictions of domestic violence, see Berk et al. (2016), pp. 103–104.

  69.

    Article 3 GG—Grundgesetz; Article 21 EU Charter of Fundamental Rights.

  70.

    For a discussion, see Petersen (2013). Note that necessity can be seen as a version of Pareto efficiency: the administration is not authorized to pick a measure that makes the population worse off than another, equally effective measure.

  71.

    § 22(1), (2) GastG—Gaststättengesetz.

  72.

    Kang et al. (2013) use Yelp reviews for Seattle restaurants over the period from 2006 to 2013.

  73.

    For a discussion of the problem, see Athey (2017), p. 484.

  74.

    Ascarza (2018).

  75.

    For a similar conclusion in a different context, see Ascarza (2018), p. 2.

  76.

    Consider a small group of high-risk persons (e.g. engaging in grand corruption) who are insensitive to an intervention and a large group of low-risk persons (e.g. engaging in petty corruption) who are sensitive to an intervention. Is it proportionate to intervene against the latter only if the purpose of the intervention is to reduce the total amount of risk (e.g. the social costs of corruption)? To the best of my knowledge, this problem has not been analyzed systematically in either cost-benefit analysis or public law doctrine.

  77.

    For a related argument, see Ho (2017), p. 32; Joh (2017), pp. 290 et seq.

  78.

    The difference-in-differences approach compares the average change in an output variable for a treated group with the average change in that variable for an untreated group over the same period.
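
    A minimal numerical sketch (Python; all group means are hypothetical):

    ```python
    # Difference-in-differences: the change in the treated group's
    # average output minus the change in the untreated group's.
    treated_before, treated_after = 10.0, 15.0   # hypothetical means
    control_before, control_after = 10.0, 12.0   # hypothetical means

    did = (treated_after - treated_before) - (control_after - control_before)
    print(did)  # 3.0: the estimated effect net of the common trend
    ```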

  79.

    Blake et al. (2015); for a discussion of the diff-in-diff approach in the legal context, see Spamann (2015), pp. 140–141.

  80.

    This approach is used to identify a causal relationship between an input variable and an output variable when random assignment—a controlled experiment—is not possible. The instrumental variable has an effect on the input variable but no independent effect on the output variable.
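
    A minimal sketch of two-stage least squares, the standard implementation of this idea (assuming Python with numpy; the data-generating process is invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    u = rng.normal(size=n)                       # unobserved confounder
    z = rng.normal(size=n)                       # instrument: moves x, not y directly
    x = 0.8 * z + u + rng.normal(size=n)         # input variable
    y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x: 2.0

    x_hat = np.polyval(np.polyfit(z, x, 1), z)   # stage 1: project x on z
    beta_iv = np.polyfit(x_hat, y, 1)[0]         # stage 2: regress y on fitted x
    beta_ols = np.polyfit(x, y, 1)[0]            # naive OLS, biased by u
    print(round(beta_iv, 2), round(beta_ols, 2)) # ~2.0 vs ~3.1
    ```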

  81.

    Angrist and Pischke (2009); Spamann (2015), p. 142; for further analysis, see Athey and Imbens (2017), pp. 14–15.

  82.

    Tramèr et al. (2016).

  83.

    Article 19(4) GG – Grundgesetz; Article 47 EU Charter of Fundamental Rights.

  84.

    For a related argument, see Lodge and Mennicken (2017), pp. 4–5.

  85.

    Athey et al. (2002).

  86.

    Of course, gaming the system in that case requires collusion, which is rather unlikely when the market is thick or when antitrust law is effectively enforced. For an account of legal remedies against collusion in procurement auctions, see Cerrone et al. (2018).

  87.

    For an introduction to the P versus NP problem, see Fortnow (2009).

  88.

    NP stands for nondeterministic polynomial time.

  89.

    Milgrom and Tadelis (2019); Milgrom (2017), pp. 26 et seq.; Leyton-Brown et al. (2017).

  90.

    Milgrom (2017), pp. 33–37.

  91.

    For an overview of §§ 55(10), 61(4) TKG—Telekommunikationsgesetz, see Eifert (2012), paras 113 et seq.

  92.

    Bansak et al. (2018).

  93.

    Balcan et al. (2005).

  94.

    For such an approach, see Feng et al. (2018).

References

  • Alarie B, Niblett A, Yoon A (2017) Regulation by machine. J Mach Learn Res W&CP:1–7

  • Alarie B, Niblett A, Yoon A (2018) How artificial intelligence will affect the practice of law. Univ Toronto Law J 68(Supplement 1):106–124

  • Alpaydin E (2014) Introduction to machine learning, 3rd edn. MIT Press, Cambridge

  • Amaranto D et al (2018) Algorithms as prosecutors: lowering rearrest rates without disparate impacts and identifying defendant characteristics ‘Noisy’ to human decision-makers. Working Paper, January 28, 2018

  • Angrist J, Pischke J-S (2009) Mostly harmless econometrics. Princeton University Press, Princeton

  • Ascarza E (2018) Retention futility: targeting high-risk customers might be ineffective. J Market Res 55:80–98

  • Athey S (2017) Beyond prediction: using big data for policy problems. Science 355:483–485

  • Athey S (2018) The impact of machine learning on economics. Working Paper, January 2018

  • Athey S, Imbens GW (2017) The state of applied econometrics: causality and policy evaluation. J Econ Perspect 31:3–32

  • Athey S, Cramton P, Ingraham A (2002) Auction-based timber pricing and complementary market reforms in British Columbia. Manuscript, 5 March 2002

  • Baker T, Harel A, Kugler T (2004) The virtues of uncertainty in law: an experimental approach. Iowa Law Rev 89:443–487

  • Balcan MF et al (2005) Mechanism design via machine learning. In: Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, pp 605–614

  • Bansak K et al (2018) Improving refugee integration through data-driven algorithmic assignment. Science 359:325–329

  • Barocas S, Selbst AD (2016) Big data’s disparate impact. Calif Law Rev 104:671–732

  • Berk RA (2017) Statistical learning from a regression perspective, 2nd edn. Springer International, Basel

  • Berk RA, Sorenson SB, Barnes G (2016) Forecasting domestic violence: a machine learning approach to help inform arraignment decisions. J Empir Legal Stud 13:94–115

  • Blake T, Nosko C, Tadelis S (2015) Consumer heterogeneity and paid search effectiveness: a large-scale field experiment. Econometrica 83:155–174

  • Bottou L et al (2013) Counterfactual reasoning and learning systems: the example of computational advertising. J Mach Learn Res 14:3207–3260

  • Buchanan BG, Headrick TE (1970) Some speculation about artificial intelligence and legal reasoning. Stanf Law Rev 23:40–62

  • Camerer CF (2018) Artificial intelligence and behavioral economics. In: Agrawal AK, Gans J, Goldfarb A (eds) The economics of artificial intelligence: an agenda. University of Chicago Press, Chicago

  • Cerrone C, Hermstrüwer Y, Robalo P (2018) Debarment and collusion in procurement auctions. Discussion papers of the Max Planck Institute for Research on Collective Goods Bonn 2018/5

  • Citron DK, Pasquale F (2014) The scored society: due process for automated predictions. Washington Law Rev 89:1–33

  • Coglianese C, Lehr D (2017) Regulating by robot: administrative decision making in the machine-learning era. Georgetown Law J 105:1147–1223

  • Cowgill B (2017) Automating judgement and decisionmaking: theory and evidence from résumé screening. Working Paper, May 5, 2017

  • Cowgill B, Tucker C (2017) Algorithmic bias: a counterfactual perspective. Working paper: NSF Trustworthy Algorithms, December 2017

  • Cuéllar MF (2016) Cyberdelegation and the administrative state. Stanford Public Law Working Paper No. 2754385

  • Domingos P (2012) A few useful things to know about machine learning. Commun ACM 55:78–87

  • Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. Working Paper, March 2, 2017

  • Doshi-Velez F, Kortz M (2017) Accountability of AI under the law: the role of explanation. Working Paper, November 21, 2017

  • Dworkin RM (1965) Philosophy, morality and law – observations prompted by Professor Fuller’s Novel Claim. Univ Pa Law Rev 113:668–690

  • Ehlers D (2012) § 20 Gaststättenrecht. In: Ehlers D, Fehling M, Pünder H (eds) Besonderes Verwaltungsrecht, Bd. 1, Öffentliches Wirtschaftsrecht. C.F. Müller, Heidelberg

  • Eifert M (2012) § 23 Telekommunikation. In: Ehlers D, Fehling M, Pünder H (eds) Besonderes Verwaltungsrecht, Bd. 1, Öffentliches Wirtschaftsrecht. C.F. Müller, Heidelberg

  • Feng Z, Narasimhan H, Parkes DC (2018) Deep learning for revenue-optimal auctions with budgets. In: Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018), pp 354–362

  • Fortnow L (2009) The status of the P versus NP problem. Commun ACM 52:78–86

  • Hildebrandt M (2018) Law as computation in the era of artificial legal intelligence: speaking law to the power of statistics. Univ Toronto Law J 68(Supplement 1):12–35

  • Ho DE (2017) Does peer review work? An experiment of experimentalism. Stanf Law Rev 69:1–119

  • Hogan-Doran D (2017) Computer says “no”: automation, algorithms and artificial intelligence in Government decision-making. Judicial Rev 13:1–39

  • Joh EE (2017) Feeding the machine: policing, crime data, & algorithms. William Mary Bill Rights J 26:287–302

  • Kang JS et al (2013) Where not to eat? Improving public policy by predicting hygiene inspections using online reviews. In: Proceedings of the 2013 conference on empirical methods in natural language processing, pp 1443–1448

  • Kleinberg J et al (2018) Human decisions and machine predictions. Q J Econ 133:237–293

  • Knight F (1921) Risk, uncertainty and profit. Houghton Mifflin, Boston

  • Lehr D, Ohm P (2017) Playing with the data: what legal scholars should learn about machine learning. UC Davis Law Rev 51:653–717

  • Leyton-Brown K, Milgrom P, Segal I (2017) Economics and computer science of a radio spectrum reallocation. PNAS 114:7202–7209

  • Lodge M, Mennicken A (2017) The importance of regulation of and by algorithm. In: Andrews L et al (eds) Algorithmic regulation. London School of Economics and Political Science, London, Discussion Paper 85:2–6

  • Marcks P (2018) § 35 GewO. In: Landmann/Rohmer (ed) Gewerbeordnung, 78th edn. C.H. Beck, München

  • Milgrom P (2017) Discovering prices: auction design in markets with complex constraints. Columbia University Press, New York

  • Milgrom PR, Tadelis S (2019) How artificial intelligence and machine learning can impact market design. In: Agrawal AK, Gans J, Goldfarb A (eds) The economics of artificial intelligence: an agenda. University of Chicago Press, Chicago

  • Parkes DC, Wellman MP (2015) Economic reasoning and artificial intelligence. Science 349:267–272

  • Pasquale F (2015) The Black Box Society: the secret algorithms that control money and information. Harvard University Press, Cambridge

  • Pasquale F (2018) New economic analysis of law: beyond technocracy and market design. Crit Anal Law 5:1–18

  • Pearl J, Mackenzie D (2018) The book of why: the new science of cause and effect. Basic Books, New York

  • Petersen N (2013) How to compare the length of lines to the weight of stones: balancing and the resolution of value conflicts in constitutional law. German Law J 14:1387–1408

  • Poscher R (2008) Eingriffsschwellen im Recht der inneren Sicherheit. Die Verwaltung 41:345–373

  • Ramasubramanian K, Singh A (2017) Machine learning using R. APress, New York

  • Selbst AD, Barocas S (2018) The intuitive appeal of explainable machines. Fordham Law Rev 87:1085–1139

  • Simon HA (1955) A behavioral model of rational choice. Q J Econ 69:99–118

  • Simon HA (1997) Administrative behavior: a study of decision-making processes in administrative organizations, 4th edn. The Free Press, New York

  • Spamann H (2015) Empirical comparative law. Ann Rev Law Soc Sci 11:131–153

  • Spamann H, Klöhn L (2016) Justice is less blind, and less legalistic, than we thought. J Legal Stud 45:255–280

  • Sunstein CR (2001) Of artificial intelligence and legal reasoning. Univ Chicago Law School Roundtable 8:29–35

  • Sunstein CR (2018) The cost-benefit revolution. MIT Press, Cambridge

  • Talley EL (2018) Is the future of law a driverless car?: Assessing how the data-analytics revolution will transform legal practice. J Inst Theor Econ 174:183–205

  • Tramèr F et al (2016) Stealing machine learning models via prediction APIs. In: Proceedings of the 25th USENIX security symposium, pp 601–618

  • Vermeule A (2015) Rationally arbitrary decisions in administrative law. J Legal Stud 44:S475–S507

  • von Neumann J, Morgenstern O (1944) Theory of games and economic behavior. Princeton University Press, Princeton

  • Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decisionmaking does not exist in the general data protection regulation. Int Data Priv Law 7:76–99

  • Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the Black Box: automated decisions and the GDPR. Harv J Law Technol 31:841–887

  • Wischmeyer T (2018) Regulierung intelligenter Systeme. Archiv des öffentlichen Rechts 143:1–66

  • Witten IH et al (2016) Data mining: practical machine learning tools and techniques, 4th edn. Elsevier, Amsterdam

Appendix

  A.1

    An ordinary least squares regression takes the functional form:

    $$ {y}_i={\beta}_0+{\beta}_1{x}_{1i}+\dots +{\beta}_n{x}_{ni}+{\varepsilon}_i $$
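
    As a concrete sketch (assuming Python with numpy; the coefficients are chosen arbitrarily for illustration), the coefficients can be recovered by least squares:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                  # two input variables
    y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)

    X_design = np.column_stack([np.ones(200), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    print(beta.round(2))  # approximately [1.0, 2.0, -0.5]
    ```
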
  A.2

    The objective function usually used for regression algorithms is to minimize the mean squared error (MSE) between the vector of observed values \( {Y}_i \) and the vector of predicted values \( {\hat{Y}}_i \) given a set of n observations:

    $$ MSE=\frac{1}{n}\sum \limits_{i=1}^n{\left({Y}_i-{\hat{Y}}_i\right)}^2 $$

The closer MSE is to zero, the higher the accuracy of the predictor.
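
As a concrete sketch (assuming Python with numpy; the values are invented for illustration):

```python
import numpy as np

y_obs = np.array([3.0, 5.0, 2.5, 7.0])   # observed values
y_hat = np.array([2.8, 5.4, 2.0, 7.1])   # predicted values
mse = np.mean((y_obs - y_hat) ** 2)
print(mse)  # 0.115
```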

  A.3

    Technically, variance is the expected squared deviation between the estimate obtained from a given sample and the average estimate that would result if the algorithm were retrained on other data sets:

    $$ V=E\left[{\left(\hat{f}(x)-E\left[\hat{f}(x)\right]\right)}^2\right] $$
  A.4

    Technically, bias is the difference between the expected average prediction of the machine learning model and the true value that the model is intended to predict:

    $$ B=E\left[\hat{f}(x)-f(x)\right] $$
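
Both quantities can be approximated by simulation (a sketch assuming Python with numpy; the estimator and the data-generating process are chosen arbitrarily for illustration): retrain the same estimator on many fresh samples and compare its predictions with their own average and with the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 2.0  # the value f(x) the model is intended to predict
# "Retrain" a trivial estimator (the sample mean) on 5000 fresh samples.
preds = np.array([rng.normal(true_value, 1.0, size=50).mean()
                  for _ in range(5000)])

bias = preds.mean() - true_value                 # B = E[f_hat - f]
variance = np.mean((preds - preds.mean()) ** 2)  # V = E[(f_hat - E[f_hat])^2]
print(round(bias, 3), round(variance, 3))        # close to 0 and to 1/50
```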

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hermstrüwer, Y. (2020). Artificial Intelligence and Administrative Decisions Under Uncertainty. In: Wischmeyer, T., Rademacher, T. (eds) Regulating Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-030-32361-5_9

  • DOI: https://doi.org/10.1007/978-3-030-32361-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32360-8

  • Online ISBN: 978-3-030-32361-5

  • eBook Packages: Law and Criminology (R0)
