Introductory remarks

Civil liability, in its traditional paradigm based on “deterrence”, can be understood as indirect market regulation, since the risk of incurring liability for damages provides an incentive to invest in safety.Footnote 1 The claim I raise in this article is that such a paradigm may prove inappropriate in the markets for artificial intelligence devices, which are likely to play a significant role in several industries, for example with regard to robots in all their uses (from health care to hospitality etc.), self-driving cars and artificial intelligence (hereinafter: AI) services.

Indeed, according to the current paradigm of civil liability based on deterrence, compensation is allowed only to the extent that “someone” is identified as a debtor (either through fault or under a strict liability rule). However, it would not be useful to impose the obligation to pay such compensation on producers and programmers: robots and AI algorithms, in fact, could “behave” very independently of the instructions initially provided.

As the way AI operates can be unpredictable, producing negative consequences despite no flaw in design or implementation, the use of civil liability as a deterrent mechanism can be a disincentive to new technologies based on artificial intelligence, to the extent that it imposes costs on producers and/or programmers even if the damage derives from a perfectly “correct” functioning of the algorithms. There would be no “deterrence”, therefore, because the damage would result from a situation in which there is no “fault” to blame or prevent.

Therefore, I think AI requires that the law on this matter evolve from an issue of civil liability into one of financial management of losses. My statement is not made with reference to a specific legal system but as a point of general theory of civil liability, even if, for specific purposes, legislation and case-law belonging to different legal systems are referred to in this research. This reform appears highly relevant, since one can expect a sharp evolution, in the coming years, toward a much wider use of artificial intelligence and robotisation, which makes it important and urgent that civil liability regimes adapt to favour this evolution rather than hinder or prevent it. Some proposals in this regard are provided in the final part of this article.

A final introductory remark: as indicated below, in § 3, AI is used in many different sectors (health care, aviation, finance etc.) and carries out very diverse activities (monitoring, data mining, forecasting, market analysis and trading, image recognition, designing treatment plans, even performing physical activities etc.). Depending on the use, AI algorithms could damage one’s revenues, assets, reputation or even physical integrity (through their use, for example, in surgery or in self-driving cars). Of course, different uses in diverse contexts may require different rules on compensation. My proposal in favour of a “no-fault” system is to be understood as generally applicable to all cases in which AI carries out activities with a certain degree of autonomy; this article therefore focuses on outlining the general scope of the “no-fault” paradigm. However, in order to provide a clearer reference to the factual context, one can understand my proposal as applicable, in particular, to AI algorithms characterised by a high degree of autonomy, execution of physical tasks and impact on human physical integrity, as is the case with self-driving cars.Footnote 2

The “traditional” paradigm of civil liability based on deterrence

The current paradigm of civil liability laws is primarily based on the assumption that civil liability plays, and should play, an important role in deterrence. It is believed that any increase in the liability of producers and suppliers of goods and services will increase investments in safety to avoid incurring liability. Therefore, it is commonly believed that the stricter the civil liability rules on producers and other professionals, the higher the overall level of safety within the system (Calabresi 1970; Cooter and Ulen 2008; Viscusi and Hersh 2013).

The idea that civil liability must have a deterrent function presupposes that the obligation to pay damages is attributed to the person whom the legal system identifies as the addressee of such deterrence: in other words, the person whose investment in safety is to be fostered. This paradigm has remained substantially constant over time and has developed along two main strategies for allocating the obligation to pay damages: liability for fault and strict liability.

The first and most important criterion for attributing the obligation to pay compensation for damages is that of fault. The idea that damages require someone’s “fault” has been deeply rooted in legal thought since ancient times: it emerged in Justinian law, some one thousand five hundred years ago, and was further consolidated in the jus commune and canon law (Mazeaud and Tunc 1957).

This idea, which until recently inspired the entire system of civil liability, was eloquently called, in German literature, the “dogma of fault” (Verschuldensdogma).Footnote 3 Roughly speaking, one may say that all modern legal systems base their civil liability regimes mainly on fault (Bussani and Sebok 2015).

The aforementioned paradigmatic centrality of “deterrence” has evolved, but has remained in place, as major social, political and economic changes directed legal thought toward a growing quest for solidarity in all western legal systems. This happened regardless of their civil-law or common-law basic structure,Footnote 4 so that some authors understand such a change as an example of the case where the common law and civil law of torts “reach similar results because they must address and resolve the same basic fact patterns”.Footnote 5

The quest for solidarity, strongly driven by the concrete consequences and upheavals of the industrial revolution, led legislators to consider it unfair that damage following certain (intrinsically risky) activities should be borne by consumers and other end-users of goods and services unless a “fault” of producers or other professionals could be proven in court.

It was, therefore, considered that professional producers of goods and services should bear the risk of their activities regardless of their “fault”. This liability reallocation strategy, which evolved throughout the twentieth century, was deemed efficient and ethically grounded to the extent that such professional producers were (and are) in a better position to assess the risk of their businesses, to spread the cost of accidents and to set up adequate prevention policies.Footnote 6

This evolution has brought about, among other things, a significant variation in civil liability legislation (within the same paradigm based on deterrence, I believe), which led to the adoption of loss-spreading strategies in civil liability laws (Comporti 1965; from an economic point of view, Cooter 1991). This new allocation strategy ignored the concept of “fault” and regarded the exercise of risky activities as an autonomous criterion for imposing liability for damages.

From a legal point of view, this evolution has expanded the liability imposed on professional producers to include cases in which the latter could not prove that the damage was not attributable to them, cases where there was scientific uncertainty as to the cause of the harmful effects, or even cases where such cause was unknown (Montinaro 2012; for an economic analysis of law perspective see Faure et al. 2016). This development has been pursued through similar techniques in all Western legal systems, mainly the reversal of the burden of proof, the imposition of strict liability on producers and other professionals, and the development of the precautionary principle in many fields of application.Footnote 7

Legal systems moved even further in the direction of reallocation of liability for damages through the adoption of different loss-spreading techniques and strategies; this was the case, for example, of mandatory insurance, which was imposed on producers and professionals supplying specific goods and services in different jurisdictions.Footnote 8

The emergence of strict liability represented a mere incremental advancement of the same traditional paradigm of civil liability based on “deterrence”. In fact, the developments just summarised have essentially been limited to reallocating the “cost of accidents” from customers and end-users to producers and professionals within the same conceptual and legal framework already in place, providing, in some cases, for the shift of the financial burden of compensation onto insurance companies.

The concept of “fault” has been replaced, in some cases, by that of strict liability, simply to increase deterrence even in cases where fault could not be positively established in court, with the aim of inducing producers and other professionals to increase investments in safety correspondingly (Savatier 1945; Comporti 1965). Legislation, however, continued to regard civil liability also as a tool of deterrence.

Such an approach to the issue at stake is shown, for example, in the “Principles of European Tort Law” (PETL) developed by the European Group on Tort Law,Footnote 9 especially as regards the fact that liability to compensate damage [art. 1:101(1)] invariably depends solely on fault or “strict liability” [Title III]. The same approach seems to be supported by scholars, and even sophisticated studies at the supranational level have considered, and continue to consider, civil liability as carrying out the central function of deterrence together with that of compensation (OECD 2006).

Artificial intelligence, its applications and its peculiar characteristics

It should be noted that the paradigm of civil liability based on deterrence has proven reliable and appropriate in many contexts. In many cases, increased liability created an incentive for producers and other professionals to invest in safer products and services. This happened, for example, with reference to general consumer legislation enacted, among many others, through Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.Footnote 10

This paradigm, however, has proved inappropriate in other cases,Footnote 11 as it appears to be with regard to the so-called artificial intelligence revolution. Some preliminary considerations on AI and its applications are appropriate before moving on to the legal analysis, starting from the observation that artificial intelligence has countless applications in society today. Many of them, in fact, are very common (Kaplan and Haenlein 2019; Solaiman 2017) and are present in every sector (Kurzweil 2005).

For example, in agriculture, AI algorithms increase the efficiency of farming by monitoring crops and soil and using the information collected in order to predict, among other things, the time required for a crop to be ready for harvest (Faggella 2020a, b). In finance, AI allows for huge data extraction and market analysis far beyond any human capacity (Costantino and Coletti 2008) and makes millions of daily trades possible without any human intervention (so-called high-frequency trading), together with the calculation of asset allocation (portfolio management) (Faggella 2020a, b). It also allows the assignment of credit scores aimed at assessing the risk of consumer default (Asatryan 2017).

If AI is used for programming robots, it can perform physical activities. AI-programmed robots are quite common in many industries and are used to perform jobs that can be dangerous for humans. If sensors are used, robots can even collect information and perform monitoring functions. Self-driving cars are currently being tested (Badue et al. 2020), including for military applications (Congressional Research Service 2019).

Current AI algorithms are not limited to executing tasks based on predefined and permanent rules. They are able to collect data (so-called data mining: Friedman 1998) and to learn autonomously. In particular, algorithms can automatically improve through experience and become capable of making predictions and decisions they were not explicitly programmed for (Mitchell 1997; Koza et al. 1996). Applications, especially those falling within so-called deep learning, can be supervised, semi-supervised or even unsupervised by humans (Bengio et al. 2013; Schmidhuber 2015; Bengio et al. 2015). Deep learning-based image recognition is currently able to achieve more accurate results than humans (Cireşan et al. 2012). In medical diagnosis (more generally on this issue see: Amisha et al. 2019; Davenport and Kalakota 2019), AI allows the detection of tumours through computerised interpretation of medical images (Litjens et al. 2017; Forslid et al. 2017), the design of treatment plans, also through the mining of medical records, and the creation of drugs. The recent Covid-19 pandemic has confirmed how AI can be used for the control and detection of pandemic cases, for diagnoses (Castiglioni et al. 2020) and for vaccine and drug development, after AI predicted the RNA structure of SARS-CoV-2 (Baidu 2020).
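
To make the notion of “learning from data” more concrete, the following minimal sketch, written in Python with the scikit-learn library, shows a decision rule being induced from examples rather than written explicitly by the programmer; the crop-readiness scenario, the feature names and the toy figures are invented purely for illustration and are not drawn from the studies cited above.

```python
# Minimal illustration of supervised machine learning: the decision rule is
# induced from example data rather than coded explicitly by the programmer.
# The toy data below (hours of sunlight, soil moisture -> crop ready or not)
# are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

X_train = [[4, 0.2], [5, 0.3], [9, 0.7], [10, 0.8], [3, 0.1], [8, 0.9]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = "ready for harvest", 0 = "not ready"

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)      # the "rule" is learned from the examples

# The fitted model now answers cases it was never explicitly programmed for.
print(model.predict([[7, 0.6]]))  # e.g. [1]: predicted "ready for harvest"
```

The point of the sketch is that the input–output mapping is produced by the training procedure itself, so the resulting “behaviour” depends on the data supplied rather than on instructions fixed in advance, which is precisely the feature this article focuses on.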

AI has even proved able to perform tasks such as generating news and financial reports, writing texts (Metz 2019), increasing traffic on social media platforms by detecting users’ preferences (Williams 2016) and even transforming structured data into reports and recommendations. Research is also being conducted to apply deep learning to military robots in order to enable them to perform new tasks through observation (U.S. Army Research Laboratory 2018).

AI and civil liability: the problem(s)

Artificial intelligence is prone to several problems arising from its technical and operational characteristics. Among these one may recall the risk deriving from the poor quality of the data to which the system has access (so that the AI shows itself prone to racism if the available data are biasedFootnote 12). The risk arising from conflicts between different objectives pursued by different elements of the same AI device should also be mentioned (Meyer 2007). Of course, all internet-connected software and devices are subject to hacking and unauthorised access (Sheehan et al. 2018).

For the purposes of this article, the peculiar problem arising from artificial intelligence is that AI algorithms can have a certain degree of autonomy in their operation. Therefore, their “behaviour” evolves over time (and will do so much more in the near future), based on the information and feedback collected and processed from thousands of different shared sources (so-called “machine learning” and “deep learning”). In fact, it can be said that algorithms do not only perform tasks, but also learn how to perform them over time.Footnote 13

In this field, therefore, the relationship of cause and effect, as regards the causation of damage, may not be as linear as we are used to believing (Karnov 2016; Scherer 2016; even if not everyone agrees on this point: Vladeck 2014; Hubbard 2014), since the way causality works is no longer “Aristotelian”.Footnote 14 As stated by the EU Expert Group on Liability and New Technologies, AI calls into question the adequacy of existing liability rules based on an “anthropocentric and monocausal model of inflicting harm” (European Commission 2019). On the contrary, it is quite frequent (and will become even more frequent in the future, due to technological evolution) that algorithms “behave” very independently from the instructions initially provided by programmers.

The results of AI activity, therefore, could be unpredictable despite the absence of flaws in design or implementation. This implies that algorithms may err in their “decision making”.Footnote 15 Such an expansion of the area of the “unknown”, which cannot be predicted according to our current scientific methods (U. Beck 1996), requires careful consideration of which civil liability regime should apply to damage caused by AI operation.

Many proposals have been madeFootnote 16 in this regard. Almost all of them are based on what I have called the “traditional paradigm” of civil liability, rooted in deterrence. They suggest applying either fault rules (Abbott 2018) or strict liability regimes (Buonanno 2019), sometimes pleading for the extension of the rules on defective products (Borghetti 2004) or on animals under the care of humans (Schaerer et al. 2009).

Application of the traditional paradigm of civil liability to AI, however, might not foster significant improvements in safety and could instead generate negative externalities.Footnote 17 This statement can be understood after considering that compensation of damaged consumers and other end-users of AI devices requires, under the said traditional paradigm, that the obligation to pay compensation be imposed on their producers and programmers (the only “someone” available to be held liable: Hao 2019).Footnote 18 However, producers and programmers cannot do much to forecast the unforeseeable “behaviour” of AI algorithms, which is influenced by innumerable variables provided by databases, big data gathering and the end-users themselves, all of which are completely beyond anyone’s reach and control.

This is why, in my view, civil liability would (and could) not induce virtuous investments in safety within the AI industry: in fact, no further investment, fostered by deterrence, could prevent such kinds of risks. On the other hand, the application of the traditional paradigm of civil liability, especially when conceived as a strict liability regime, would expose producers and programmers to unpredictable and potentially unlimited claims for civil liability, with no possibility of reducing the risks by increasing investments in safety (with regard to damage following “unforeseeable” behaviour of AI algorithms). Therefore, it is likely that such an applicationFootnote 19 would deter them from entering the market or developing it, thereby hampering technological progress (what is sometimes called the risk of “technology chilling”: Montagnani and Cavallo 2020; Viscusi and Moore 1991; Huber and Litan 1991; Parchomovsky and Stein 2008; Morgan 2017; Magrani 2019; Policy Department for Citizens’ Rights and Constitutional Affairs 2020; EU Independent High-Level Expert Group On Artificial Intelligence 2019a, b; Bertolini 2015; Pellegatta 2019; Palmerini and Bertolini 2016).

This would be a significant negative externality, since new technologies bring an important increase in safety and reduce the overall number and severity of accidents (as available data already show with respect to the current situationFootnote 20).

It can be noted, of course, that the risk of “technology chilling” is not detectable at present. The economic and business literature accounts for significant investments in AI (OECD 2019) and international market races to deploy AI technology (see, e.g., CBI 2018; Welsch and Behrmann 2018). Furthermore, AI has been used in finance for more than ten years and the application of the current civil liability regulation has not chilled that use so far. This is true. However, recent AI applications (driverless cars, medical applications and the like) show a wider and deeper exposure to risk than ever before. Moreover, other markets have shown that ex ante uncertainty about the allocation of the costs of accidents (coupled with the consequent fear of excessive litigation) “may drive otherwise healthy companies outside the market”.Footnote 21

As a matter of fact, the purpose of this article is precisely to highlight that, as history shows with reference to other sectors, the application of inadequate civil liability rules to evolving markets can raise serious concerns about negative externalities (see, e.g., OECD 2006; Mello et al. 2010; Di Gregorio et al. 2015). Of course, one can hope that the problems do not arise. I submit that a wiser solution would be to adapt the legislation so as to prevent such negative externalities from manifesting themselves in the first place; this appears to be the strategy behind the EU proposal to give legal personality to robots, recalled below, under § 5.

A proposal: the need to relieve producers and programmers from civil liability when robots correctly comply with scientifically validated standardised rules

Law scholars have observed that current civil liability legislation can be an obstacle to the development of artificial intelligence and to the exploitation of the ensuing benefits (Montagnani and Cavallo 2020; Viscusi and Moore 1991; Huber and Litan 1991; Parchomovsky and Stein 2008; Morgan 2017; Magrani 2019; Policy Department for Citizens’ Rights and Constitutional Affairs 2020; EU Independent High-Level Expert Group On Artificial Intelligence 2019a, b; Bertolini 2015; Pellegatta 2019; Palmerini and Bertolini 2016). A similar obstacle has been observed, in the past, with regard to medical civil liability.Footnote 22 It should be noted that the reference to medical civil liability, when it comes to tort law reform in the wake of artificial intelligence, appears appropriate, as the two systems show similarities in both incentives and (negative) externalities (Gaine 2003).

In fact, as noted above, there is a rather high possibility (which will increase in the future, due to technological evolution) that AI algorithms will “behave” increasingly independently of the instructions initially provided by programmers. This possibility led the European Parliament to propose “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations” (European Parliament 2016; Solaiman 2017; Bryson et al. 2017; Amidei 2017; Guerra 2018). The main reason for this proposal is to use legal personality as a technique to impute liability to the robot alone and, therefore, isolate its obligations (including damages) from those of its producer and programmer. Consideration of robots as Haftungssubjekte (liability subjects)Footnote 23 represents, in short, a proposal to solve the problem of a “fair and efficient allocation of loss”, highlighted by the EU Expert Group on Liability and New Technologies (European Commission 2019).

I believe that such a proposal is not desirable, since robots cannot and should not be considered as “persons” under current civil legislation (European Commission 2019; European Parliament 2016; European Parliament 2017; Solaiman 2017; Bryson et al. 2017; Floridi and Taddeo 2018; IEEE Standards Association 2017; Wagner 2018, 2019a, b; Eidenmüller 2017; Chopra and White 2011; Koops et al. 2010). However, the proposal is highly relevant to the present discussion, because it clearly shows the need to shift “obligations” away from producers and programmers when robots are capable of acting rather autonomously from their original design (Scherer 2016).

How could such a problem be addressed? The most relevant debate, on this point, is whether modern technology requires new, specific legislation or whether existing legislation and concepts can be adapted to it: this is the so-called “law of the horse” controversy (Easterbrook 1996; Lessig 1999; Calo 2015; Stradella 2013).

Since AI algorithms are able to “behave” in a very different way from what was initially foreseen in their programming, I believe that the problems highlighted above, especially in § 4, do not concern what the algorithm actually does but, instead, how the algorithm is designed from the very beginning. From this point of view, I believe that civil liability rooted in deterrence (which will probably be conceived as strict liability: European Commission 2019; EU Independent High-Level Expert Group On Artificial Intelligence 2019a, b) should attach, in these sectors, mainly to lack of conformity with predetermined standards (depending, of course, on available knowledge) (Guerra 2018; Virk 2013). Such compliance constitutes, in the AI environment, a sort of “adapted range of duties of care” (European Commission 2019) and represents a more effective form of regulation for mass products (Viscusi 1989).

Conversely, strict liability should not apply if an algorithm programmed in accordance with standards occasionally errs and produces negative consequences despite no design or implementation flaws; this is the case from which the negative externalities highlighted above arise. I believe that, in these cases, producers and programmers of AI algorithms and devices should be released from civil liability for damages. In other words, in all cases where there is no evidence of negligence, imprudence or unskillfulness and the robot (both in its physical components and in its artificial intelligence aspects) complied with scientifically validated production and programming standards, programmers and producers of AI algorithms and devices should not be held liable for damages.Footnote 24
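
Purely as an illustration of the allocation rule just outlined (a sketch of my proposal, not of any existing statute; the field names and the wording of the outcomes are hypothetical), the decision logic could be summarised in Python as follows.

```python
# Illustrative sketch of the proposed allocation rule (hypothetical names and
# outcomes): producers/programmers answer in tort only where fault or lack of
# conformity with validated standards is shown; otherwise the loss is
# channelled to a no-fault compensation scheme.
from dataclasses import dataclass


@dataclass
class AccidentReport:
    fault_proven: bool        # negligence, imprudence or unskillfulness shown in court
    standards_complied: bool  # scientifically validated production/programming standards met


def allocate_loss(report: AccidentReport) -> str:
    if report.fault_proven or not report.standards_complied:
        return "producer/programmer liable (fault or strict liability applies)"
    return "no-fault scheme compensates the damaged end-user"


print(allocate_loss(AccidentReport(fault_proven=False, standards_complied=True)))
# -> no-fault scheme compensates the damaged end-user
```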

It should be noted that this proposal is in stark contrast with the current paradigm of allocation of the “costs of accidents”, since, as briefly recalled above in § 2, the current regulatory paradigm shows a tendency to impose strict liability on firms that carry out intrinsically risky activities and, therefore, to impose on them the costs of all damage for which there is no positive evidence of diligence, prudence and skill (i.e., all cases in which firms cannot prove that the damage is not attributable to them, in which there is scientific uncertainty as to the cause of the harmful effects, or even cases where this cause is unknown).

I do not ignore that mere respect of standards could lead to unwanted damage in some cases (such damage would also be compensated under my proposed “no-fault” system, as noted below). However, my claim is made on the basis of the idea, confirmed by available empirical evidence,Footnote 25 that the adoption of artificial intelligence in carrying out specific activities such as driving (which is destined to increase drastically in the near future) determines a significant increase in safety and reduces the overall number and severity of injuries and deaths compared to human action.

This means that providing incentives for technological innovation, provided that it respects scientifically validated standards, appears to be a safer strategy than any other.

A new paradigm of civil compensation for damages related to AI: towards the evolution of compensation from an issue of civil liability to one of financial management of losses

It is necessary, at this stage, to translate the above observations into rules. The law, in fact, constrains economic and social activities in order to contribute to the pursuit of welfare; on the other hand, however, the law cannot arbitrarily define its objectives and (especially) its means. The actual functioning of the economic and social contexts it faces must be taken into the utmost consideration, in order to develop well-founded, affordable, reliable and effective rules (de Jong et al. 2018).

The failure of the current paradigm of civil liability based on deterrence, when applied to artificial intelligence, observed and (I believe) established above, requires a radical modification of that paradigm. Such a modification appears particularly relevant today, since the application of the “traditional” paradigm of civil liability can hinder the development of markets towards the intensive use of artificial intelligence and robotisation in the future (the already mentioned “technology chilling”). Furthermore, civil liability rules rooted in deterrence are likely to place jurisdictions adopting this paradigm at a competitive disadvantage relative to jurisdictions that are more responsive to the needs and demands of the markets referred to.

What is surprising is that, in areas of research other than law, quite similar problems have been studied thoroughly, and scholars have come to the conclusion that intrinsically risky activities incorporate a certain percentage of risk that does not depend on the person performing them but on the activities themselves (Althaus 2005; Aldred 2013; Aven 2012, 2016; Beck 1996; Lindley 2006). Errors occur and will occur regardless of the severity of the civil liability rules in force.

This theme recalls the concept of “manufactured uncertainties” developed by Beck, which is based on the idea that in modern times the area of the “unknown” widens and risks escape what can be predicted pursuant to our current scientific methods.Footnote 26 We need to adapt the legislation to the “risk society”, that is: “a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself” (Beck 1992).

Such a conclusion should lead to discarding the “blame culture”, which inspires and supports the current law on civil liability, and replacing it, at least in some cases (as briefly discussed here), with a “no-blame culture”, rooted in risk managementFootnote 27 and scientifically validated standardisation. While the literature on risk management is fairly consistent on this point, lawyers and lawmakers seem rather conservative about it.

In this regard, it was noted above that the negative externalities imposed on the AI markets by the traditional civil liability paradigm could be reduced if producers and programmers of artificial intelligence devices were released from civil liability under certain conditions; in particular, when there is no evidence of their negligence, imprudence or unskillfulness and their activity complied with scientifically validated standards.Footnote 28

Such release, however, may not (and should not) prevent damaged customers and end-users from obtaining compensation. In fact, on their side, any abrogation of the right to compensation would be inconsistent with the “solidarity” approach that now pervades legal systems, mentioned above. In addition, it would contradict the principle of “functional equivalence”, according to which compensation should not be denied in a situation involving emerging digital technologies “when there would be compensation in a functionally equivalent situation involving human conduct and conventional technology”.Footnote 29

This is why I believe that a new regulation of the matter should be developed, inspired by a new paradigm, aimed at maintaining compensation for damages on the side of the damaged party, but shifting away from producers and programmers of AI devices (when there is no evidence of negligence, imprudence or unskillfulness and scientifically validated standards of production and programming are complied with) the obligation to pay for such compensation.Footnote 30

In other words, I see room for the relevant legislation to evolve from an issue of civil liability into one of financial management of losses. This would take better account of the “systemic” need for proper functioning of the market as a whole. In fact, what could seem in the short term to favour the individual customer (e.g., ordering a producer to pay compensation for a specific damage suffered by an end-user of AI devices or robots, despite compliance with validated standards and no negligence, imprudence or unskillfulness being ascertained in court) can possibly damage systemic safety (determined, in hypothesis, by the development of AI) if it prevents the market from developing into a more technological and safer system, due to the disincentives created by the judgment itself; in the example above, producers could abandon research and development of AI devices and robots operating in risky environments.

The legal systems should bear the risk that application of scientifically validated standards can determine harmful consequences in individual cases to the extent that, from a systemic point of view, this application allows a significant reduction of the overall risks and damage (Kizer and Blum 2005; Hernandez 2014; US Department of Transportation 2017).
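
The systemic argument can be restated as a simple expected-harm comparison; the notation and the figures below are mine and purely illustrative (N operations, average loss L per accident, accident probabilities p_H for human action and p_AI for standard-compliant AI), and are not drawn from the sources cited in this article.

```latex
\[
\underbrace{N \, p_{\mathrm{AI}} \, L}_{\text{expected harm with AI}}
\;<\;
\underbrace{N \, p_{\mathrm{H}} \, L}_{\text{expected harm with human action}}
\qquad \text{whenever } p_{\mathrm{AI}} < p_{\mathrm{H}} .
\]
% Purely hypothetical figures: with N = 10^6 operations, p_H = 10^{-4} and
% p_AI = 10^{-5}, expected accidents fall from 100 to 10, even though some of
% those 10 accidents may occur where a human operator would not have erred.
```

On this reading, channelling compensation for the residual accidents to a no-fault fund, rather than to standard-compliant producers, preserves both the victims’ compensation and the systemic safety gain.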

This new paradigm could be built on the basis of “no-fault” systems available in different jurisdictions.Footnote 31 In this regard, one can cite the no-fault rules issued in the field of medical damage, further described in § 7 (see, in general: OECD 2006; Marchisio 2020); those concerning adverse effects attributed to vaccination (World Health Organisation 2009; Looker and Kelly 2011); those concerning damage caused by unknown driversFootnote 32 etc.

Adopting a “no-fault” scheme would isolate compensation in favour of damaged end-users from liability imposed on producers and programmers of AI devices. It would also help resolve other weaknesses inherent in the traditional paradigm of civil liability. One can mention, here, the risk of civil liability turning into a “damages lottery”: in some cases damages cannot be awarded because no one is at fault in the specific event, and in others they cannot be collected because the debtor is (in many instances deliberately) unable to pay (Atiyah 1997; Cane and Goudkamp 2013).

For the sake of completeness, one might wonder whether the proposed no-fault schemes might actually create a preference for AI-driven activities over the use of human labour. This would confirm, in hypothesis, what appears to be a bias against humans that already exists, for example, in the immigration and tax laws of many jurisdictions, to the extent that robots can generally be freely imported without work visas and the income they generate from their work is usually not taxed in the robot’s hands as it would be for a human. The issue is very complex and cannot be addressed here. In summary, it should be noted that, whatever measures are introduced to compensate for the loss of human work caused by the use of artificial intelligence, such measures should compensate those who have lost their jobs in the short and medium term, contribute to the retraining of unemployed workers and foster study and training in technological subjects, but they should not prevent the success of artificial intelligence.Footnote 33 The proposal that I have developed in this research is aimed precisely at preventing technological innovation from being hindered, since artificial intelligence represents, in many sectors, a safer strategy than any other based on human action. In these areas, removing the incentives for AI would mean reducing overall efficiency, safety and security.

Some references and observations on some existing “no-fault” laws

It is clear that all the existing pieces of “no-fault” legislation, briefly mentioned above, are targeted at specific sectors and that, if used as models for AI, they should be properly adjusted. Even though they provide good examples of financial management of losses and valuable ideas for future legislation on artificial intelligence, their contribution to the development of an adequate scheme for AI should be studied carefully. A detailed examination of existing “no-fault” models, and any attempt to provide even a concise description of how a no-fault scheme might be designed in order to regulate the issue at stake, would fall far beyond the scope of this article, which is intended to outline the need to change a regulatory paradigm of the law of compensation, not to determine its specific content.

However, some remarks may be appropriate here to define in what terms existing legislation can represent a model for AI markets and what adjustments are needed to adapt it to the latter.

First, it can be noted that “no-fault” schemes seem to differ, in a very broad view, with respect to six main variables (Dickson et al. 2016): the eligibility criteria for compensation;Footnote 34 whether compensation is paid automatically upon occurrence of the eventFootnote 35 or an avoidability standard is adopted;Footnote 36 whether or not the system precludes continued access to the courts; how the programme is funded;Footnote 37 whether or not compensation is subject to a financial cap; and the definition of the financial entitlement.Footnote 38
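
As a purely illustrative sketch (the field names and the sample values are mine and hypothetical, not drawn from Dickson et al. 2016 or from any existing scheme), these six design variables could be represented as a configuration object along the following lines.

```python
# Hypothetical sketch of the six design variables of a "no-fault" scheme,
# modelled as a configuration object; all names and sample values are invented.
from dataclasses import dataclass
from typing import Optional


@dataclass
class NoFaultSchemeDesign:
    eligibility_criteria: str          # which AI-related injuries qualify for compensation
    avoidability_standard: bool        # False = paid automatically on occurrence of the event
    court_access_preserved: bool       # whether claimants retain access to the courts
    funding_source: str                # e.g. levy on producers, public budget, insurance pool
    compensation_cap: Optional[float]  # financial cap on awards, if any (None = uncapped)
    entitlement_definition: str        # how the amount of the award is determined


# A purely illustrative configuration, not a proposal for any actual scheme:
example = NoFaultSchemeDesign(
    eligibility_criteria="damage caused by AI devices compliant with validated standards",
    avoidability_standard=False,
    court_access_preserved=True,
    funding_source="levy on producers of AI devices",
    compensation_cap=None,
    entitlement_definition="standardised tariff per category of injury",
)
```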

It is clear that the drafting of a “no-fault” scheme for the damage produced by AI algorithms would require a careful definition of the eligibility criteria, especially as regards the definition of the “scientifically validated” standards (and of their modification procedures) with which compliance makes the scheme applicable. It would also be necessary to designate an independent third entity in charge of paying compensation to damaged end-users in application of the “no-fault” scheme, and to define its operation and its financing. Similarly, a standardised amount of compensation under the “no-fault” scheme should also be defined. These issues cannot be discussed here, as this article aims to present the general scope and principles of my proposal, while the topics briefly listed here are rather detailed aspects of it.

Furthermore, the way in which a “no-fault” scheme is conceived depends to a significant extent on the legal and institutional context in which the scheme operates, particularly with respect to the way in which the social security net is designed in each different country (Dickson et al. 2016). It is clear, for example, that in the USA any such scheme would likely be funded privately while in European countries such as Sweden, Norway and Finland it is more likely to be publicly funded (OECD 2006; Mello et al. 2011; Dickson et al. 2016; Vandersteegen et al. 2015). Acceptance of a standardised compensation scheme can also depend heavily on how the social security net is designed in each different country.Footnote 39

Secondly, the aforementioned pieces of legislation have a much narrower scope than the issues dealt with here (e.g., within health law they are mainly aimed at avoiding litigation; in the case of damage caused by unknown drivers they provide compensation where no liable person can be identified, etc.).

To my knowledge, the only “no-fault” scheme that shares a common approach with the one discussed here is that provided for injuries resulting from vaccination. This scheme, in fact, embodies the idea that compensation for statistically “inevitable” injuries should not, in principle, be imposed on the persons who carry out the relevant activities or who supply products on the market, to the extent that negligence, imprudence or unskillfulness is not proven and scientifically validated standards are complied with.

This approach, which works well for vaccination (adverse effects are very rare compared to the over 2.5 million deaths prevented by vaccination in 2008 alone: World Health Organisation 2009; Looker and Kelly 2011), could represent a model for the regulation of liability for AI algorithms, as their use could determine harmful consequences in individual cases but, from a systemic point of view, would allow a significant reduction of the overall risks and damage. The approach proposed here resembles that of mandatory seat belts in motor vehicles: in that case, too, “seat belts can cause injuries but it is vastly more likely that they will protect you. It is all about probabilities and the chances are on the side of wearing seat belts” (Giubilini and Savulescu 2019).

Third, “no-fault” legislation is currently showing shortcomings in terms of safety incentives, in the absence of the deterrent brought about by “traditional” civil liability.Footnote 40 Pure “no-fault” models, in fact, raise concerns about their capacity to limit the risk of moral hazard, exactly as happens in New Zealand with respect to medical law, since “the principal weakness of no-fault schemes is the difficulty of ensuring that the socially optimal amount of care is taken by potential loss-causers, as the links between their potential to cause loss and the costs of their actions are severed” (Howell et al. 2002).

This is why the proposed “no-fault” system should not apply outside the scope defined above, namely: relief from liability in the absence of negligence, imprudence or unskillfulnessFootnote 41 and in compliance with scientifically validated standards. Outside this scope, “no-fault” rules would unreasonably remove the deterrent effect that civil liability can still produce. I argue that “no-fault” rules should be combined with “fault” rules in order to take advantage of the benefits each of them brings, narrowing their flaws through their reciprocal interaction.

Furthermore, in all cases where “no-fault” schemes apply, they should be combined with a discipline capable of providing incentives for safety.Footnote 42 I believe that, in those cases in which no one can be blamed for ignoring the standards set, such an approach should be uncoupled from deterrence on individuals (e.g., the deterrence induced by civil liability should not be replaced by the deterrence induced by disciplinary sanctions on employees). Instead, it should be inspired by organizational and procedural criteria, thus shifting the paradigmatic centrality from individuals to risk management.

Concluding remarks: towards a general “law of the horse” for artificial intelligence technologies

As noted above, the intensive use of artificial intelligence in several sectors is very likely to reduce overall risks and harm compared to human action. However, it can give rise to particular risks and harm in specific cases. In this article I have examined, in particular, the risks associated with the machine-learning and deep-learning capacities of artificial intelligence devices, namely the ability of AI algorithms to act rather autonomously from their original design.

From a systemic point of view, the overall benefits of artificial intelligence outweigh the resulting costs. Therefore, technological evolution should be encouraged or, at least, not hindered.

It is recognised that “traditional” civil liability rules can provide a negative incentive towards such evolution, as they can impose the obligation to pay compensation on producers and programmers of AI devices despite no design or implementation flaws.Footnote 43 In these cases, civil liability would provide no virtuous deterrence towards utmost care, but would simply discourage technological progress. Therefore, AI creates new challenges with regard to civil liability, which must balance adequate compensation to victims with the need not to hinder technological innovation (EU Commission 2020).

No-fault compensation schemes could be an interesting and worthy regulatory strategy for that purpose, in order to allow an evolution of the matter from an issue of civil liability into one of financial management of losses. Of course, such schemes should only apply in cases where there is no evidence that producers and programmers have acted under conditions of negligence, imprudence or unskillfulness and their activity has been adequately compliant with scientifically validated standards. In other cases, traditional civil liability rules would have a valid deterrent function.

Therefore, with reference to the AI markets, the evolution toward a “no-fault” system should not abrogate the traditional civil liability paradigm rooted in deterrence. Instead, the two should coexist as independent and alternative techniques of compensation (a sort of “double track” legislation on damages), in order to exploit the advantages that each of them offers, restricting their defects through their reciprocal interaction.