by Francisco Andrade, Director of the Master's in Law and Informatics at UMinho, and Tiago Cabral, Master's student in EU Law at UMinho
1. The development of Artificial Intelligence (hereinafter, “AI”) brings with it a whole new set of legal questions and challenges. AI will be able to act in an autonomous manner, and electronic “agents” will, evidently, be capable of changing the legal position of natural and legal persons and even of infringing their rights. One notable example of this phenomenon is in data protection and privacy, where a data processing operation by a software agent may be biased against a specific data subject (possibly due to a faulty dataset, but also due to changes in the knowledge base of the “agent” under the influence of users or of other software agents) and thus infringe the principles of lawfulness and fairness in data protection; yet, due to difficulties in auditing the decision, one may never find out why (or even that there was a bias at all). More extreme examples can be imagined if we put software agents or robots in charge of matters such as making (or even assisting with) decisions in court, or of questions related to the military.
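To make the auditing difficulty concrete, consider a minimal Python sketch of an after-the-fact fairness check over an agent's decision log. Everything here is hypothetical (the log, the groups, the 0.2 disparity threshold): the point is simply that even this crude check presupposes the very records that, as noted above, are often unavailable.

```python
# Hypothetical after-the-fact bias audit. Assumes the agent's decisions
# were recorded at all, which is precisely what is often missing.
from collections import defaultdict

# Hypothetical decision log: (group, decision) pairs recorded by the agent.
decision_log = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(log):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in log:
        totals[group] += 1
        positives[group] += decision  # True counts as 1
    return {g: positives[g] / totals[g] for g in totals}

rates = approval_rates(decision_log)
# A simple demographic-parity style check; the 0.2 threshold is arbitrary.
gap = max(rates.values()) - min(rates.values())
print(rates, f"disparity={gap:.2f}",
      "-> possible bias" if gap > 0.2 else "-> no flag")
```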
2. One does not have to seek such extreme examples: even when entering into an agreement, a software agent may, by infringing the law, negatively affect a person's legal position.
3. Therefore, it is no surprise that one of the most pressing legal challenges brought by AI concerns liability for damages caused by AI-enabled software agents and autonomous robots. The Resolution of the European Parliament of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (hereinafter, “Parliament Resolution”) dedicates a considerable part of its analysis to this question. Parliament considers that a “series of rules, governing in particular liability, transparency and accountability” is necessary, “reflecting the intrinsically European and universal humanistic values that characterise Europe’s contribution to society”, while stressing that “those rules must not affect the process of research, innovation and development in robotics”. It points out that the current Product Liability Directive is flawed, due to issues such as the difficulty of proving causality between the damage and the machine’s action. We would add that the fact that the Directive does not apply to software agents when they can be considered a service makes it deeply unfit to regulate the next stage of AI development. In addition, Parliament correctly points out that the issues do not end at product liability: equally challenging problems exist in other types of contractual and non-contractual liability. Furthermore, transparency may, on the one hand, be difficult to ensure, since we may be confronted with true black boxes; on the other hand, even if we have access to the interactions the “agents” entered into, it is difficult to evaluate both the causal link and the intentional states of the software.
4. Considering that the issue deserved special recognition, when releasing the Communication Artificial Intelligence for Europe, the European Commission also released the accompanying Commission Staff Working Document: Liability for Emerging Digital Technologies. In this document, the Commission stresses that the importance of having an adapted liability regime is twofold. On the one hand, it gives peace of mind to consumers, who can adopt a technology knowing that if something goes wrong, they will be compensated appropriately. On the other hand, innovative companies need a stable and appropriate framework in which to develop their solutions and compete on the international stage. The limitations of the Product Liability Directive are again assessed, and the Working Document recommends a deeper analysis of whether concepts and elements currently contained within the EU and national liability frameworks should be revised in light of the new challenges arising from emerging technologies.
5. Liability is ever-present in the deliverables of the High-Level Expert Group on Artificial Intelligence (hereinafter, “HLG”). The principles discussed in the Ethics Guidelines, such as individual freedom and human dignity (including the prevention of harm), are extremely important in the context of the development of both AI and the potential liability rules around it; indeed, analysing those relations merits a separate and longer study in the future. Still, we should note that in the Policy and Investment Recommendations for Trustworthy AI, the HLG asks an interesting and cross-cutting question: whether “it is necessary or desirable to introduce traceability and reporting requirements for AI applications to facilitate their auditability, ex-ante external oversight before AI systems can be deployed, systematic monitoring and oversight by competent authorities on an ongoing basis, and the obligation for meaningful human intervention and oversight when using AI decision in specific sectors”. Such traceability and reporting requirements appear to be key if we are to prove causality for damages caused by AI (especially by software agents). They are also important (and arguably mandatory) for legal fields such as data protection.
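By way of illustration, the following Python sketch shows one deliberately simplified and entirely hypothetical way such a traceability requirement could be operationalised: every call to an automated decision function is recorded with its inputs, output and timestamp, producing the kind of audit trail that proving causality would require. The decision rule itself is a toy stand-in, not any real model.

```python
# Minimal traceability sketch: log every automated decision so that
# causality can later be examined. All names and rules are hypothetical.
import functools
import json
from datetime import datetime, timezone

audit_trail = []  # in practice: an append-only, tamper-evident store

def traceable(fn):
    """Record each call to an automated decision function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        audit_trail.append({
            "function": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return result
    return wrapper

@traceable
def credit_decision(income: float, debts: float) -> bool:
    # Hypothetical toy rule standing in for an opaque model.
    return income - debts > 10_000

credit_decision(50_000, 20_000)
print(json.dumps(audit_trail, indent=2))
```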
6. Even though there have been a number of relevant contributions from the European institutions and from legal scholars on this issue in recent times, one cannot help thinking that there is still a difficult path ahead, filled with enormous challenges.
7. Still, some valid suggestions for solving this conundrum exist. Even acknowledging the criticism directed at it, Professor Giovanni Sartor’s suggestion of “creating companies for on-line trading, which would use agents in doing their business” merits consideration. According to the author, “such agents would act in the name of a company, their will would count as the will of the company, their legally relevant location would be the company’s domicile, and creditors could sue the company for obligations contracted by those agents”[i].
8. The solution of mandatory no-fault insurance effectively solves some of the most pressing issues arising from liability around AI, including the allocation of the burden of proof. However, availability and price may be a problem, and it may not be adaptable to every type of AI. One option would be to make insurance mandatory only for specific types of AI-enabled software agents and robots, complementing this solution with a no-fault compensation fund. Moreover, even if we were to accept Giovanni Sartor’s suggestion, the issues of the causal link and of the intentional states of the software remain of the utmost relevance for determining liability and the amount of any compensation.
9. Of course, neither of the above solutions guarantees compensation for all types of damage caused by AI: no-fault insurance and compensation funds will likely be unable to cover every type of damage (moral damages come to mind). In such cases, we would need to fall back on the general liability regimes, which is why it is key to keep them updated and adapted to the current world. In this respect, “borrowing” solid concepts from computer science may help us understand the new questions arising before us.
10. At the European level, amending the Product Liability Directive[ii] is key. A legal instrument from 1985 (lightly amended in 1999) cannot be (and indeed is not) adapted to the current state of our technology. The need is even more pressing if we take into account that the Sale of Goods and Digital Services Directives provide much wider coverage in their scope and, therefore, certain elements within the value chain, such as traders/sellers, may be held liable while producers are not. In fact, these two legal instruments already provide a suitable legal basis on which to build a new and improved Product Liability Directive. But further legal study of the subject, considering the real interactions entered into by the software, remains crucial.
[i] Giovanni Sartor, “Agents in Cyberlaw”, in Giovanni Sartor (ed.), The Law of Electronic Agents: Selected Revised Papers, Proceedings of the Workshop on the Law of Electronic Agents (LEA 2002), 2002.
[ii] Notwithstanding the future Commission Guidance on the Product Liability Directive, which we find insufficient.