Robots and civil liability (ongoing work within the EU)


by Susana Navas Navarro, Professor of Civil Law, Autonomous University of Barcelona

The broad interest shown by the European Union (EU) in regulating various aspects of robotics and artificial intelligence is by now well known.[i] One of those aspects concerns the line of thinking I am interested in: civil liability for the use and handling of robots. The first step, therefore, is to determine what the EU institutions understand by “robot”. To be considered a “robot”, an entity must meet the following conditions: i) acquisition of autonomy via sensors or by exchanging data with its environment (interconnectivity), together with the processing and analysis of that data; ii) capacity to learn from experience and through interaction with other robots; iii) a minimal physical embodiment, distinguishing it from a “virtual” robot; iv) adaptation of its behaviour and actions to its environment; v) absence of biological life. This leads to three basic categories of “smart robots”: 1) cyber-physical systems; 2) autonomous systems; 3) smart autonomous robots.[ii] Strictly speaking, therefore, a “robot” is a corporeal entity which may or may not incorporate, as an essential part of it, an artificial intelligence system (embodied AI).

The concept of “robot” falls within the definition of AI which the EU High-Level Expert Group on Artificial Intelligence, advised by computer science scholars, has formulated as follows: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)”.[iii]

Concerning the robot as a corporeal entity, civil liability issues arise from a twofold perspective: firstly, the liability of the owner of a robot for damage caused to third parties with whom there is no legal relationship; and, secondly, liability for the damage that the robot may cause to third parties due to its defects. From a legal standpoint, it should be noted that in most cases the “robot” is considered a “movable good” that, furthermore, may be classified as a “product”. We shall focus on each of these perspectives separately.

1. Civil liability of the owner of a robot for damages caused to a third party
To effectively address this issue, one should bear in mind that a robot can present different levels of autonomy. For example, in the case of autonomous vehicles, several levels have been proposed, ranging from no automation to a fully autonomous vehicle (SAE Levels 0 to 5).[iv] In the case of drones, a distinction is drawn between a drone piloted from the ground and a completely autonomous drone, passing through a grey area in which, despite being piloted by a human, the drone has the autonomy to carry out certain actions beyond the pilot’s control.[v] A similar gradation of autonomy can be applied to assistive or therapeutic robots. The robot’s level of autonomy, which makes its actions more or less predictable, should have a bearing on how civil liability for damage caused to a third party is regulated. In this respect, in my view, if the robot has no autonomy, or so little that a human may exercise control over the machine, civil liability should be attributed on the basis of fault, that is, the intentional or negligent violation of the required standard of conduct (fault-based liability); whereas, if the robot is capable of learning from the environment and making its own decisions in such a way that the human lacks control over it, civil liability should be founded on the abnormally dangerous activity that creates a foreseeable and significant risk of damage to third parties (strict liability).[vi] In the latter case, the rule would be more akin to the civil liability rule contained in the civil codes regarding damage caused by animals. Along with the natural person and the legal entity, personality would be attributed to the animal as a sentient being[vii] and to the robot as an “electronic” personality.[viii] Ultimately, the traditional legal concept of “personality” should be reviewed.

2. Liability for robot defects that cause damages to a third party

The fact that the “robot” qualifies as a “product” enables the application of Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.[ix] In any event, the Expert Group on Liability and New Technologies[x] is in the process of reviewing this regulation to see to what extent it can continue to be applied in light of the emergence of digital technologies. In addition, the Machinery Directive[xi] and the General Product Safety Directive[xii] are also under review. From the various documents published so far, it appears that the producer’s strict liability will remain the liability rule. Nevertheless, it seems to me that the type of defect should determine whether the damage caused to another is legally attributed on the basis of fault or on the basis of carrying on an abnormally dangerous activity.

In so far as corporeal robots become increasingly sophisticated, it is not unreasonable to think that the emphasis will have to be placed above all on their design, so that the defects rendering a robot “defective” will more frequently be “design defects” than “manufacturing defects”. Likewise, their level of sophistication demands greater precision in the warnings, information and instructions that the producer must provide to the customer: more information, but also information of a more technical nature, to the point that the owner of the smart machine may need specific knowledge in order to fully understand it. As information becomes more complex, defective information may, together with design defects, become a more frequent type of defect than manufacturing defects where robots and smart machines are concerned.

On this basis, if the producer is held liable for all types of defects, as the current legal provisions state, manufacturers’ investment in high technology may drop, if not stop altogether. In seeking a balance between investment in technological research and liability towards third parties, the solution should not be to grant the producer immunity for certain defects; rather, the option may be to differentiate the legal attribution of liability according to the type of defect. In the case of design and information defects, liability should be based on fault, with a possible reversal of the burden of proof depending on the gravity of the danger presented by the activity; whereas, where there is a manufacturing defect, risk would be the most appropriate criterion for attributing liability. Only with regard to manufacturing and information defects could “development risks” (“the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered”, Art. 7(e) Directive 85/374/EEC) be put forward as grounds for exonerating the producer from liability. This would not apply to “design defects”, in relation to which the “reasonable alternative design” rule would take on significance. Two tests would thus be combined: the “consumer expectation test” (for manufacturing and information defects) and the “reasonable alternative design test” (for design defects).

On the other hand, the legal concept of “producer” should be reviewed. According to Art. 3 of Directive 85/374/EEC, the producer is liable for damage caused to third parties by a defective product. If the producer is made to bear sole liability even when the defect is not strictly a manufacturing defect and, for instance, several individually identified persons (e.g., the algorithm creator, the programmer, the designer, the manufacturer of a part of the product) or a research team have been involved in the design, the industry may lose interest in investing in the manufacture of robots and other smart machines. If, in the case of robots and smart machines, one considers that a large proportion of defects may be due to design, the concept of “producer” could be broadened to encompass the “engineer-designer”, provided that the latter is not an auxiliary engaged by the former. Moreover, it is increasingly common to use open source software to create robots (open robots) and, in these cases, anybody can make modifications, introduce innovations, add features to public protocols, etc. This uncertainty as to the persons involved affects the existence and proof of “causation” between the defect and the damage caused. Therefore, even though the idea may draw criticism, I do not think that the “market share liability” rule, applicable when the damage is attributable to two or more persons (multiple tortfeasors), should be absent from this debate.[xiii]

The internet of things, as well as robots and other smart machines, poses a challenge for the rules on civil liability because it highlights the need for a coherent liability system that can answer the new situations likely to arise. It should not be forgotten that permanent communication between smart machines, systems capable of self-repair, and expert robots that make decisions at critical moments may drastically reduce the number of accidents, with the knock-on effect of fewer deaths and fewer physical injuries with permanent sequelae. This may have a major economic impact in the fields of health and insurance.[xiv]

Permanent communication between smart machines may lead to their constant adaptation to new technological and scientific developments, or to their environment, on the basis of information in a given field of knowledge or technique gathered through sensors or RFID systems (e.g., the constituent materials of piping, ducts or other infrastructure could adapt to the environment). This will inevitably affect the rules on the civil liability of the producer and of the owner of the robot or smart machine (for instance, establishing natural causation via the connection between things will have a direct impact on legal causation and on the determination of recoverable damages, among other aspects), but it will also influence legal systems as a whole.

The legal system should therefore be a flexible one that adapts to fast-paced technological change, establishing “innovation” as a principle or legal rule. Indeed, regulation through guiding principles (soft law) may prove far more adaptive to an “innovative” smart environment than regulation by means of hard rules that seek to govern every detail of a situation or conflict.[xv] Furthermore, this approach is consistent with the EU Better Regulation Agenda.[xvi]

In any event, the EU envisages publishing, in 2019, a document with guidelines on the application of Directive 85/374/EEC to the new technological and digital reality in which we are immersed. We will be looking out for it, although I greatly fear that it will fail (once again) to offer a major review of the said Directive.

 

[i] See, for instance, the European Parliament Resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics [2018/2088(INI)].

[ii] Follow-up to the European Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics [2015/2103(INL)].

[iii] AI HLEG, “A definition of AI: Main capabilities and disciplines”, 8 April 2019. Available at: https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines (date retrieved: July 2019).

[iv] As seen in the document of the Society of Automotive Engineers, “Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems” from 2014 (J3016_201401). Available at: https://www.sae.org/standards/content/j3016_201401/ (date retrieved: July 2019).

[v] Commission Delegated Regulation (EU) 2019/945 of 12 March 2019 on unmanned aircraft systems and on third-country operators of unmanned aircraft systems (OJ L 152/1, 11.6.2019); Commission Implementing Regulation (EU) 2019/947 of 24 May 2019 on the rules and procedures for the operation of unmanned aircraft (OJ L 152/45, 11.6.2019).

[vi] Marina Castells, “Drones civiles”. In: Susana Navas (ed.), Inteligencia artificial. Tecnología. Derecho (Tirant lo Blanch, Valencia, 2017), pp. 92-99.

[vii] Caroline Regad, “Génesis de una doctrina: el animal como persona natural no humana”. In: Forum of Animal Law Studies, vol. 10, issue 1, 2019, pp. 84-90.

[viii] Follow-up to the European Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics [2015/2103(INL)].

[ix] OJ L 210, 7.8.1985.

[x] http://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetail&groupID=3592. Date retrieved: July 2019.

[xi] Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157/24, 9.6.2006). At the time of writing, the above-mentioned Directive was under review [Artificial Intelligence for Europe, SWD(2018) 137 final].

[xii] Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11/4, 15.1.2002).

[xiii] Susana Navas, “Robot machines and civil liability”. In: Martin Ebers/Susana Navas (eds.), Algorithms & Law. Cambridge University Press, to be published in 2020.

[xiv] Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe [25.04.2018, SWD(2018) 137 final].

[xv] EPSC Strategic Note, “Towards an Innovation Principle Endorsed by Better Regulation”, Issue 14, 30 June 2016.

[xvi] Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, “The principles of subsidiarity and proportionality: Strengthening their role in the EU’s policymaking” [23.10.2018, COM(2018) 490 – COM(2018) 491].

Picture credits: FANUC robot… by Steve Jurvetson.

 
