By Susana Navas Navarro (Professor at the Universidad Autónoma de Barcelona)
I was surprised and struck by the fact that, after all the work carried out within the European Union (“EU”) on the subject of civil liability for Artificial Intelligence (“AI”) systems, the European Commission has opted for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, or “Proposal for a Directive”) as the instrument to regulate this issue. Moreover, it is a Directive whose content is focused exclusively on two issues: a) the disclosure of relevant information for evidence purposes or to decide whether or not to bring a lawsuit and against whom (Article 3), and b) the presumption of a causal link between the defendant’s fault and the output, or the absence of an output, that an AI system should produce (Article 4). The argument for this choice is the disparity of civil liability regimes in Europe and the difficulties that have always existed in harmonising them (see the Explanatory Memorandum accompanying the Proposal, p. 7). Choosing a Regulation, as proposed by the European Parliament or in the proposals of the White Paper on AI, would have allowed such harmonisation and could have included rules on evidence. It seems to me that behind this decision lies the invisible struggle, previously evidenced in other issues, between the Commission and the European Parliament. I believe that the risks for all involved in the use and handling of AI systems, especially high-risk ones, are compelling reasons in favour of harmonisation and strict liability.
In relation to this aspect, the Proposal for a Directive abandons the risk-based approach that had been prevailing in this area, since it assumes that the civil liability regimes of most Member States are based on fault. This is apparent, for example, in Article 3(5), which presumes the defendant’s breach of a duty of care; in Article 2(5), which defines the action for damages; and in Article 4(1), which admits the presumption of a causal link between the fault and the output produced by the AI system, or the absence or failure to produce such an output, which causes the damage. Therefore, if under a national civil liability regime the case would be subsumed under a strict liability rule (e.g., one equated to the use or operation of a machine, or the vicarious liability of the employer), these rules would not apply. National procedural systems, as regards access to evidence, are not so far from the provisions of this future Directive.
Regardless of the initial disappointment, it is important to recognise as very positive that the Proposal for a Directive refers to the AI Act regarding concepts such as “AI system”, “high-risk AI system”, “provider”, “user”, etc. To these, Article 2 of the Proposal for a Directive adds a number of definitions, such as “claim for damages”, “claimant”, “potential claimant”, “defendant” and “duty of care”.
Essentially, as stated above, the Proposal for a Directive is focused on two questions:
1. Disclosure of evidence
Member States must provide their courts with the power to order access to the required evidence – in fact, this expression refers to access to information on the specific high-risk AI system suspected of having caused the damage (Recital 16), thus access to “data” – which the defendant or potential defendant has in its possession, in order to enable the claimant or potential claimant to support his/her claim for damages. This data may consist of specific documentation and records of the system’s operations or behaviour (“black box” or “logs”).
A potential claimant – a person who has suffered damage but has not yet filed an action – must, before applying to the judge or court for access to evidence, first have requested that access from the provider, from the person who must comply with the provider’s obligations, or from the user. In addition, that request must have been refused, and the potential claimant must support the application with facts and evidence sufficient to make the intended action for damages plausible [Article 3(1)]. In the case of an action for damages already filed, the claimant will only be granted access to the data – rectius, to the evidence – provided that he has taken steps to collect the relevant information directly from the defendant [Article 3(2)]. Court-ordered access to evidence is therefore conditional on the claimant or potential claimant not having been able to obtain it directly from the defendant, either because the latter has refused to provide it or has made access extremely difficult. For this reason, the rule establishes that the claimant’s attempts must be “proportionate”.
Along with access to information, the claimant may also request that the court order measures to preserve the evidence [Article 3(3)].
In accordance with the principle of data minimisation, the Proposal for a Directive establishes that access to evidence – to information – should be limited to what is necessary to support the action. Therefore, the specific purpose for which access to such information is requested, along with the proportionality of the request, are also taken into account [Article 3(4)]. When assessing “proportionality” – an indeterminate legal concept – the courts must take into account the interests of the parties, including third parties that may be affected, for example when confidential information or information relating to national security is involved.
On the other hand, the information, especially if it is of a technical nature, may involve access to a trade secret. In this case, either at the request of a party – which is, moreover, the general rule for the measures provided for in this Proposal for a Directive – or, here, ex officio, the court may take the necessary measures to ensure confidentiality when the information is to be used in court proceedings. The fact that access to technical information overrides exclusive rights most likely stems from the principle “as open as possible, as closed as necessary” present in the European Data Strategy. From a legal perspective, this principle appears in Regulation (EU) 2021/695 of the European Parliament and of the Council of 28 April 2021 establishing Horizon Europe, the framework programme for research and innovation.
In any case, the person obliged to allow access to the evidence can appeal the court order; there are thus remedies against it. However, in case of failure to comply with the order, the defendant will be presumed to have breached a relevant duty of care [Articles 4(2) and 4(3) of the Proposal for a Directive]. What are these duties of care? They include requirements such as: if the AI system uses techniques involving the training of models with data, developing the system on the basis of training, validation and testing data that comply with the quality requirements of Article 10 of the AI Act; complying with the transparency requirements of Article 13 of the AI Act; allowing human oversight (Article 14 AI Act); achieving the robustness, accuracy and cybersecurity the system should have (Articles 15–16 AI Act); and taking, or properly implementing, the corrective measures needed to bring the AI system into compliance with the requirements of the AI Act (Articles 16 and 21 AI Act). In any case, the presumption is rebuttable.
Access to the information necessary to support an action for damages, which the defendant must provide since it is in possession of, or in a better position to access, the evidence, implies the legal recognition of a subjective right. In addition, there is the right of the potential claimant to request the information in order to decide whether to file a lawsuit, i.e., whether there is a sufficient basis for doing so and against whom (Recital 17).
This access was already referred to in the European Parliament Resolution with recommendations for a regulation on civil liability for the use of AI mentioned at the beginning of these lines. In my opinion, greater coordination is needed between this Proposal for a Directive and the Data Act, which specifically regulates data access.
2. Rebuttable presumption of a causal link in the case of fault
Judges and courts are empowered to presume the causal link between the defendant’s fault and the output produced by the AI system or, as the case may be, the failure to produce an output, provided that:
(a) the claimant has demonstrated, or the court has presumed pursuant to Article 3(5), the fault of the defendant, or of a person for whose behaviour the defendant is responsible, consisting in the non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred.
(b) it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output.
(c) the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
This rule applies whether the defendant is the provider, a subject who must comply with the provider’s obligations, or a user of the AI system. In the case of the user, however, the duties of care considered breached are the obligation to use or monitor the high-risk AI system in accordance with the instructions provided by the provider or, where appropriate, to suspend or discontinue its use, and the obligation not to expose the AI system to “input data” under his control that are not relevant to the purpose pursued by the system [Article 4(3)]. These are therefore post-commercialisation duties of care.
In the case of non-high-risk AI systems, the above-mentioned presumption only applies if the court considers it excessively difficult for the claimant to prove the causal link. If the defendant used the AI system in the course of a personal activity, the presumption only applies if the defendant materially interfered with the operation of the AI system, or if the defendant should have determined the conditions of operation of the AI system and failed to do so adequately [Articles 4(5) and 4(6) respectively].
At first sight, proof is not as easy for the claimant as it might seem. From the perspective of national civil liability law, this approach is questionable: fault is, as a matter of principle, a further requirement that the claimant has to prove, together with the harmful conduct, the damage and the causal link between them; it does not automatically result from proof of the breach of certain duties of care that are legal obligations. On the other hand, the duty of care is defined as a model of conduct, which sounds strange from the perspective of national law, in which fault and duty of care are two different concepts.
The Report from the Expert Group on Liability and New Technologies proposed, however, that the rebuttable presumption should be one of causation itself, which would actually ease the burden of proof on the victim; in fact, it suggested further cases relieving the victim of that burden. Although based on the general rule that the victim must prove the cause of the damage, the Report admits that the complexity of the technology involved may result in an asymmetry of information between the responsible operator and the victim, making proof of the causal link impossible or excessively burdensome for the victim. For this reason, it listed, in Recommendation 26, a series of circumstances that would justify the European or even the national legislator introducing a general rule reversing the burden of proof of the causal link.
Furthermore, Recommendation 24 suggests that, in addition to fault and the existence of the defect itself, causation should be presumed whenever non-compliance with safety standards is detected, the observance of which would have prevented the damage.
In short, the victim is relieved of the burden of proof, but only slightly.
European Parliament Resolution of 20 October 2020 with recommendations for a “proposal for a regulation on civil liability for the use of AI”, https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html