The future regulation on non-contractual civil liability for AI systems

By Susana Navas Navarro (Professor at the Universidad Autónoma de Barcelona)

I was surprised and struck by the fact that, after all the work carried out within the European Union (“EU”) on the subject of civil liability for Artificial Intelligence (“AI”) systems, the European Commission has opted for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, or “Proposal for a Directive”) as the instrument to regulate this issue. Moreover, it is a Directive whose content focuses exclusively on two issues: a) the disclosure of relevant information for evidentiary purposes, or to decide whether or not to bring a lawsuit and against whom (Article 3); and b) the presumption of a causal link between the defendant’s fault and the result, or absence thereof, that an AI system should produce (Article 4). The argument for this choice is the disparity of civil liability regimes in Europe and the difficulties that have always existed in harmonising them (see the Explanatory Memorandum accompanying the Proposal, p. 7). Choosing a Regulation, as proposed by the European Parliament[1] or in the proposals of the White Paper on AI, would have allowed such harmonisation and could have included rules on evidence. It seems to me that behind this decision lies the invisible struggle, previously evidenced in other matters, between the Commission and the European Parliament. I believe that the risks for all those involved in the use and handling of AI systems, especially high-risk ones, are compelling reasons in favour of harmonisation and strict liability.

In this respect, the Proposal for a Directive abandons the risk-based approach that had prevailed in this area, since it assumes that the civil liability regimes of most Member States are based on fault. This is apparent, for example, in Article 3(5), which presumes the defendant’s breach of the duty of care; in Article 2(5), which defines the action for damages; and in Article 4(1), which admits the presumption of a causal link between the defendant’s fault and the output produced by the AI system, or the failure of the system to produce an output, which causes the damage. Therefore, if under a national civil liability regime the case were subsumed under a strict liability regime (e.g., equated to the use or operation of a machine, or the vicarious liability of the employer), these rules would not apply. National procedural systems, as regards access to evidence, are not so far from the provisions of this future Directive.


What is “Reality”? An overview to the potential legal implications of Extended Reality technologies

By Manuel Protásio (PhD Candidate at the School of Law of the University of Minho)

When Virtual Reality and Augmented Reality become ubiquitous in our most mundane actions and interpersonal relations, they will certainly bring many changes to how Law addresses human behavior.

A coherent discussion of the potential cognitive effects of these technologies, and of the legal consequences those effects may trigger, is necessary to avoid possible misconceptions in courts and legal systems.

The use of these technologies may alter our cognitive functions significantly enough to be considered a type of altered state of consciousness, amenable to different legal consequences. On that premise, it is important to realise that these technologies can have both positive[1] and negative effects.[2]

These technologies are built and defined with reference to the concept of reality; the terminology is used to contrast them with actual reality. Reality, as defined by the Oxford Dictionary, is “the state of things as they actually exist, as opposed to an idealistic or notional idea of them”.[3] This reality, or the “thing in itself” as Kant proposed, has become harder to ascertain in the information age, especially in light of technologies like Augmented and Virtual Reality, since the human model of perception[4] is being exposed to more filter layers than it is used to.[5]

The ontological dimension of reality has always shifted depending on the criteria and discourse used to define it. John Locke, for instance, in his Essay Concerning Human Understanding (1690), describes reality as the knowledge that we convey on the objects that surround us. That knowledge, he states, comes from our observational experience, which in turn arises from the external interaction of our senses with “sensible objects”, followed by the internal operations of our mind.[6] He describes these internal operations as a cognitive reflective process on the perceived objects, which can be interpreted as attaching meaning, or affections, as he says, to those “sensible objects”. From this systematic process, sensible qualities are born, such as “Yellow, White, Heat, Cold, Soft, Hard, Bitter, Sweet”.
