The future regulation on non-contractual civil liability for AI systems

By Susana Navas Navarro (Professor at the Universidad Autónoma de Barcelona)

I was surprised and struck by the fact that, after all the work carried out within the European Union (“EU”) on the subject of civil liability for Artificial Intelligence (“AI”) systems, the European Commission has opted for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, or “Proposal for a Directive”) as the instrument to regulate this issue. Moreover, it is a Directive whose content focuses exclusively on two issues: a) the disclosure of information relevant for evidentiary purposes or for deciding whether or not to bring a lawsuit, and against whom (Article 3); and b) the presumption of a causal link between the defendant’s fault and the result, or absence thereof, that an AI system should produce (Article 4). The argument for this is the disparity of civil liability regimes in Europe and the difficulties that have always existed in harmonising them (see the Explanatory Memorandum accompanying the Proposal, p. 7). Choosing a Regulation, as proposed by the European Parliament[1] or in the proposals of the White Paper on AI, would have allowed such harmonisation and could have included rules on evidence. It seems to me that behind this decision lies the invisible struggle, previously evidenced in other matters, between the Commission and the European Parliament. I believe that the risks for all involved in the use and handling of AI systems, especially high-risk ones, are compelling reasons in favour of harmonisation and strict liability.

In this respect, the Proposal for a Directive abandons the risk-based approach that had been prevailing in this area, since it assumes that the civil liability regimes of most Member States are based on fault. This is apparent, for example, in Article 3(5), which presumes the defendant’s breach of a duty of care; in Article 2(5), which defines the action for damages; and in Article 4(1), which admits the presumption of a causal link between the defendant’s fault and the result produced by the AI system, or its absence or failure to produce such a result, which causes the damage. Therefore, if under the national civil liability regime the case were subsumed under a strict liability regime (e.g., equated to the use or operation of a machine, or to the vicarious liability of the employer), these rules would not apply. National procedural systems, as regards access to evidence, are not so far from the provisions of this future Directive.

Continue reading “The future regulation on non-contractual civil liability for AI systems”

Editorial of December 2021

By Alessandra Silveira (Editor)

AI systems and automated inferences – on the protection of inferred personal data

On 23 November 2021 the European Commission published the consultation results on a set of digital rights and principles to promote and uphold EU values in the digital space – a consultation which ran between 12 May and 6 September 2021.[1] This public consultation on digital principles is a key deliverable of the preparatory work for the upcoming “Declaration on digital rights and principles for the Digital Decade”, which the European Commission will announce by the end of 2021. The consultation invited all interested people to share their views on the formulation of digital principles in 9 areas: i) universal access to internet services; ii) universal digital education and skills for people to take an active part in society and in democratic processes; iii) accessible and human-centric digital public services and administration; iv) access to digital health services; v) an open, secure and trusted online environment; vi) protecting and empowering children and young people in the online space; vii) a European digital identity; viii) access to digital devices, systems and services that respect the climate and environment; ix) ethical principles for human-centric algorithms.

Continue reading “Editorial of December 2021”

Editorial of June 2021

By Tiago Sérgio Cabral (Managing Editor)

Data Governance and the AI Regulation: Interplay between the GDPR and the proposal for an AI Act

It is hardly surprising that the European Commission’s recent proposal for a Regulation on a European Approach for Artificial Intelligence (hereinafter the “proposal for an AI Act”) is heavily inspired by the GDPR, from taking note of the GDPR’s success in establishing worldwide standards to learning from its shortcomings, for example by suppressing the one-stop-shop mechanism (arguably responsible for some of its enforcement woes).[1]

The proposal for an AI Act should not be considered a GDPR for AI for one singular reason: there is already a GDPR for AI, and it is called the GDPR. The scope and aims of the proposal are different, but there is certainly a high degree of influence, and the interplay between the two Regulations, if the AI Act is approved, will undoubtedly be interesting. In this editorial we will address one particular aspect where the interplay between the GDPR and the AI Act could be especially relevant: data governance and data set management.

Before going specifically into this subject, it is important to note that the AI Act’s proposed fines have a higher ceiling than the GDPR’s: up to 30,000,000 euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year (Article 71(3) of the proposal for an AI Act). We should note, nonetheless, that this specific value is applicable to a restricted number of infringements, namely:

Continue reading “Editorial of June 2021”

Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks

by Piedade Costa de Oliveira (Former official of the European Commission - Legal Service)
Disclaimer: The opinions expressed are purely personal and are the exclusive responsibility of the author. They do not reflect any position of the European Commission.

The use of algorithms for automated decision-making, commonly referred to as Artificial Intelligence (AI), is becoming a reality in many fields of activity both in the private and public sectors.

It is common ground that AI raises considerable challenges not only for the area in which it is operated but also for society as a whole. As pointed out by the European Commission in its White Paper on AI[i], AI entails a number of potential risks, such as opaque decision-making, gender-based bias, other kinds of discrimination, or intrusion upon privacy.

In order to mitigate such risks, Article 22 of the GDPR confers on data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them[ii].

Continue reading “Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks”

Universal basic income and artificial intelligence

 by Charize Hortmann, Master in Human Rights, UMinho

Currently the world’s economy has reached an unprecedented juncture. If, on the one hand, never has so much wealth been accumulated[i], on the other it is undeniable that inequality between the richest and the poorest increases by the minute[ii]. At the same time, we are getting close to fulfilling the greatest threat brought about by the first Industrial Revolution: technological unemployment[iii], due to the advance and improvement of certain technologies, such as Artificial Intelligence (AI) and the Internet of Things (IoT).

Considering this scenario, much thought has been given to solutions that seek to curb the progress of social inequalities and offer an alternative to the prospect of massive unemployment worldwide.

Continue reading “Universal basic income and artificial intelligence”

Is the European Union’s legal framework ready for AI-enabled drone deliveries? A preliminary short assessment – from the Commission Implementing Regulation 2019/947/EU to data protection

 by Marília Frias, Senior Associate at Vieira de Almeida & Associados
 and Tiago Cabral, Master in EU Law, University of Minho

1. As we write this short essay, a significant percentage of the world population is at home, in isolation, as a preventive measure to stop the spread of the COVID-19 pandemic. Of course, for isolation to be effective, people should leave their houses only when strictly necessary, for instance to shop for essential goods, and preventive measures frequently include closure orders directed at all non-essential businesses.

2. Unfortunately, the European Union (hereinafter, “EU”) is one of the epicentres of the pandemic. As a result, some European citizens are turning to e-commerce to buy goods not available in the brick-and-mortar shops that are still open. Meanwhile, others opt to bring their shopping into the online realm simply to reduce the risk of contact and infection. Currently, sustaining the market as best as possible under these conditions to avoid a (stronger) economic crisis should be one of the key priorities. Furthermore, with a growing number of people working remotely, it is also vital to guarantee that the necessary supplies can arrive in time and with no health-related concerns attached.

3. Nowadays, most delivery services work based on humans who physically get the product from point A and deliver it to point B. The system is more or less the same, whether the reader orders a package from China or delivery from the pizza place 5 minutes away from the reader’s house. Obviously, more people will be involved in the delivery chain in our first example, but it is still, at its core, a string of people getting the order from point A to point B. This is a challenge for those working in the delivery and transportation businesses who have to put their health on the line to ensure swift delivery of products to the ones who are at home.
Continue reading “Is the European Union’s legal framework ready for AI-enabled drone deliveries? A preliminary short assessment – from the Commission Implementing Regulation 2019/947/EU to data protection”

Artificial intelligence and PSI Directive (EU) – open data and the re-use of public sector information before new digital demands

 by Joana Abreu, Editor and Jean Monnet Module eUjust Coordinator

In Ursula von der Leyen’s speech entitled “A Union that strives for more”, one of the priorities of the now President of the European Commission was to establish “a Europe fit for the digital age”. In this sense, von der Leyen’s aspiration was to grasp the opportunities of the digital age within safe and ethical boundaries, particularly those deriving from artificial intelligence, as “[d]igital technologies […] are transforming the world at an unprecedented speed”. Accordingly, the President of the European Commission stated that “[i]n my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence”. On 1 December 2019, the European Commission took office, led by President Ursula von der Leyen. As that time lapse passes, there is a need to understand how a Europe fit for the digital age is taking shape. That is to say, has the European Union already made efforts to meet that digital age?

In fact, recalling the Digital4EU Stakeholder Forum, held in Brussels on 25 February 2016, the Digital Single Market was conceived from its inception as a primary public interest in action. Concerning digital public services, it was highlighted that some of them were not as transparent as they should be and that “Governments need[ed] to look at how to re-use the information already available […] and open up the data they h[ad], while adapting to current trends and making use of public services easy and simple”. To that end, the forum established that “Member States should implement the once only principle: once only obligation, re-use of data, making the best use of key enablers and thinking cross-border services from inception”.
Continue reading “Artificial intelligence and PSI Directive (EU) – open data and the re-use of public sector information before new digital demands”

Regulating liability for AI within the EU: Short introductory considerations

 by Francisco Andrade, Director of the Master's in Law and Informatics at UMinho
 and Tiago Cabral, Master's student in EU Law at UMinho

1. The development of Artificial Intelligence (hereinafter, “AI”) brings with it a whole new set of legal questions and challenges. AI will be able to act in an autonomous manner, and electronic “agents” will be, evidently, capable of creating changes in the legal position of natural and legal persons and even of infringing their rights. One notable example of this phenomenon will be in data protection and privacy, where a data processing operation by a software agent may be biased against a specific data subject (possibly due to a faulty dataset, but also due to changes in the knowledge database of the “agent” under the influence of users or of other software agents) and thus infringe the principles of lawfulness and fairness in data protection; yet, due to difficulties in auditing the decision, one may never discover why (or even that there was a bias). More extreme examples can be conceived if we put software agents or robots in charge of matters such as making (or even assisting with) decisions in court, or questions related to the military.

2. One does not have to seek such extreme examples; in fact, even when entering into an agreement, a software agent may, by infringing the law, negatively affect the legal position of a person.
Continue reading “Regulating liability for AI within the EU: Short introductory considerations”

Robots and civil liability (ongoing work within the EU)

 by Susana Navas Navarro, Professor of Civil Law, Autonomous University of Barcelona

The broad interest shown by the European Union (EU) in the regulation of different aspects of robotics and artificial intelligence is nowadays very well known.[i] One of those aspects concerns the line of thinking I am interested in: civil liability for the use and handling of robots. Thus, in the first instance, it should be determined what the EU institutions understand by “robot”. To be considered a “robot”, an entity must meet the following conditions: i) acquisition of autonomy via sensors or by exchanging data with its environment (interconnectivity), as well as the processing and analysis of this data; ii) capacity to learn from experience and also through interaction with other robots; iii) a minimal physical medium distinguishing it from a “virtual” robot; iv) adaptation of its behaviour and actions to the environment; v) absence of biological life. This leads to three basic categories of “smart robots”: 1) cyber-physical systems; 2) autonomous systems; 3) smart autonomous robots.[ii] Therefore, strictly speaking, a “robot” is a corporeal entity which may or may not incorporate, as an essential part of it, a system of artificial intelligence (embodied AI).

The concept of “robot” falls within the definition of AI, which, on the basis of the advice of computer science scholars, is specified as follows: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)”.[iii]

Concerning the robot as a corporeal entity, issues of civil liability arise from a twofold perspective: firstly, in relation to the owner of a robot in the event of damage caused to third parties where there is no legal relationship between them; and, secondly, regarding the damage that the robot may cause to third parties due to its defects. From a legal standpoint, it should be noted that in most cases the “robot” is considered a “movable good” which, furthermore, may be classified as a “product”. We shall focus on each of these perspectives separately.
Continue reading “Robots and civil liability (ongoing work within the EU)”

Cyber-regulatory theories: between retrospection and ideologies

by Luana Lund, specialist in telecommunications regulation (ANATEL, Brazil)

This article presents a brief history of some of the main theories about internet regulation to identify ideological and historical relationships among them.

In the 1980s, the open-source movement advocated the development and common use of communication networks, which strengthened the technical community’s belief in an inclusive and democratic global network [1]. This context led to the defense of full freedom on the internet and generated debates about the regulation of cyberspace in the 1990s. In the juridical field, the Cyberlaw movement represents the beginning of such discussions [2]. Some of these theorists believed in the configuration of cyberspace as an independent environment, beyond the reach of State sovereignty. At that time, John Perry Barlow was the first to use the term “cyberspace” for the “global electronic social space”. In 1996, he published “A Declaration of the Independence of Cyberspace”, claiming cyberspace as a place where “Governments of the Industrial World […] have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear […] Cyberspace does not lie within your borders” [3].
Continue reading “Cyber-regulatory theories: between retrospection and ideologies”