Regulating liability for AI within the EU: Short introductory considerations


 by Francisco Andrade, Director of the Master's in Law and Informatics at UMinho
 and Tiago Cabral, Master's student in EU Law at UMinho

1. The development of Artificial Intelligence (hereinafter, “AI”) brings with it a whole new set of legal questions and challenges. AI will be able to act in an autonomous manner, and electronic “agents” will evidently be capable of changing the legal position of natural and legal persons and even of infringing their rights. One notable example of this phenomenon is data protection and privacy, where a data processing operation by a software agent may be biased against a specific data subject (perhaps due to a faulty dataset, but also due to changes in the knowledge base of the “agent” under the influence of users or of other software agents) and thus infringe the principles of lawfulness and fairness in data protection; yet, due to difficulties in auditing the decision, one may never discover why (or even that there was a bias at all). More extreme examples can be imagined if we put software agents or robots in charge of matters such as making (or even assisting in) decisions in court, or questions related to the military.

2. One does not have to seek such extreme examples: even when entering into an agreement, a software agent may, by infringing the law, negatively affect the legal position of a person.

Robots and civil liability (ongoing work within the EU)


 by Susana Navas Navarro, Professor of Civil Law, Autonomous University of Barcelona

The broad interest shown by the European Union (EU) in regulating different aspects of robotics and artificial intelligence is nowadays very well known.[i] One of those aspects concerns the line of thinking I am interested in: civil liability for the use and handling of robots. Thus, in the first instance, it should be determined what the EU institutions understand by “robot”. To be considered a “robot”, an entity should meet the following conditions: i) acquisition of autonomy via sensors or by exchanging data with its environment (interconnectivity), as well as the processing and analysis of this data; ii) capacity to learn from experience and through interaction with other robots; iii) a minimal physical support to distinguish it from a “virtual” robot; iv) adaptation of its behaviour and actions to the environment; v) absence of biological life. This leads to three basic categories of “smart robots”: 1) cyber-physical systems; 2) autonomous systems; 3) smart autonomous robots.[ii] Therefore, strictly speaking, a “robot” is a corporeal entity which, as an essential part of it, may or may not incorporate a system of artificial intelligence (embodied AI).

The concept of “robot” falls within the definition of AI, which is specified, on the basis of what scholars of computer science have advised, as: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. 
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)”.[iii]

Concerning the robot as a corporeal entity, issues of civil liability arise from a twofold perspective: firstly, in relation to the owner of a robot, in the case of damage caused to third parties when there is no legal relationship between them; and, secondly, regarding the damage that the robot may cause to third parties due to its defects. From a legal standpoint, it should be noted that in most cases the “robot” is considered a “movable good” that, furthermore, may be classified as a “product”. We shall focus on each of these perspectives separately.

Cyber-regulatory theories: between retrospection and ideologies


by Luana Lund, specialist in telecommunications regulation (ANATEL, Brazil)

This article presents a brief history of some of the main theories about internet regulation to identify ideological and historical relationships among them.

In the 1980s, the open-source movement advocated the development and common use of communication networks, which strengthened the belief of the technical community in an inclusive and democratic global network [1]. This context led to the defense of full freedom on the internet and generated debates about the regulation of cyberspace in the 1990s. In the legal field, the Cyberlaw movement represents the beginning of such discussions [2]. Some of these theorists believed that cyberspace was configured as an independent environment, beyond the reach of State sovereignty. At that time, John Perry Barlow was the first to use the term “cyberspace” for the “global electronic social space”. In 1996, he published “A Declaration of the Independence of Cyberspace”, claiming cyberspace as a place where “Governments of the Industrial World […] have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear […] Cyberspace does not lie within your borders” [3].

A short introduction to accountability in machine-learning algorithms under the GDPR


 by Andreia Oliveira, Master's in EU Law (UMinho)
 and Fernando Silva, Consulting coordinator at the Portuguese National Data Protection Commission

Artificial Intelligence (AI) can be defined as computer systems designed to perform a wide range of tasks that are “normally considered to require knowledge, perception, reasoning, learning, understanding and similar cognitive abilities” [1]. Intelligent machines capable of imitating human actions, performance and activities seem to be the most common illustration of AI. One needs to recognise that AI is a convoluted field – hence, machine learning, big data and other terms such as automation must hold a seat when discussing AI. Machine learning, for example, is defined as the ability of computer systems to improve their performance without explicitly programmed instructions: a system will be able to learn independently, without human intervention [2]. To do this, machine learning develops new algorithms, different from the ones previously programmed, and incorporates the new inputs it has acquired during previous interactions.
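The idea of a system improving its performance without explicitly programmed instructions can be sketched in a few lines of code. The example below is purely illustrative (the data and the function name are hypothetical): the relationship y = 2x + 1 is never written into the program; the parameters are inferred from examples alone.

```python
# A minimal, illustrative sketch of "learning without explicit instructions":
# the rule y = 2x + 1 never appears in the code; it is inferred from examples.

def learn_linear_rule(examples, lr=0.01, epochs=2000):
    """Fit y ~ w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = (w * x + b) - y  # how wrong the current rule is
            w -= lr * error * x      # nudge the weight toward the data
            b -= lr * error          # nudge the bias toward the data
    return w, b

# Training examples that implicitly follow y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = learn_linear_rule(data)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Real machine-learning systems are vastly more complex, but the principle is the same: behaviour is shaped by data rather than by hand-written rules, which is precisely why auditing how a given output was reached can be difficult.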

The capabilities of machine learning may put privacy and data protection in jeopardy. Ascertaining liability therefore becomes inevitable, and implies considering, inter alia, all plausible actors that may be held to account.

Under the General Data Protection Regulation (GDPR), the principle of accountability is intrinsically linked to the principle of transparency. Transparency empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data. Accountability requires transparency of processing operations; however, transparency does not in itself constitute accountability [3]. Rather, transparency acts as an aid to accountability – for example, by helping to remove barriers such as opacity.

e-Justice paradigm and Artificial Intelligence (AI): where does effective judicial protection stand?


 by Joana Abreu, Editor

2019 marks the beginning of a new era for e-Justice.

Looking at both the Council’s e-Justice Strategy (2019/C 96/04) and Action Plan (2019/C 96/05) for 2019 to 2023, we are able to understand how engaged this European institution is in raising awareness of Artificial Intelligence in the field of justice. Furthermore, the European Commission also presented a report on the previous Action Plan (Evaluation study on the outcome of the e-Justice Action Plan 2014-2018 and the way forward – Final Report – DT4EU), in which it stressed the need to invest in artificial intelligence mechanisms in the e-Justice field.

In fact, when the European Commission questioned stakeholders on the possibility of using Artificial Intelligence technologies in the domain of justice, 41% understood that it should be used and another 41% understood that its potential could be explored.

Taking those numbers into consideration, the Council also established the need to understand AI’s influence on and potential for e-Justice, addressing it under the topic “Evolutivity” and relating it to future perspectives.

Editorial of February 2019


 by Felipe Debasa, PhD, Rey Juan Carlos University, Madrid


Social challenges of the IV Industrial Revolution. The Law, from discipline to tool? Reflections on the European Union

After World War II came a change of historical era: the Present World or Present Time, as historians call it[i], or the Anthropocene, as geologists name it. An era with new challenges, and also challenges built on the legacy of the millions of dead of the world wars, totalitarianism, and nationalism.

“It is not a time for words, but a bold and constructive act”. With this phrase, Robert Schuman opened the press conference of May 9th, 1950, at which he presented the document that would give rise to the current European Union. We Europeans are about to celebrate the 70th anniversary of that date, which has allowed us to enjoy many things in peace and freedom.

With the change of the millennium came another new period, dubbed the IV Industrial Revolution, Industry 4.0 or the Era of Technology. “The traditional world is crumbling, while another is emerging; and we are in the middle, some of us without knowing what to do”[ii].

In 2016, I directed a summer course at the Menéndez Pelayo International University of Santander[iii] on the Future of Employment, which was inaugurated by the Spanish Minister for the sector, and in which we began to warn of the social challenges and of the tremendous revolution coming upon us. We analysed, among other things, the jobs of the future, the digital transformation of companies, the new forms of teleworking, and the role of women in this revolution; and we warned of a neologism that was about to appear, probably from regulated sectors without competition. And yes, that moment seems to have arrived.

Editorial of July 2018


 by Alessandra Silveira, Editor 
 and Sophie Perez Fernandes, Junior Editor


Artificial intelligence and fundamental rights: the problem of regulation aimed at avoiding algorithmic discrimination

The scandal involving Facebook and Cambridge Analytica (a private company for data analysis and strategic communication) raises, among others, the problem of regulating learning algorithms. And the problem lies above all in the fact that there is no necessary connection between intelligence and free will. Unlike human beings, algorithms do not have a will of their own, they serve the goals that are set for them. Though spectacular, artificial intelligence bears little resemblance to the mental processes of humans – as the Portuguese neuroscientist António Damásio, Professor at the University of Southern California, brilliantly explains[i]. To this extent, not all impacts of artificial intelligence are easily regulated or translated into legislation – and so traditional regulation might not work[ii].

In a study dedicated to explaining why data (including personal data) are at the basis of the Machine-Learning Revolution – and to what extent artificial intelligence is reconfiguring science, business, and politics – another Portuguese scientist, Pedro Domingos, Professor in the Department of Computer Science and Engineering at the University of Washington, explains that the problem that defines the digital age is the following: how do we find each other? This applies to both producers and consumers – who need to establish a connection before any transaction happens –, but also to anyone looking for a job or a romantic partner. Computers allowed the existence of the Internet – and the Internet created a flood of data and the problem of limitless choice. Now, machine learning uses this infinity of data to help solve the limitless choice problem. Netflix may have 100,000 DVD titles in stock, but if customers cannot find the ones they like, they will end up choosing the hits; so, Netflix uses a learning algorithm that identifies customer tastes and recommends DVDs. Simple as that, explains the Author[iii].
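The Netflix illustration can be made concrete with a toy sketch. The code below uses hypothetical data and deliberately simplified logic (a real recommender system is far more sophisticated): it infers a user's taste from past ratings and suggests an unseen title liked by the most similar user, rather than being told each user's preferences.

```python
# A toy sketch of learning-based recommendation: taste is inferred from
# ratings data, not explicitly programmed. Users and ratings are invented.
from math import sqrt

ratings = {  # user -> {title: rating from 1 to 5}
    "ana": {"Matrix": 5, "Amelie": 1, "Blade Runner": 4},
    "rui": {"Matrix": 4, "Blade Runner": 5, "Alien": 4},
    "eva": {"Amelie": 5, "Matrix": 1},
}

def similarity(a, b):
    """Cosine similarity over the titles both users rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][t] * ratings[b][t] for t in common)
    na = sqrt(sum(ratings[a][t] ** 2 for t in common))
    nb = sqrt(sum(ratings[b][t] ** 2 for t in common))
    return dot / (na * nb)

def recommend(user):
    """Suggest the unseen title most liked by the most similar user."""
    peer = max((u for u in ratings if u != user),
               key=lambda u: similarity(user, u))
    unseen = {t: r for t, r in ratings[peer].items()
              if t not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ana"))  # suggests a title "ana" has not yet rated
```

The point of the sketch is the one the Author makes: the algorithm solves the limitless-choice problem by extracting a notion of taste from the flood of data itself.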