Robots and civil liability (ongoing work within the EU)


 by Susana Navas Navarro, Professor of Civil Law, Autonomous University of Barcelona

The broad interest shown by the European Union (EU) in the regulation of different aspects of robotics and artificial intelligence is by now well known.[i] One of those aspects concerns the line of thinking I am interested in: civil liability for the use and handling of robots. Thus, in the first instance, it should be determined what the EU institutions understand by “robot”. To be considered a “robot”, an entity should meet the following conditions: i) acquisition of autonomy via sensors or by exchanging data with its environment (interconnectivity), as well as the processing and analysis of that data; ii) the capacity to learn from experience and through interaction with other robots; iii) a minimal physical support, distinguishing it from a “virtual” robot; iv) adaptation of its behaviour and actions to the environment; v) absence of biological life. This leads to three basic categories of “smart robots”: 1) cyber-physical systems; 2) autonomous systems; 3) smart autonomous robots.[ii] Therefore, strictly speaking, a “robot” is a corporeal entity which, as an essential part of it, may or may not incorporate a system of artificial intelligence (embodied AI).

The concept of “robot” falls within the definition of AI, which is specified, on the basis of what scholars of computer science have advised, as: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. 
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)”.[iii]
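The definition quoted above describes a cycle of perceiving an environment, reasoning over the acquired data, and choosing the action that best serves a given goal. As a purely illustrative sketch (the scenario and all names are hypothetical, not drawn from any EU text), that cycle can be written as a minimal perceive–decide–act loop:

```python
# Minimal sense-decide-act loop, illustrating the quoted definition of an AI
# system with a hypothetical thermostat-like agent pursuing a target temperature.

def perceive(environment):
    """Acquire data from the environment (here: read the current temperature)."""
    return environment["temperature"]

def decide(observation, goal):
    """Reason over the observation and choose the action serving the goal."""
    if observation < goal - 0.5:
        return "heat"
    if observation > goal + 0.5:
        return "cool"
    return "idle"

def act(environment, action):
    """Affect the (physical or digital) environment via the chosen action."""
    delta = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    environment["temperature"] += delta

environment = {"temperature": 17.0}
goal = 21.0
for _ in range(6):  # run a few perceive-decide-act cycles
    act(environment, decide(perceive(environment), goal))

print(environment["temperature"])  # the agent moves the environment towards 21.0
```

An AI system in the sense of the definition would replace the hand-written `decide` rule with a learned model, and could adapt that model by analysing how its previous actions affected the environment.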

Concerning the robot as a corporeal entity, civil liability issues arise from a twofold perspective: first, in relation to the owner of a robot that causes damage to third parties with whom there is no legal relationship; and, second, regarding the damage that the robot may cause to third parties due to its defects. From a legal standpoint, it should be noted that in most cases the “robot” is considered a “movable good” that, furthermore, may be classified as a “product”. We shall focus on each of these perspectives separately.

A short introduction to accountability in machine-learning algorithms under the GDPR


 by Andreia Oliveira, Master in EU Law (UMINHO)
 and Fernando Silva, Consulting coordinator - Portuguese Data  Protection National Commission

Artificial Intelligence (AI) can be defined as computer systems designed to solve a wide range of activities that are “normally considered to require knowledge, perception, reasoning, learning, understanding and similar cognitive abilities” [1]. Intelligent machines capable of imitating human actions, performances and activities seem to be the most common illustration of AI. One needs to recognise that AI is a complex field – hence machine learning, big data and other terms, such as automation, must hold a seat when discussing AI. Machine learning, for example, is defined as the ability of computer systems to improve their performance without explicitly programmed instructions: a system is able to learn independently, without human intervention [2]. To do this, a machine-learning system develops new algorithms, different from the ones previously programmed, and incorporates the new inputs it has acquired during previous interactions.
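The contrast drawn here – improving performance from data rather than from explicitly programmed rules – can be illustrated with a toy example (a sketch under simplified assumptions, not any regulator's definition): a program that is never told the rule behind its training data, yet estimates it by gradient descent.

```python
# Toy machine-learning sketch: the program is never told the rule y = 3x;
# it estimates the weight from example data, improving with each pass.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # inputs x and targets y (y = 3x)

weight = 0.0          # initial guess; performance improves as it is updated
learning_rate = 0.05

for _ in range(200):  # repeated exposure to the same examples
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= learning_rate * error * x  # gradient step on squared error

print(round(weight, 2))  # close to 3.0, learned from the data alone
```

Nothing in the loop encodes the relationship between input and output; the rule emerges from the interaction between the update step and the examples – which is precisely what makes the resulting behaviour harder to audit than a hand-written rule, and why accountability becomes a live question.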

The capabilities of machine learning may put privacy and data protection in jeopardy. Ascertaining liability is therefore inevitable, and implies considering, inter alia, all plausible actors that can be held to account.

Under the General Data Protection Regulation (GDPR), the principle of accountability is intrinsically linked to the principle of transparency. Transparency empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data. Accountability requires transparency of processing operations; however, transparency alone does not amount to accountability [3]. Rather, transparency acts as an aid to accountability – helping, for instance, to overcome barriers such as opacity.

European Ethical Charter on the use of artificial intelligence in judicial systems and their environment: what are the implications of this measure?


 by Amanda Espiñeira, Master Student at University of Brasília

Artificial intelligence has become a topic of great interest for the advancement of the information society and automation. Across various fields – from art and gastronomy to the world of games – the mechanisms that involve AI expand human creativity and capabilities, and they are particularly important when it comes to judicial systems. A field that long remained closed to innovation and digital transformation is now opening up, allowing greater speed and transparency in the decisions of the legal world. In other words, AI promises to fill a gap in an area that still has rigid processes, such as the registry offices, which are almost synonymous with bureaucracy.

Beyond the importance of the theme and its efficiency, debating the ethical aspects of this field is extremely relevant, because AI can extract insights we could never reach using traditional data mining techniques. This is even more important in the context of recent data protection regulation, especially the GDPR (General Data Protection Regulation).

Thus, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has adopted the first European text setting out ethical principles relating to the use of artificial intelligence (AI) in judicial systems, published on December 4, 2018[1].