Robots and civil liability (ongoing work within the EU)


by Susana Navas Navarro, Professor of Civil Law, Autonomous University of Barcelona

The broad interest shown by the European Union (EU) in the regulation of different aspects of robotics and artificial intelligence is by now well known.[i] One of those aspects concerns the line of thinking that interests me here: civil liability for the use and handling of robots. The first step, then, is to determine what the EU institutions understand by “robot”. To be considered a “robot”, an entity must meet the following conditions: i) acquisition of autonomy via sensors or by exchanging data with its environment (interconnectivity), together with the processing and analysis of that data; ii) capacity to learn from experience and through interaction with other robots; iii) at least a minimal physical form, distinguishing it from a “virtual” robot; iv) adaptation of its behaviour and actions to the environment; v) absence of biological life. This leads to three basic categories of “smart robots”: 1) cyber-physical systems; 2) autonomous systems; 3) smart autonomous robots.[ii] Strictly speaking, therefore, a “robot” is a corporeal entity which may or may not incorporate, as an essential part, an artificial intelligence system (embodied AI).

The concept of “robot” thus falls within the definition of AI which, drawing on the advice of computer science scholars, is specified as: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)”.[iii]
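
The elements of this definition – perceiving an environment through data, deciding the best action to achieve a given goal, and adapting behaviour from feedback – can be made concrete with a minimal sketch. The following Python fragment is purely illustrative and describes no real system; the agent, its actions and the feedback values are all invented for this post.

# Purely illustrative sketch of the perceive-decide-adapt loop in the
# quoted definition. Every name and number here is invented.

class ToyAgent:
    """Keeps a running score per action; nothing is pre-programmed."""
    def __init__(self, actions):
        self.values = {a: None for a in actions}  # learned, not hand-coded

    def decide(self):
        # Gather experience on untried actions first; then pick the
        # best-valued one ("deciding the best action to achieve the goal").
        untried = [a for a, v in self.values.items() if v is None]
        if untried:
            return untried[0]
        return max(self.values, key=self.values.get)

    def adapt(self, action, feedback):
        # "Adapt their behaviour by analysing how the environment is
        # affected by their previous actions."
        v = self.values[action]
        self.values[action] = feedback if v is None else v + 0.5 * (feedback - v)

agent = ToyAgent(["left", "right"])
for _ in range(10):
    action = agent.decide()
    feedback = 1.0 if action == "right" else 0.0  # stand-in environment
    agent.adapt(action, feedback)

print(agent.decide())  # prints "right": a preference learned from feedback

After a few rounds of feedback the agent settles on the action its environment rewarded, although no such preference was ever written into the program – which is precisely what distinguishes systems falling under the definition above from classical, fully pre-specified software.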

Concerning the robot as a corporeal entity, civil liability issues arise from a twofold perspective: first, in relation to the owner of a robot, for damage caused to third parties with whom the owner has no legal relationship; and, second, regarding the damage that the robot may cause to third parties due to its defects. From a legal standpoint, it should be noted that in most cases the “robot” is considered a “movable good” which may, furthermore, be classified as a “product”. We shall address each of these perspectives separately.

A short introduction to accountability in machine-learning algorithms under the GDPR


by Andreia Oliveira, Master in EU Law (UMINHO)
and Fernando Silva, Consulting Coordinator, Portuguese Data Protection National Commission

Artificial Intelligence (AI) can be defined as computer systems designed to perform a wide range of tasks that are “normally considered to require knowledge, perception, reasoning, learning, understanding and similar cognitive abilities” [1]. Intelligent machines capable of imitating human actions, performance and activities are perhaps the most common illustration of AI. One needs to recognise, however, that AI is convoluted – machine learning, big data and related terms such as automation must all hold a seat when discussing it. Machine learning, for example, is defined as the ability of computer systems to improve their performance without explicitly programmed instructions: a system is able to learn independently, without human intervention [2]. To do this, a machine-learning system derives new decision rules from the data it acquires during previous interactions, rather than relying only on the rules that were programmed in advance – as the sketch below illustrates.
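
A minimal sketch of this idea in Python (purely illustrative: the toy data, the labels and the train_perceptron helper are all invented for this post) shows a classifier whose decision rule is never written by a programmer but emerges from the examples the system is shown.

# Illustrative only: a perceptron "learns" a decision rule from labelled
# examples instead of following explicitly programmed instructions.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights from data; no task-specific rule is hand-coded."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred
            # Learning step: performance improves by updating parameters
            # based on previous interactions with the data.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Invented example: points labelled 1 when x + y > 1, else 0.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.9), (0.3, 0.1)]
labels = [0, 1, 1, 0]
w, b = train_perceptron(data, labels)
print(w, b)  # the learned weights encode a rule nobody wrote explicitly

The point for the legal discussion is that the rule applied to new cases exists only as learned parameters, which is part of what makes transparency and accountability harder to achieve than for conventionally programmed software.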

The capabilities of machine learning may put privacy and data protection in jeopardy. Ascertaining liability therefore becomes inevitable, and implies considering, inter alia, all plausible actors that can be called to account.

Under the General Data Protection Regulation (GDPR), the principle of accountability is intrinsically linked to the principle of transparency. Transparency empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data. Accountability requires transparency of processing operations; however, transparency does not in itself constitute accountability [3]. Rather, transparency acts as a helper to accountability – e.g. by removing barriers such as opacity.