by Alessandra Silveira, Editor and Sophie Perez Fernandes, Junior Editor
▪
Artificial intelligence and fundamental rights: the problem of regulation aimed at avoiding algorithmic discrimination
The scandal involving Facebook and Cambridge Analytica (a private data analysis and strategic communication company) raises, among other issues, the problem of regulating learning algorithms. And the problem lies above all in the fact that there is no necessary connection between intelligence and free will. Unlike human beings, algorithms do not have a will of their own; they serve the goals that are set for them. Though spectacular, artificial intelligence bears little resemblance to the mental processes of humans – as the Portuguese neuroscientist António Damásio, Professor at the University of Southern California, brilliantly explains[i]. To this extent, not all impacts of artificial intelligence are easily regulated or translated into legislation – and so traditional regulation might not work[ii].
In a study dedicated to explaining why data (including personal data) are at the basis of the Machine-Learning Revolution – and to what extent artificial intelligence is reconfiguring science, business, and politics – another Portuguese scientist, Pedro Domingos, Professor in the Department of Computer Science and Engineering at the University of Washington, explains that the problem that defines the digital age is the following: how do we find each other? This applies not only to producers and consumers – who need to establish a connection before any transaction happens – but also to anyone looking for a job or a romantic partner. Computers made the Internet possible – and the Internet created a flood of data and the problem of limitless choice. Machine learning now uses that flood of data to help solve the limitless choice problem. Netflix may have 100,000 DVD titles in stock, but if customers cannot find the ones they like, they will end up choosing the hits; so Netflix uses a learning algorithm that identifies customer tastes and recommends DVDs accordingly. Simple as that, explains the Author[iii].
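To make the idea concrete, the sketch below shows taste-based recommendation in its most rudimentary form: customers are compared through their past ratings, and a title is borrowed from the most similar neighbour. It is a toy illustration – the names, titles, and ratings are invented, and Netflix's actual algorithms are of course far more sophisticated.

```python
# Minimal sketch of taste-based recommendation (invented data).
from math import sqrt

ratings = {  # hypothetical customer -> {title: rating from 1 to 5}
    "ana":   {"Casablanca": 5, "Alien": 1, "Amelie": 4},
    "bruno": {"Casablanca": 4, "Amelie": 5, "Vertigo": 4},
    "carla": {"Alien": 5, "Predator": 4, "Vertigo": 2},
}

def similarity(a, b):
    """Closeness of two customers' ratings on the titles they share."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    distance = sqrt(sum((a[t] - b[t]) ** 2 for t in shared))
    return 1 / (1 + distance)  # identical tastes -> 1, very different -> near 0

def recommend(user):
    """Suggest the best-rated unseen title from the most similar customer."""
    others = ((similarity(ratings[user], ratings[o]), o) for o in ratings if o != user)
    _, neighbour = max(others)
    unseen = {t: r for t, r in ratings[neighbour].items() if t not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ana"))  # "Vertigo" - borrowed from bruno, ana's closest match
```

Nothing about film taste is written into the program: change the ratings and the recommendations change with them, which is the whole point of the approach.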
Computer engineers explain that machine learning is a technology that builds itself. What distinguishes machine learning from conventional programming is that, in the latter, it is necessary to explain to the computer what it has to do – step by step. If I want the computer to play chess or make a medical diagnosis, I have to explain to it how to play chess or how to make a diagnosis. A learning algorithm, by contrast, is able to learn from the data it is given: shown video of a car being driven – the road ahead and what the person at the wheel does – it learns how to drive. Computers learn by simulating reasoning by analogy.
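A minimal sketch of such "reasoning by analogy" is the k-nearest-neighbours method: no diagnostic rule is programmed step by step; the computer simply asks what happened in the most similar past cases. The patient records below are invented for illustration only.

```python
# "Reasoning by analogy" in its simplest form: k-nearest neighbours.
# The cases below are invented toy data, not medical advice.
from collections import Counter
from math import dist

# Hypothetical past cases: (temperature in C, heart rate in bpm) -> diagnosis
cases = [
    ((36.8, 70), "healthy"),
    ((37.0, 75), "healthy"),
    ((39.2, 105), "flu"),
    ((38.9, 110), "flu"),
    ((38.5, 95), "flu"),
]

def diagnose(patient, k=3):
    """Label a new patient by majority vote among the k most similar cases."""
    nearest = sorted(cases, key=lambda c: dist(c[0], patient))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(diagnose((38.8, 100)))  # "flu": the new case resembles past flu cases
```

The crucial – and legally relevant – feature is that the program's "knowledge" is entirely inherited from the past cases it is fed: if those cases are skewed, so are its answers.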
No wonder, then, that machine learning – this method of transforming data into knowledge – is revolutionizing science, business, and politics. With the development of e-commerce, automated personalization has become mandatory. That is why the success or failure of a business – and ultimately of an entire market or economy – depends more and more on the quality of its learning algorithms. And these, in turn, depend on our data: the more data they have, the more they learn.
In any case, machine learning is just a technology – and therefore what matters is what we decide to do with it and how to regulate its use. Above all, because learning algorithms increasingly decide who gets credit, who buys what, who gets which job and which pay rise, which stocks go up and down, how much insurance costs, where police officers patrol, who goes on romantic encounters and with whom, and so on.
For this reason, the Portuguese MEP João Ferreira recently questioned the European Commission about the measures to be taken to address so-called “algorithmic discrimination” (based on sex, age, ethnic origin, sexual orientation, etc.) caused by risk management algorithms. In other words, what measures are being considered to extend provisions that already exist for certain sectors (e.g. the granting of bank credit) so as to ensure a more global scope?
But perhaps the question is even deeper: how should learning algorithms act in order not to perpetuate the discrimination that underlies the data from which they are developed? In a recent study entitled #BigData: Discrimination in data-supported decision making, the European Union Agency for Fundamental Rights (FRA) sought to explain how such algorithmic discrimination can occur, suggesting possible solutions in order to move towards fundamental rights compliance in this field. The study gives an account of the following examples:
- Transparency: opening up for scrutiny how algorithms were built would not only support the further development of these tools, but also allow others to detect, and where necessary rectify, any erroneous applications.
- Conducting fundamental rights impact assessments: identifying potential biases and abuses in the application of, and output from, algorithms. These include, among others, an assessment of the potential for discrimination in relation to different grounds (a minimal sketch of such a check follows this list).
- Checking the quality of data: given the amount of data generated and used, it remains a challenge to assess the quality of all data collected and used for building algorithms. However, the study highlights how essential it is to collect metadata (i.e. information about the data) and make quality assessments of the correctness and generalisability of the data.
- Making sure the way the algorithm was built and operates can be meaningfully explained: the challenge of understanding the mathematical background of a statistical method or an algorithm does not prevent a general description of the process and/or the rationale behind the calculations feeding the decision making – most notably, which data were used to create the algorithm. Besides ensuring transparency, this would facilitate access to remedies for people who wish to challenge data-supported decisions.
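By way of illustration, one concrete form such an impact assessment can take – a sketch only, since the FRA study does not prescribe any particular metric – is to compare an algorithm's approval rates across groups defined by a protected ground. The records below are invented.

```python
# Sketch of a disparate-impact check on an algorithm's output
# (hypothetical credit decisions; "group" stands for any protected ground).
from collections import defaultdict

decisions = [  # (group, approved?)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)           # {'A': 0.75, 'B': 0.25}
print(f"{ratio:.2f}")  # 0.33 - far below the 0.8 "four-fifths" rule of thumb
                       # sometimes used to flag possible disparate impact
```

A large gap between the groups' rates does not by itself prove discrimination, but it flags a decision-making process that deserves precisely the scrutiny and explanation the study calls for.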
Therefore, the General Data Protection Regulation[iv] does not allow lawyers to rest in the face of the digital and technological revolution. It was precisely for this reason that the European Parliament called on the European Commission in 2017 to submit, on the basis of Article 114 TFEU, a proposal for a legislative instrument on legal questions related to the development and use of robotics and artificial intelligence foreseeable in the next 10 to 15 years – a proposal that contemplates, inter alia, the possible recognition of electronic persons, in addition to the traditional natural and legal persons[v]. That is, creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause – and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.
Furthermore, the European Commission presented last April its Communication “Artificial Intelligence for Europe”, establishing the framework for approaching this new reality and explaining its impacts, benefits, and risks. The European Commission also set out a European initiative on artificial intelligence, which aims to: i) boost technological and industrial capacity and artificial intelligence uptake across the economy; ii) prepare for the socio-economic changes brought about by artificial intelligence; and iii) ensure an appropriate ethical and legal framework. For that purpose, the European Commission intends to set up a framework for stakeholders and experts – the European AI Alliance – to develop draft artificial intelligence ethics guidelines by the end of the year[vi].
Herein lies the idea that the development of robotics, artificial intelligence, and digitalization[vii] requires all those involved in the development and commercialization of such tools to assume legal responsibility for the quality of the technology they produce at every stage of the process. That is, sustainable technologies – or a fairly secure, equitable, and open digital environment. Thus, the concept of sustainability, both in its analytical (descriptive) and, mainly, in its normative (prescriptive) dimension, must also encompass how continuous technological developments impact on, and interact with, the (world) economy, (global) society and the (borderless) environment of the planet, and thus guide the road towards socially inclusive, environmentally sustainable, and technologically equitable economic growth. As a synergetic formula for the complexity of the economic, social, environmental, and technological dynamics that interact in the development process, sustainability takes shape as a matrix concept of the digital age, defining the conditions and assumptions for legal regulation in a context of permanent evolution.
The Authors are team members of the Jean Monnet Project “INTEROP – EU Digital Single Market as a Political Calling: Interoperability as the way forward”.
[i] António Damásio, The Strange Order of Things: Life, Feeling, and the Making of Cultures, Pantheon Books, 2018.
[ii] On the subject, see European Parliament, Scientific Foresight study “Ethical Aspects of Cyber-Physical Systems”, Science and Technology Options Assessment Panel (STOA), 2016, available here.
[iii] Pedro Domingos, The Master Algorithm. How the Quest for the Ultimate Learning Machine Will Remake Our World, Penguin Books, 2017, p. 11-12.
[iv] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, p. 1-88.
[v] European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics [2015/2103(INL)], available here.
[vi] European Commission, Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, “Artificial Intelligence for Europe”, Brussels, 25.4.2018, COM(2018) 237 final, available here.
[vii] Here, digitalization is understood as “the way in which many domains of social life are restructured around digital communication and media infrastructures or the way in which these media structure, shape, and influence the contemporary world” – Corien Prins, et al. (ed.), Digital Democracy in a Globalized World, Elgar Law, Technology and Society Series, 2017, p. 6.
Picture credits: Artificial intelligence by geralt.