By Alessandra Silveira (Editor)
AI systems and automated inferences – on the protection of inferred personal data
On 23 November 2021 the European Commission published the results of its public consultation on a set of digital rights and principles to promote and uphold EU values in the digital space, which ran between 12 May and 6 September 2021.[1] This public consultation on digital principles is a key deliverable of the preparatory work for the upcoming “Declaration on digital rights and principles for the Digital Decade”, which the European Commission will announce by the end of 2021. The consultation invited all interested people to share their views on the formulation of digital principles in nine areas: i) universal access to internet services; ii) universal digital education and skills for people to take an active part in society and in democratic processes; iii) accessible and human-centric digital public services and administration; iv) access to digital health services; v) an open, secure and trusted online environment; vi) protecting and empowering children and young people in the online space; vii) a European digital identity; viii) access to digital devices, systems and services that respect the climate and environment; ix) ethical principles for human-centric algorithms.
In this last area, which is relevant to our argument in this post, 92% of respondents consider it important (77% very important) that no one should be limited or purposefully misguided by algorithmic systems against their autonomy and free will. Several respondents elaborated on the need for principles on digital self-determination, specifying that persons should not be overly dependent on, or monitored by, digital technologies (from large corporations) in determining their future actions. Several respondents also indicated the need for more transparency on the use of personal data by companies, and for limits on targeted content and advertising, especially when detrimental to persons’ physical and mental health.
Those results point to the need to create more understanding and awareness of how algorithms work, but also to the possibility of obtaining more transparency about their functioning. Herein lies the problem of the opacity of inferences or predictions resulting from data analysis by Artificial Intelligence (AI) systems: inferences whose application to everyday situations determines how each of us, as personal data subjects, is perceived and evaluated by others. It is important to assess the existence of legal remedies to challenge operations that result in automated inferences that are not reasonably justified,[2] on pain of violating our insusceptibility to instrumentalization and objectification, that is, human dignity itself.
Profiling is often used to make such predictions about individuals. It involves collecting information about a person and assessing their characteristics or behavioural patterns in order to place them in a certain category or group and, drawing on that, to make an inference or prediction, be it about their ability to perform a task, their interests, or their presumed behaviour. To this extent, such automated inferences demand protection as inferred personal data, since they also make it possible to identify someone by association of concepts, characteristics, or contents. The crux of the matter is that people are increasingly losing control over such automated inferences and over how they are seen and evaluated by others.
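To make the mechanics concrete, the minimal Python sketch below illustrates how a profiling pipeline of this kind might turn collected information into a group assignment and an inference. It is purely hypothetical: the field names, thresholds and segment labels are invented for illustration and do not describe any real controller’s system.

```python
# Illustrative sketch only: a hypothetical, simplified profiling pipeline.
# All field names, categories and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class ObservedData:
    """Information collected about a person (the 'input' personal data)."""
    pages_visited: list[str]
    purchases: list[str]
    avg_session_minutes: float


def profile(person: ObservedData) -> dict:
    """Place the person in a group and derive an inference from that group.

    The returned values are *inferred* personal data: they were never
    provided by the person, yet they shape how he or she is evaluated.
    """
    sport_pages = sum(p.startswith("/sport") for p in person.pages_visited)
    segment = "sports enthusiast" if sport_pages >= 3 else "general audience"
    # The inference attached to the segment: a presumed future behaviour.
    likely_to_respond_to_ads = (
        segment == "sports enthusiast" and person.avg_session_minutes > 10
    )
    return {"segment": segment, "likely_to_respond_to_ads": likely_to_respond_to_ads}


if __name__ == "__main__":
    visitor = ObservedData(
        pages_visited=["/sport/news", "/sport/results", "/sport/shop", "/weather"],
        purchases=["running shoes"],
        avg_session_minutes=14.2,
    )
    print(profile(visitor))
    # {'segment': 'sports enthusiast', 'likely_to_respond_to_ads': True}
```

Even in this toy version, the output attributes exist nowhere in the data the person supplied, which is precisely why they call for protection as inferred personal data.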
Considering that the General Data Protection Regulation (GDPR) regulates the processing of data of an identified or identifiable natural person, it would apply when AI systems are built on individuals’ data, as well as when such systems are used to analyse data and produce inferences about individuals. It is therefore important to test to what extent the GDPR enables the defence of individuals in the face of some AI applications, especially with regard to profiling and automated decisions.
Article 22 GDPR (apparently) provides a general prohibition of individual decisions based “solely” on automated processing, including profiling, but its provisions raise doubts in legal doctrine.[3] Do the provisions of Article 22 GDPR apply only when there is no relevant human intervention in the decision-making process? If a human being examines and weighs other factors when making the final decision, is that decision no longer made solely on the basis of the automated processing, so that the prohibition in Article 22(1) GDPR does not apply?[4] Furthermore, the provision is limited to automated decisions that i) produce effects in the legal sphere of the data subject or ii) similarly significantly affect him or her. As long as there is no preliminary reference on those operative concepts of Article 22 GDPR (especially with regard to similarly significant effects), the scope of protection of that provision remains unknown (what it protects, what it prohibits).
Meanwhile, according to Article 22(2) GDPR, such prohibition does not apply if the decision (i) is necessary for the conclusion or performance of a contract between the data subject and a controller; (ii) is authorized by Union or Member State law to which the controller is subject; (iii) is based on the data subject’s explicit consent. It should be noted that the exceptions of contractual necessity and explicit consent carry substantial data protection risks, requiring a high level of individual control over personal data – the effectiveness of which is highly questionable. The prohibition of Article 22(1) GDPR runs the risk of becoming blurred in the exceptions of Article 22(2) GDPR (especially contract and consent), based on clauses that subvert the meaning of the autonomy of the will, to which individuals adhere out of ignorance or lack of alternative.
In any case, in situations where such a general prohibition does not apply, the data controller must take appropriate measures to safeguard the rights and interests of the data subject, in particular the right to i) obtain human intervention from the controller, ii) express his/her point of view and iii) challenge the decision [Article 22(3) GDPR]. To this extent, any review of the automated inference must be carried out by someone with the appropriate authority and competence to change the result. In summary, Article 22 GDPR implies i) a general prohibition of exclusively automated individual decisions; however, ii) there are exceptions to this prohibition, and iii) where such exceptions apply, the rights provided for in Article 22(3) GDPR must be protected.
In the processes of exploration and mining of large data sets, via data mining and machine learning, any decision that does not require human control to extract the outputs inferred by a learning agent is considered exclusively automated. This is why the GDPR requires that the effects on the legal sphere of the data subject be more than trivial; otherwise the learning algorithms would be left without the raw material they need to evolve, and technological development would be compromised. To this extent, some computer engineers raise serious doubts about the feasibility of the GDPR’s provisions, because the fuzzy logic that underlies many AI systems would not allow the average person to understand the inference process.[5]
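By way of illustration only, the short Python sketch below contrasts a classical yes/no rule with a graded (“fuzzy”) one, in which a conclusion holds to a degree such as “somewhat” or “almost” (see footnote [5]). The membership function and thresholds are invented; the point is simply that a graded output is harder for a lay person to reconstruct than a binary one.

```python
# Illustrative sketch only: invented thresholds contrasting binary and
# graded ("fuzzy") reasoning about the same observed value.

def binary_high_spender(monthly_spend: float) -> bool:
    """Classical logic: the statement is simply true or false."""
    return monthly_spend >= 500.0


def fuzzy_high_spender(monthly_spend: float) -> float:
    """Fuzzy logic: the statement is true to a degree between 0 and 1."""
    if monthly_spend <= 100.0:
        return 0.0
    if monthly_spend >= 500.0:
        return 1.0
    # Linear ramp between the two anchors: 'somewhat', 'almost', etc.
    return (monthly_spend - 100.0) / 400.0


if __name__ == "__main__":
    for spend in (80.0, 250.0, 480.0):
        print(spend, binary_high_spender(spend), round(fuzzy_high_spender(spend), 2))
    # 80.0  False 0.0
    # 250.0 False 0.38  <- partially true: hard to explain as a yes/no decision
    # 480.0 False 0.95
```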
The CJEU has not yet been asked to rule on the rights exercisable over inferred data or on whether the GDPR adequately protects them. The CJEU has only ruled on the automated collection of information indicating personal preferences, interests, desires or aversions (i.e., cookies).[6] Yet it is precisely on such information that learning algorithms act to predict individualizable behaviours, and it is therefore no longer sufficient to focus on the lawfulness of the collection of input data. The CJEU can (partially) fill the gaps left by the EU legislator through an interpretative route, and it can do so in the light of the fundamental right to the protection of personal data provided in Article 8 of the Charter of Fundamental Rights of the European Union (CFREU).[7] Nevertheless, it is crucial that this is prompted not only by national judges but also brought to the Court’s attention by European legal doctrine.
In any case, as the U.S. sociologist Shoshana Zuboff clarifies, one of the first challenges in understanding and regulating AI systems has to do precisely with the confusion between what the author calls “surveillance capitalism” and the digital technologies it uses. Surveillance capitalism, she argues, is not the technology itself but rather a dynamic that permeates the technology and controls its use. In other words, surveillance capitalism is a type of marketplace unthinkable outside the digital medium, but it does not represent the digital as such. This distinction is important in order to measure the extent to which some perplexities associated with the digital market cannot be challenged simply through the protection of personal data.[8]
In the short term, the European Commission aims to regulate political advertising on the Internet. Attempts to influence elections or voter behaviour have become an uncontrolled race of opaque methods. Citizens often do not know whether certain online content is political or not, and they need to know why they are receiving political advertising, who paid for it and what personal data was used to target them with election advertising. In particular, it should be ensured that sensitive data exchanged between friends on social media platforms, about sexual orientation, religion or political views, for example, can no longer be misused to identify target groups for political purposes.
Along these lines, the European Parliament resolution of 20 October 2020 on the Digital Services Act and fundamental rights suggested that the current EU legal framework governing digital services should be updated with a view to addressing the challenges posed by fragmentation between the Member States and by new technologies, such as the prevalence of profiling and algorithmic decision-making that permeates all areas of life, as well as ensuring legal clarity and respect for fundamental rights, in particular the freedom of expression and the right to privacy, in a future-proof manner given the rapid development of technology.[9]
Individuals are more and more subject to assessments and decisions taken by or with the assistance of AI systems, which are extremely difficult or practically impossible to understand and, where necessary, to challenge. In the current “state of the art” of inferred data, individuals affected by automated assessments and decisions do not have the necessary means to check how these have been adopted, nor whether the applicable standards for processing personal data have been properly respected. Furthermore, they have fewer effective remedies to challenge the automated evaluations and decisions affecting them, compared to situations where harm is caused by traditional technology, and this undermines the fundamental right to effective judicial protection enshrined in Article 47 CFREU. In the recent past, a report by the FRA (European Union Agency for Fundamental Rights) identified the main challenges posed to the fundamental right to effective judicial protection by operations based on AI systems. It concluded that only the explainability of such operations guarantees the injured party the possibility of appearing before a court and alleging the facts that effectively embody the violation of a right.[10]
[1] See EU Digital Principles Public Consultation (digital-strategy.ec.europa.eu).
[2] On this subject, see Sandra Wachter/Brent Mittelstadt, A right to reasonable inferences: re-thinking data protection law in the age of big data and AI, Columbia Business Law Review, No. 2, 2019; Alexandre Veronese/Alessandra Silveira/Amanda Lemos, Artificial intelligence, Digital Single Market and the proposal of a right to fair and reasonable inferences: a legal issue between ethics and techniques, UNIO EU Law Journal, No 5(2), 2019.
[3] Regarding the general prohibition of individual decisions based “solely” on automated processing please see the Article 29 Working Party’s Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. For more development on this subject, see Alessandra Silveira, Profiling and cybersecurity: a perspective from fundamental rights’ protection in the EU, Francisco Andrade/Pedro Freitas/Joana Covelo Abreu (eds.), Legal developments on cybersecurity and related fields, Springer International Publishing, Cham/Sweden (forthcoming).
[4] On this subject, see Alessandra Silveira/Tiago Sérgio Cabral, Da utilização de inteligência artificial em conformidade com o RGPD: breve guia para responsáveis pelo tratamento, Jefferson Carús Guedes/Henrique Alves Pinto (eds.), Inteligência Artificial aplicada ao processo de tomada de decisões, Editora D’Plácido, Belo Horizonte/MG/Brasil, 2020; Tiago Sérgio Cabral, AI and the Right to Explanation: Three Legal Bases under the GDPR, Dara Hallinan/Ronald Leenes/Paul De Hert (eds), Data Protection and Privacy: Data Protection and Artificial Intelligence, Oxford, Hart Publishing, 2021.
[5] On this subject, see Cesar Analide/Diogo Morgado Rebelo, Inteligência artificial na era data-driven, a lógica fuzzy das aproximações soft computing e a proibição de sujeição a decisões tomadas exclusivamente com base na exploração e prospeção de dados pessoais, Fórum de proteção de dados, Comissão Nacional de Proteção de Dados, n.º 6, novembro/2019. The authors explain that the processing operations within AI make use of analytical models whose approximate predictions externalize fuzzy arguments that accept different degrees of truth (almost, maybe, somewhat) and not just the distinction between truth and falsehood.
[6] See the judgments in Wirtschaftsakademie (5 June 2018, C-210/16, ECLI:EU:C:2018:388, paragraph 33), Fashion ID (29 July 2019, C-40/17, ECLI:EU:C:2019:629, paragraph 106), and Planet49 (1 October 2019, C-673/17, ECLI:EU:C:2019:801, paragraphs 45 and 67), in which the CJEU admitted that the collection of data through cookies amounts to the processing of personal data.
[7] Currently the number of decisions regarding cases addressing article 22 of the GDPR is still low. The most notable cases are probably the Ola & Uber judgments by the Amsterdam District Court.
[8] See Shoshana Zuboff, A era do capitalismo da vigilância – a disputa por um futuro humano na nova fronteira do poder, Relógio d’Água, Lisboa, 2020.
[9] See European Parliament resolution of 20 October 2020 on the Digital Services Act and fundamental rights issues posed [2020/2022(INI), recital 17], https://www.europarl.europa.eu/doceo/document/TA-9-2020-0274_EN.html
[10] See FRA European Union Agency for Fundamental Rights, Getting the future right – Artificial intelligence and fundamental rights, Report, 2020. On this subject, see Alessandra Silveira/Alexandre Veronese/Joana Covelo Abreu/Tiago Sérgio Cabral, Da construção ética e jusfundamental de uma “inteligência artificial de confiança” na União Europeia e os desafios da tutela jurisdicional efetiva, Willis Guerra Filho/Lucia Santaella/Dora Kaufman/Paola Cantarini (eds.), Direito e inteligência artificial: fundamentos, vol. 1 – Inteligência artificial, ética e direito, Lumen Juris Editora, Rio de Janeiro, 2021.
Picture credits: geralt.