On rebalancing powers in the digital ecosystem in recent CJEU case law (or on the battle between David and Goliath)

Alessandra Silveira (Editor of this official blog, Academic Coordinator of the Jean Monnet Centre of Excellence “Digital Citizenship & Technological Sustainability” - CitDig, Erasmus+)
           

There is no doubt that European Union (EU) law is committed to a certain rebalancing of powers in the digital ecosystem. And why is that? Because today there is a clear imbalance of power in favour of digital service providers, which requires a strengthening of the position of users in their relationship with providers. The Internet has become a space made up of platforms, where unilaterally established and non-transparent business models are developed. This attempt to rebalance power in the digital ecosystem is an exercise in social justice that only the EU can foster. And this trend is particularly noticeable in the field of personal data protection.

The emergence of a business model based on data – and profiling based on inferred data – reveals the imbalance of power between users and platforms. This has led some authors to recognise the quasi-public powers exercised by technology companies on the Internet: they regulate, enforce and resolve conflicts of interest, acting with a lack of oversight that we would not tolerate even from public authorities under the rule of law. But the problem must be contextualised: what is personal data?

Continue reading “On rebalancing powers in the digital ecosystem in recent CJEU case law (or on the battle between David and Goliath)”

Editorial of March 2024

By Alessandra Silveira

On inferred personal data and the difficulties of EU law in dealing with this matter

The right not to be subject to automated decisions was considered for the first time before the Court of Justice of the European Union (CJEU) in the recent SCHUFA judgment. Article 22 GDPR (on individual decisions based solely on automated processing, including profiling) has always raised many doubts among legal scholars:[1] i) what is a decision taken “solely” on the basis of automated processing?; ii) does this Article provide for a right or, rather, a general prohibition whose application does not require the party concerned to actively invoke a right?; iii) to what extent does such an automated decision produce legal effects or significantly affect the data subject in a similar manner?; iv) do the provisions of Article 22 GDPR only apply where there is no relevant human intervention in the decision-making process?; v) if a human being examines and weighs other factors when making the final decision, is it no longer made “solely” on the basis of the automated processing [and, in this situation, does the prohibition in Article 22(1) GDPR not apply]?

To these doubts a German court has added a few more. SCHUFA is a private company under German law which provides its contractual partners with information on the creditworthiness of third parties, in particular consumers. To that end, it uses mathematical and statistical procedures to establish a prognosis, based on certain characteristics of a person, of the probability of that person’s future behaviour (‘score’), such as the repayment of a loan. The establishment of scores (‘scoring’) rests on the assumption that, by assigning a person to a group of other persons with comparable characteristics who have behaved in a certain way, similar behaviour can be predicted.[2]
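The mechanism described here – predicting an individual's behaviour from the observed behaviour of a group with comparable characteristics – can be sketched in a few lines of code. The features, groups and repayment rates below are entirely hypothetical illustrations; SCHUFA's actual model is not public and is certainly far more complex.

```python
# Hypothetical sketch of group-based scoring; all names and numbers are
# invented for illustration and do not reflect any real scoring model.
from dataclasses import dataclass


@dataclass(frozen=True)
class Profile:
    """Characteristics used to assign a person to a comparison group."""
    age_band: str       # e.g. "30-39"
    region: str         # e.g. "Bavaria"
    has_mortgage: bool


# Historically observed repayment rates per group of comparable persons
# (fabricated figures, for illustration only).
GROUP_REPAYMENT_RATES = {
    Profile("30-39", "Bavaria", True): 0.92,
    Profile("30-39", "Bavaria", False): 0.81,
    Profile("18-29", "Berlin", False): 0.67,
}


def score(person: Profile) -> float:
    """Predict repayment probability by assigning the person to the group
    of persons with comparable characteristics who behaved in a certain
    way; a neutral prior is used if no comparable group is known."""
    return GROUP_REPAYMENT_RATES.get(person, 0.5)


applicant = Profile("30-39", "Bavaria", False)
print(score(applicant))  # → 0.81
```

The sketch makes the legal concern concrete: the applicant's individual score is nothing but an inference drawn from other people's past behaviour, which is precisely why such inferred data raise the questions of control and contestability discussed above.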

Continue reading “Editorial of March 2024”

Finally, the ECJ is interpreting Article 22 GDPR (on individual decisions based solely on automated processing, including profiling)

Alessandra Silveira (Editor)
           

1) What is new about this process? Article 22 GDPR is finally being considered before the European Court of Justice (ECJ) – and on 16 March 2023, the Advocate General’s Opinion in Case C-634/21 [SCHUFA Holding and Others (Scoring)][1] was published. Article 22 GDPR (apparently) provides a general prohibition of individual decisions based “solely” on automated processing – including profiling – but its provisions raise many doubts among legal scholars.[2] Furthermore, Article 22 GDPR is limited to automated decisions that i) produce effects in the legal sphere of the data subject or that ii) significantly affect him/her in a similar manner. The content of the latter provision is not quite clear, but as was suggested by the Article 29 Data Protection Working Party (WP29), “similar effect” can be interpreted as significantly affecting the circumstances, behaviour or choices of data subjects – for example, decisions affecting a person’s financial situation, including their eligibility for credit.[3] To this extent, the effectiveness of Article 22 GDPR may be very limited until EU case law clarifies i) what a decision taken solely on the basis of automated processing is, and ii) to what extent such a decision produces legal effects or significantly affects the data subject in a similar manner.

2) Why is this case law so relevant? Profiling is an automated processing often used to make predictions about individuals – and may, or may not, lead to automated decisions within the meaning of Article 22(1) GDPR. It involves collecting information about a person and assessing their characteristics or patterns of behaviour in order to place them in a particular category or group and to draw inferences or predictions – whether about their ability to perform a task, their interests, or their presumed behaviour. To this extent, such automated inferences demand protection as inferred personal data, since they also make it possible to identify someone by association of concepts, characteristics, or contents. The crux of the matter is that people are increasingly losing control over such automated inferences and over how they are perceived and evaluated by others. The ECJ has the opportunity to assess the existence of legal remedies to challenge operations which result in automated inferences that are not reasonably justified. As set out below, the approach adopted by the Advocate General has weaknesses – and if the ECJ adopts the conditions suggested by the Advocate General, many reasonable interpretative doubts about Article 22 GDPR will persist.

3) What questions does Article 22 GDPR raise? Does this Article provide for a right or, rather, a general prohibition whose application does not require the party concerned to actively invoke a right? What is a decision based “solely” on automated processing (which apparently excludes “largely” or “partially” but not “exclusively” automated decisions)? Do the provisions of Article 22 GDPR only apply where there is no relevant human intervention in the decision-making process? If a human being examines and weighs other factors when making the final decision, is it not made “solely” on the basis of the automated processing [and, in this situation, does the prohibition in Article 22(1) GDPR not apply]?

Continue reading “Finally, the ECJ is interpreting Article 22 GDPR (on individual decisions based solely on automated processing, including profiling)”

Editorial of December 2021

By Alessandra Silveira (Editor)

AI systems and automated inferences – on the protection of inferred personal data

On 23 November 2021 the European Commission published the results of its consultation on a set of digital rights and principles to promote and uphold EU values in the digital space – a consultation which ran between 12 May and 6 September 2021.[1] This public consultation on digital principles is a key deliverable of the preparatory work for the upcoming “Declaration on digital rights and principles for the Digital Decade”, which the European Commission will announce by the end of 2021. The consultation invited all interested people to share their views on the formulation of digital principles in 9 areas: i) universal access to internet services; ii) universal digital education and skills for people to take an active part in society and in democratic processes; iii) accessible and human-centric digital public services and administration; iv) access to digital health services; v) an open, secure and trusted online environment; vi) protecting and empowering children and young people in the online space; vii) a European digital identity; viii) access to digital devices, systems and services that respect the climate and environment; ix) ethical principles for human-centric algorithms.

Continue reading “Editorial of December 2021”

Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks

by Piedade Costa de Oliveira (Former official of the European Commission - Legal Service)
Disclaimer: The opinions expressed are purely personal and are the exclusive responsibility of the author. They do not reflect any position of the European Commission.

The use of algorithms for automated decision-making, commonly referred to as Artificial Intelligence (AI), is becoming a reality in many fields of activity both in the private and public sectors.

It is common ground that AI raises considerable challenges not only for the area in which it is deployed but also for society as a whole. As pointed out by the European Commission in its White Paper on AI[i], AI entails a number of potential risks, such as opaque decision-making, gender-based bias or other kinds of discrimination, or intrusion on privacy.

In order to mitigate such risks, Article 22 of the GDPR confers on data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them[ii].

Continue reading “Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks”