Editorial of March 2024

By Alessandra Silveira

On inferred personal data and the difficulties of EU law in dealing with this matter

The right not to be subject to automated decisions was considered for the first time by the Court of Justice of the European Union (CJEU) in the recent SCHUFA judgment. Article 22 GDPR (on individual decisions based solely on automated processing, including profiling) has always raised many doubts among legal scholars:[1] i) what is a decision taken “solely” on the basis of automated processing?; ii) does this Article provide for a right or, rather, a general prohibition whose application does not require the party concerned to actively invoke a right?; iii) to what extent must the automated decision produce legal effects or similarly significantly affect the data subject?; iv) do the provisions of Article 22 GDPR apply only where there is no relevant human intervention in the decision-making process?; v) if a human being examines and weighs other factors when making the final decision, is that decision not taken “solely” on the basis of the automated processing [and, in that situation, does the prohibition in Article 22(1) GDPR not apply]?

To these doubts a German court has added a few more. SCHUFA is a private company under German law which provides its contractual partners with information on the creditworthiness of third parties, in particular consumers. To that end, it uses mathematical and statistical procedures to establish a prognosis (‘score’) of the probability of a person’s future behaviour, such as the repayment of a loan, based on certain characteristics of that person. The establishment of scores (‘scoring’) rests on the assumption that, by assigning a person to a group of other persons with comparable characteristics who have behaved in a certain way, similar behaviour can be predicted.[2]
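Purely by way of illustration, the following minimal sketch shows what such group-based statistical scoring can look like in practice: a model is fitted to the past behaviour of persons with comparable characteristics and then used to estimate the probability that a new applicant will repay a loan. The features, data and score scale are invented for the example and do not reflect SCHUFA’s actual method.

```python
# Illustrative sketch only: a toy scoring model in the spirit described above,
# NOT SCHUFA's actual procedure. All features, data and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: each row describes a past borrower by
# [age, income in thousands of EUR, number of open credit lines];
# the label records whether that borrower repaid (1) or defaulted (0).
X_history = np.array([
    [25, 28, 4],
    [42, 55, 2],
    [31, 40, 1],
    [55, 62, 3],
    [23, 22, 5],
    [38, 47, 2],
])
y_history = np.array([0, 1, 1, 1, 0, 1])

# Fit a statistical model: applicants whose characteristics resemble those of
# past repayers receive a high predicted probability of repayment, and vice versa.
model = LogisticRegression().fit(X_history, y_history)

# "Scoring" a new applicant: the predicted repayment probability is mapped
# onto an arbitrary 0-1000 score scale for the contractual partner.
applicant = np.array([[29, 35, 3]])
probability_of_repayment = model.predict_proba(applicant)[0, 1]
score = round(probability_of_repayment * 1000)
print(f"Predicted repayment probability: {probability_of_repayment:.2f}")
print(f"Score: {score}")
```

The point of the sketch is that the applicant is never assessed individually: the score is simply an inference drawn from how statistically similar persons behaved in the past, which is precisely what makes inferred personal data so difficult to frame under EU law.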


Editorial of December 2021

By Alessandra Silveira (Editor)

AI systems and automated inferences – on the protection of inferred personal data

On 23 November 2021 the European Commission published the results of the consultation on a set of digital rights and principles to promote and uphold EU values in the digital space, which ran from 12 May to 6 September 2021.[1] This public consultation on digital principles is a key deliverable of the preparatory work for the upcoming “Declaration on digital rights and principles for the Digital Decade”, which the European Commission will announce by the end of 2021. The consultation invited all interested people to share their views on the formulation of digital principles in nine areas: i) universal access to internet services; ii) universal digital education and skills for people to take an active part in society and in democratic processes; iii) accessible and human-centric digital public services and administration; iv) access to digital health services; v) an open, secure and trusted online environment; vi) protecting and empowering children and young people in the online space; vii) a European digital identity; viii) access to digital devices, systems and services that respect the climate and environment; ix) ethical principles for human-centric algorithms.


Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks

By Piedade Costa de Oliveira (Former official of the European Commission - Legal Service)
Disclaimer: The opinions expressed are purely personal and are the exclusive responsibility of the author. They do not reflect any position of the European Commission.

The use of algorithms for automated decision-making, commonly referred to as Artificial Intelligence (AI), is becoming a reality in many fields of activity, in both the private and public sectors.

It is common ground that AI raises considerable challenges not only for the area in which it is operated but also for society as a whole. As pointed out by the European Commission in its White Paper on AI[i], AI entails a number of potential risks, such as opaque decision-making, gender-based bias or other kinds of discrimination, or intrusion on privacy.

In order to mitigate such risks, Article 22 of the GDPR confers on data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them[ii].
