The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act

Inês Neves (Lecturer at the Faculty of Law, University of Porto | Researcher at CIJ | Member of the Jean Monnet Module team DigEUCit) 
           

March 2024: a significant month for both women and Artificial Intelligence

In March 2024 we celebrate women. But March was not only the month of women. It was also a historic month for AI regulation. And, as #TaylorSwiftAI has shown us,[1] they have a lot more in common than you might think.

On 13 March 2024, the European Parliament approved the Artificial Intelligence Act,[2] a European Union (EU) Regulation proposed by the European Commission back in 2021. While the law has yet to be published in the Official Journal of the EU, it is fair to say that it makes March 2024 a historic month for Artificial Intelligence (‘AI’) regulation.

In addition to the EU’s landmark piece of legislation, the Council of Europe’s path towards the first legally binding international instrument on AI has also made progress, with the finalisation of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.[3] Like the EU’s cornerstone legislation, this will be a ‘first of its kind’, aiming to uphold the Council of Europe’s legal standards on human rights, democracy and the rule of law in the regulation of AI systems. With its finalisation by the Committee on Artificial Intelligence, the way is now open for signature at a later stage. While the non-self-executing nature of its provisions is to be expected, some doubts remain as to the Convention’s full potential, given the high level of generality of its provisions and their declarative nature.[4]

Continue reading “The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act”

Editorial of March 2024

By Alessandra Silveira 

On inferred personal data and the difficulties of EU law in dealing with this matter

The right not to be subject to automated decisions was considered for the first time by the Court of Justice of the European Union (CJEU) in the recent SCHUFA judgment. Article 22 GDPR (on individual decisions based solely on automated processing, including profiling) has always raised many doubts among legal scholars:[1] i) what is a decision taken “solely” on the basis of automated processing?; ii) does this Article provide for a right or, rather, a general prohibition whose application does not require the party concerned to actively invoke a right?; iii) to what extent does an automated decision produce legal effects or similarly significantly affect the data subject?; iv) do the provisions of Article 22 GDPR only apply where there is no relevant human intervention in the decision-making process?; v) if a human being examines and weighs other factors when making the final decision, is the decision still made “solely” on the basis of the automated processing [and, in that situation, does the prohibition in Article 22(1) GDPR not apply]?

To these doubts a German court has added a few more. SCHUFA is a private company under German law which provides its contractual partners with information on the creditworthiness of third parties, in particular consumers. To that end, using mathematical and statistical procedures, it establishes a prognosis of the probability of a person’s future behaviour (‘score’), such as the repayment of a loan, based on certain characteristics of that person. The establishment of scores (‘scoring’) is based on the assumption that, by assigning a person to a group of other persons with comparable characteristics who have behaved in a certain way, similar behaviour can be predicted.[2]
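To make that assumption concrete, here is a minimal, hypothetical sketch of group-based scoring, in which a new applicant simply inherits the observed repayment rate of the group sharing their characteristics. The data structures, field names and toy data are illustrative assumptions, not SCHUFA’s actual methodology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    age_band: str     # e.g. "30-39"
    income_band: str  # e.g. "medium"
    region: str       # e.g. "north"

# Toy history of (profile, loan repaid?) observations. Purely illustrative.
HISTORY = [
    (Profile("30-39", "medium", "north"), True),
    (Profile("30-39", "medium", "north"), True),
    (Profile("30-39", "medium", "north"), False),
    (Profile("18-29", "low", "south"), False),
]

def score(applicant: Profile) -> float | None:
    """Predict repayment probability from the past behaviour of the
    group of persons with the same characteristics as the applicant."""
    peers = [repaid for profile, repaid in HISTORY if profile == applicant]
    if not peers:
        return None  # no comparable group observed
    return sum(peers) / len(peers)

# The applicant is scored on the group's behaviour, not their own: 2/3 here.
print(score(Profile("30-39", "medium", "north")))  # 0.666...
```

The sketch makes the legal point visible: the ‘score’ reflects nothing the person has done themselves; it is an inference drawn from the behaviour of statistically similar others.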

Continue reading “Editorial of March 2024”

The need for an egalitarian ethical framework for emerging technologies

Manuel Protásio (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – Bolsa UI/BD/152801/2022) 
           

The blurring boundary between humans and machines introduces a crucial dichotomy between consciousness and information, shaping the dynamics of our technological engagement. The “limbo” between humans and technologies, situated around perception, is central to how the law assesses technology’s potential effects on human behaviour.

According to Kantian philosophy, the act of perception is a private, subjective, and observer-dependent mechanism which, by its nature, grants the subject a sensation of agency over physical reality – their environment. This feeling of agency can be understood as the empowering subjective experience that is often translated into the individual’s freedom and autonomy. If it is true that synthetic perception confers agency over perceived objects as they are read into our reality, it must also be true that illusions – reasoning mistakes based on our perception – can be triggered if our perception follows systematic errors, which occur whenever we store wrong information about the perceived objects in our reality, or when we use the wrong model of perception to interpret the external world.[1] 

What technologies such as Augmented Reality (AR) or Artificial Intelligence (AI) will do to our perception, in the short and long term, is convey analytical information from the physical world and thus trigger potential changes in our synthetic perception, which can lead to a loss of agency over our own reality. Virtual Reality (VR), on the other hand, can trigger the same effect by deceiving the synthetic sensory feedback of our biological perception and replicating it through technological means.

Continue reading “The need for an egalitarian ethical framework for emerging technologies”

EU’s policies to AI: are there blindspots regarding accountability and democratic governance?

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 
           

In her recent State of the Union (SOTEU) 2023 speech, the President of the European Commission, Ursula von der Leyen, addressed several pressing issues, including artificial intelligence (AI). In this regard, she highlighted that leading AI creators, academics and experts have issued a warning about AI, stressing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, and adding that AI is advancing at a faster pace than its creators predicted.[1]

The President of the European Commission also argued that, of the three pillars of the global framework for AI – guardrails, governance, and guiding innovation – guardrails are the most important: AI must be developed in a way that is human-centred, transparent, and accountable. Indeed, Europe has witnessed such an approach to the development of AI, as evidenced by various official documents and reports from different scientific communities,[2] which also emphasise the need to build trust in this type of technology.

Continue reading “EU’s policies to AI: are there blindspots regarding accountability and democratic governance?”

Editorial of October 2023

By the Editorial Team 

“Answering the call of history” – on the 2023 “State of the Union” speech (SOTEU) by President Ursula von der Leyen

On 13 September 2023, the President of the European Commission, Ursula von der Leyen, gave a speech summing up her term in office – perhaps even anticipating re-election. To this end, she presented results, arguing that her Commission had managed to implement more than 90 per cent of the political guidelines it presented in 2019.

The motto of the “State of the Union” (SOTEU) 2023 speech was “Answering the call of history”. In what sense? In the sense that history is happening while Russia is waging a full-scale war against the founding principles of the United Nations (UN) Charter. The President of the Commission tried to explain to what extent the European Union (EU) is up to this challenge. But Ursula von der Leyen also demonstrated the extent to which history demands the deepening of the integration process, its “becoming”.

Continue reading “Editorial of October 2023”

Editorial of July 2023

By Alessandra Silveira (Editor) and Maria Inês Costa (PhD candidate, School of Law, University of Minho) 

Regulating Artificial Intelligence (AI): on the civilisational choice we are all making

It is worth highlighting the role of the European Parliament (EP) in taking its stance on the negotiation of the AI Regulation, which aims to regulate the development and use of AI in Europe.[1] With the EP having approved its position, the European institutions may start trilogue negotiations (the Council voted on its position in December 2022). The AI Regulation that will apply across the European Union (EU) will only enter into force if the co-legislators agree on a final wording.

The AI Regulation follows a risk-based approach, i.e., it establishes obligations for those who provide and those who use AI systems according to the level of risk that the application of the AI system entails: is the risk high, low, or minimal? In other words, there is a hierarchisation of risks, and the different levels of risk correspond to more or less regulation, with more or fewer impositions and restrictions (see the sketch below). The EP’s position, even though it introduces further safeguards (for example, on generative AI), does not deviate from the idea that the Regulation should protect citizens without jeopardising technological innovation. To this extent, systems with an unacceptable level of risk to people’s safety should be banned, and the EP extended the list of prohibited AI uses beyond the Commission’s original proposal. These include, for instance, systems used to classify people based on their social behaviour or personal characteristics (such as Chinese-style social control systems); emotion recognition systems in the workplace and educational establishments; predictive policing systems based on profiling or past criminal behaviour; and remote, real-time biometric identification systems (such as facial recognition) in publicly accessible spaces.
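As a rough illustration of that hierarchisation, here is a minimal, hypothetical sketch mapping risk tiers to regulatory consequences. The tier names follow the Act’s general scheme, but the mapping and the examples in the comments are simplified assumptions for illustration, not the Regulation’s actual text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # allowed, subject to strict obligations
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # essentially unregulated

# Simplified, illustrative mapping from risk tier to regulatory consequence.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "conformity assessment, risk management, human oversight",
    RiskTier.LIMITED: "disclose that users are interacting with an AI system",
    RiskTier.MINIMAL: "no specific obligations",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the (simplified) regulatory consequence for a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The design point is that regulation attaches to the risk tier of the use case, not to the underlying technology as such.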

Continue reading “Editorial of July 2023”

Finally, the ECJ is interpreting Article 22 GDPR (on individual decisions based solely on automated processing, including profiling)

Alessandra Silveira (Editor)
           

1) What is new about this process? Article 22 GDPR is finally being considered before the European Court of Justice (ECJ) – and on 16 March 2023, the Advocate General’s Opinion in Case C-634/21 [SCHUFA Holding and Others (Scoring)][1] was published. Article 22 GDPR (apparently) provides a general prohibition of individual decisions based “solely” on automated processing – including profiling – but its provisions raise many doubts among legal scholars.[2] Furthermore, Article 22 GDPR is limited to automated decisions that i) produce effects in the legal sphere of the data subject or that ii) similarly significantly affect him/her. The content of the latter provision is not quite clear, but, as suggested by the Article 29 Data Protection Working Party (WP29), “similar effect” can be interpreted as significantly affecting the circumstances, behaviour or choices of data subjects – for example, decisions affecting a person’s financial situation, including their eligibility for credit.[3] To this extent, the effectiveness of Article 22 GDPR may be very limited until EU case law clarifies i) what a decision taken solely on the basis of automated processing is, and ii) to what extent such a decision produces legal effects or similarly significantly affects the data subject.

2) Why is this case law so relevant? Profiling is a form of automated processing often used to make predictions about individuals – and it may, or may not, lead to automated decisions within the meaning of Article 22(1) GDPR. It involves collecting information about a person and assessing their characteristics or patterns of behaviour in order to place them in a particular category or group and to draw an inference or prediction from that placement – whether as to their ability to perform a task, their interests or their presumed behaviour, etc. To this extent, such automated inferences demand protection as inferred personal data, since they also make it possible to identify someone by association of concepts, characteristics, or contents. The crux of the matter is that people are increasingly losing control over such automated inferences and over how they are perceived and evaluated by others. The ECJ has the opportunity to assess the existence of legal remedies to challenge operations which result in automated inferences that are not reasonably justified. As set out below, the approach adopted by the Advocate General has weaknesses – and if the ECJ adopts the conditions suggested by the Advocate General, many reasonable interpretative doubts about Article 22 GDPR will persist.
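A minimal, hypothetical sketch of how such an inference comes about may help: data the person actually generated (browsing behaviour, in this toy example) is turned by a rule – standing in for a statistical model – into a new, inferred attribute attached to that person. Every identifier, rule and value below is an illustrative assumption.

```python
# Data the person actually provided or generated.
observed = {
    "user_id": "u123",
    "pages_visited": ["loans", "debt-advice", "payday-credit"],
}

def infer_category(pages: list[str]) -> str:
    """Toy decision rule standing in for a statistical model."""
    risky = {"debt-advice", "payday-credit"}
    return "credit-risk: elevated" if risky & set(pages) else "credit-risk: normal"

# The output is new personal data the person never supplied:
# an inference about them, stored under their identifier.
inferred_record = {
    "user_id": observed["user_id"],
    "inferred_category": infer_category(observed["pages_visited"]),
}
print(inferred_record)  # {'user_id': 'u123', 'inferred_category': 'credit-risk: elevated'}
```

The inferred attribute is precisely the kind of data argued to deserve protection: the data subject neither provided it nor, in most cases, knows it exists.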

3) What questions does Article 22 GDPR raise? Does this Article provide for a right or, rather, a general prohibition whose application does not require the party concerned to actively invoke a right? What is a decision based “solely” on automated processing (which apparently excludes “largely” or “partially”, but not “exclusively”, automated decisions)? Do the provisions of Article 22 GDPR only apply where there is no relevant human intervention in the decision-making process? If a human being examines and weighs other factors when making the final decision, is the decision still made “solely” on the basis of the automated processing [and, in that situation, does the prohibition in Article 22(1) GDPR not apply]?

Continue reading “Finally, the ECJ is interpreting Article 22 GDPR (on individual decisions based solely on automated processing, including profiling)”

The future regulation on non-contractual civil liability for AI systems

By Susana Navas Navarro (Professor at the Universidad Autónoma de Barcelona)

I was surprised and struck by the fact that, after all the work carried out within the European Union (“EU”) on the subject of civil liability for Artificial Intelligence (“AI”) systems, the European Commission has opted for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, or “Proposal for a Directive”) as the instrument to regulate this issue. Moreover, it is a Directive whose content is focused exclusively on two issues: a) the disclosure of relevant information for evidence purposes, or to decide whether or not to bring a lawsuit and against whom (Article 3); and b) the presumption of a causal link between the defendant’s fault and the result, or absence thereof, that an AI system should produce (Article 4). The argument for this is the disparity of civil liability regimes in Europe and the difficulties that have always existed in harmonising them (see the Explanatory Memorandum accompanying the Proposal, p. 7). Choosing a Regulation, as proposed by the European Parliament,[1] or following the proposals of the White Paper on AI, would have allowed such harmonisation and could have included rules on evidence. It seems to me that behind this decision lies the invisible struggle, previously evidenced in other issues, between the Commission and the European Parliament. I believe that the risks for all involved in the use and handling of AI systems, especially high-risk ones, are compelling reasons in favour of harmonisation and strict liability.

In relation to this aspect, the Proposal for a Directive abandons the risk-based approach that had been prevailing in this area, since it assumes that the civil liability regimes of most Member States are fault-based. This is reflected, for example, in Article 3(5), which presumes the breach of a duty of care by the defendant; in Article 2(5), which defines the action for damages; and in Article 4(1), which admits the presumption of a causal link between the defendant’s fault and the result produced by the AI system – or the absence or failure in the production of such a result – which causes the damage. Therefore, if, under the national civil liability regime, the case were subsumed under a strict liability regime (e.g., one equated to the use or operation of a machine, or to the vicarious liability of the employer), these rules would not apply. National procedural systems, as regards access to evidence, are not so far from the provisions of this future Directive.

Continue reading “The future regulation on non-contractual civil liability for AI systems”

Editorial of December 2021

By Alessandra Silveira (Editor)

AI systems and automated inferences – on the protection of inferred personal data

On 23 November 2021, the European Commission published the results of its consultation on a set of digital rights and principles to promote and uphold EU values in the digital space, which ran between 12 May and 6 September 2021.[1] This public consultation on digital principles is a key deliverable of the preparatory work for the upcoming “Declaration on digital rights and principles for the Digital Decade”, which the European Commission will announce by the end of 2021. The consultation invited all interested people to share their views on the formulation of digital principles in nine areas: i) universal access to internet services; ii) universal digital education and skills for people to take an active part in society and in democratic processes; iii) accessible and human-centric digital public services and administration; iv) access to digital health services; v) an open, secure and trusted online environment; vi) protecting and empowering children and young people in the online space; vii) a European digital identity; viii) access to digital devices, systems and services that respect the climate and environment; ix) ethical principles for human-centric algorithms.

Continue reading “Editorial of December 2021”

Editorial of June 2021

By Tiago Sérgio Cabral (Managing Editor)

Data Governance and the AI Regulation: Interplay between the GDPR and the proposal for an AI Act

It is hardly surprising that the European Commission’s recent proposal for a Regulation on a European Approach for Artificial Intelligence (hereinafter the “proposal for an AI Act”) is heavily inspired by the GDPR: from taking note of the GDPR’s success in establishing worldwide standards to learning from its shortcomings, for example by suppressing the one-stop-shop mechanism (arguably responsible for some of its enforcement woes).[1]

The proposal for an AI Act should not be considered a GDPR for AI, for one singular reason: there is already a GDPR for AI, and it is called the GDPR. The scope and aims of the proposal are different, but there is certainly a high degree of influence, and the interplay between the two Regulations, if the AI Act is approved, will be interesting. In this editorial we will address one aspect where the interplay between the GDPR and the AI Act could be particularly relevant: data governance and data set management.

Before going specifically into this subject, it is important to note that the AI Act’s proposed fines have a higher ceiling than the GDPR’s: up to 30,000,000 euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year (article 71(3) of the proposal for an AI Act).
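As a back-of-the-envelope illustration of how the two ceilings compare, here is a minimal sketch. It assumes the “whichever is higher” logic used by both instruments for their top fine brackets; the helper functions are hypothetical, and the turnover figure is invented.

```python
def max_fine_ai_act(turnover_eur: float) -> float:
    """Proposed AI Act ceiling (article 71(3) of the proposal): EUR 30 million
    or 6% of total worldwide annual turnover, assuming 'whichever is higher'
    and that the offender is a company."""
    return max(30_000_000, 0.06 * turnover_eur)

def max_fine_gdpr(turnover_eur: float) -> float:
    """GDPR top ceiling (Article 83(5)): EUR 20 million or 4% of turnover,
    whichever is higher."""
    return max(20_000_000, 0.04 * turnover_eur)

# For a company with EUR 2 billion in annual turnover, the AI Act ceiling
# (EUR 120 million) exceeds the GDPR ceiling (EUR 80 million).
turnover = 2_000_000_000
print(max_fine_ai_act(turnover), max_fine_gdpr(turnover))
```

We should note, nonetheless, that this specific value is applicable to a restricted number of infringements, namely: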

Continue reading “Editorial of June 2021”