The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act

Inês Neves (Lecturer at the Faculty of Law, University of Porto | Researcher at CIJ | Member of the Jean Monnet Module team DigEUCit) 
           

March 2024: a significant month for both women and Artificial Intelligence

In March 2024 we celebrate women. But March was not only the month of women. It was also a historic month for AI regulation. And, as #TaylorSwiftAI has shown us,[1] they have a lot more in common than you might think.

On 13 March 2024, the European Parliament approved the Artificial Intelligence Act,[2] a European Union (EU) Regulation proposed by the European Commission back in 2021. While the law has yet to be published in the Official Journal of the EU, it is fair to say that it makes March 2024 a historic month for Artificial Intelligence (‘AI’) regulation.

In addition to the EU’s landmark piece of legislation, the Council of Europe’s path towards the first legally binding international instrument on AI has also progressed, with the finalisation of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.[3] Like the EU’s cornerstone legislation, this will be a ‘first of its kind’, aiming to uphold the Council of Europe’s legal standards on human rights, democracy and the rule of law in relation to the regulation of AI systems. With its finalisation by the Committee on Artificial Intelligence, the way is now open for its signature at a later stage. While the non-self-executing nature of its provisions is to be expected, some doubts remain as to its full potential, given the high level of generality of its provisions and their declarative nature.[4]

Continue reading “The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act”

The need for an egalitarian ethical framework for emerging technologies

Manuel Protásio (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – Bolsa UI/BD/152801/2022) 
           

The blurring boundary between humans and machines introduces a crucial dichotomy between consciousness and information, shaping the dynamics of our technological engagement. The “limbo” between humans and technologies, situated around perception, is central to how the law assesses technology’s potential effects on human behaviour.

According to Kantian philosophy, the act of perception is a private, subjective, and observer-dependent mechanism, which, by its nature, grants the subject a sensation of agency over physical reality – their environment. This feeling of agency can be understood as the empowering subjective experience that is often translated into the individual’s freedom and autonomy. If it is true that synthetical perception confers agency over perceived objects as they are read into our reality, it must also be true that illusions – reasoning mistakes based on our perception – can be triggered when our perception follows systematic errors, which occur whenever we store wrong information about the perceived objects in our reality, or when we use the wrong model of perception to interpret the external world.[1]

What technologies like Augmented Reality (AR) or Artificial Intelligence (AI) do to our perception, in the short and long term, is convey analytical information from the physical world and thus trigger potential changes in our synthetical perception, which can lead to the loss of agency over our own reality. Virtual Reality (VR), on the other hand, can trigger the same effect by deceiving the synthetical sensory feedback of our biological perception and replicating it through technological means.

Continue reading “The need for an egalitarian ethical framework for emerging technologies”

EU’s policies to AI: are there blindspots regarding accountability and democratic governance?

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 
           

In her recent State of the Union (SOTEU) 2023 speech, the President of the European Commission Ursula von der Leyen addressed several pressing issues, including artificial intelligence (AI). In this regard, the President of the European Commission highlighted that leading AI creators, academics and experts have issued a warning about AI, stressing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, adding that AI is advancing at a faster pace than its creators predicted.[1]

The President of the European Commission also argued that, of the three pillars of the global framework for AI – guardrails, governance, and guiding innovation – guardrails is the most important, and in this sense, AI must be developed in a way that is human-centred, transparent, and accountable. Indeed, in Europe we have witnessed such an approach to the development of AI, as evidenced by various official documents and reports from different scientific communities,[2] which also emphasise the need to build trust in this type of technology.

Continue reading “EU’s policies to AI: are there blindspots regarding accountability and democratic governance?”

Editorial of July 2023

By Alessandra Silveira (Editor) and Maria Inês Costa (PhD candidate, School of Law, University of Minho) 

Regulating Artificial Intelligence (AI): on the civilisational choice we are all making

It is worth highlighting the role of the European Parliament (EP) in taking its stance on the negotiation of the AI Regulation, which aims to regulate the development and use of AI in Europe.[1] With the EP having approved its position, the European institutions may start trilogue negotiations (the Council voted on its position in December 2022). The AI Regulation that will apply across the European Union (EU) will only enter into force if the co-legislators agree on a final wording.

The AI Regulation follows a risk-based approach, i.e., it establishes obligations for those who provide and those who use AI systems, according to the level of risk that the application of the AI system entails: is the risk high, is it low, is it minimal? In other words, there is a hierarchisation of risks, and the different levels of risk will correspond to more or less regulation, more or less impositions, more or less restrictions. The EP’s position, even while introducing further safeguards (for example, on generative AI), does not deviate from the idea that the Regulation should protect citizens without jeopardising technological innovation. To this extent, systems with an unacceptable level of risk to people’s safety should be banned, and the EP extended the list of prohibited AI uses contained in the Commission’s original proposal. These include, for instance, systems used to classify people based on their social behaviour or personal characteristics (such as Chinese-style social control systems); emotion recognition systems in the workplace and educational establishments; predictive policing systems based on profiling or past criminal behaviour; and remote, real-time biometric identification systems (such as facial recognition) in publicly accessible spaces.

Continue reading “Editorial of July 2023”

The future regulation on non-contractual civil liability for AI systems

By Susana Navas Navarro (Professor at the Universidad Autónoma de Barcelona)

I was surprised and struck by the fact that, after all the work carried out within the European Union (“EU”) on the subject of civil liability for Artificial Intelligence (“AI”) systems, the European Commission has opted for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, or “Proposal for a Directive”) as the instrument to regulate this issue. Moreover, a Directive with a content focused exclusively on two issues: a) the disclosure of relevant information for evidence purposes or to decide whether or not to bring a lawsuit and against whom (Article 3); and b) the presumption of a causal link between the defendant’s fault and the result, or absence thereof, that an AI system should produce (Article 4). The argument for this is the disparity of civil liability regimes in Europe and the difficulties that have always existed in harmonising them (see the Explanatory Memorandum accompanying the Proposal, p. 7). Choosing a Regulation, as proposed by the European Parliament[1] or in the White Paper on AI, would have allowed such harmonisation and could have included rules on evidence. It seems to me that behind this decision lies the invisible struggle, previously evidenced in other issues, between the Commission and the European Parliament. I believe that the risks for all involved in the use and handling of AI systems, especially high-risk ones, are compelling reasons in favour of harmonisation and strict liability.

In relation to this aspect, the Proposal for a Directive abandons the risk-based approach that had been prevailing in this area, since it assumes that the civil liability regimes in most of the Member States are based on fault. This is reflected, for example, in Article 3(5), which presumes the breach of the duty of care by the defendant; in Article 2(5), which defines the action for damages; and in Article 4(1), which admits the presumption of a causal link between the defendant’s fault and the result produced by the AI system – or the absence or failure in the production of such a result – which causes the damage. Therefore, if, under the national civil liability regime, the case were subsumed under a strict liability regime (e.g., equated to the use or operation of a machine, or the vicarious liability of the employer), these rules would not apply. National procedural systems, in relation to access to evidence, are not so far from the provisions of this future Directive.

Continue reading “The future regulation on non-contractual civil liability for AI systems”

Editorial of December 2021

By Alessandra Silveira (Editor)

AI systems and automated inferences – on the protection of inferred personal data

On 23 November 2021 the European Commission published the consultation results on a set of digital rights and principles to promote and uphold EU values in the digital space – a consultation which ran between 12 May and 6 September 2021.[1] This public consultation on digital principles is a key deliverable of the preparatory work for the upcoming “Declaration on digital rights and principles for the Digital Decade”, which the European Commission will announce by the end of 2021. The consultation invited all interested people to share their views on the formulation of digital principles in 9 areas: i) universal access to internet services; ii) universal digital education and skills for people to take an active part in society and in democratic processes; iii) accessible and human-centric digital public services and administration; iv) access to digital health services; v) an open, secure and trusted online environment; vi) protecting and empowering children and young people in the online space; vii) a European digital identity; viii) access to digital devices, systems and services that respect the climate and environment; ix) ethical principles for human-centric algorithms.

Continue reading “Editorial of December 2021”

Editorial of June 2021

By Tiago Sérgio Cabral (Managing Editor)

Data Governance and the AI Regulation: Interplay between the GDPR and the proposal for an AI Act

It is hardly surprising that the recent European Commission’s proposal for a Regulation on a European Approach for Artificial Intelligence (hereinafter the “proposal for an AI Act”) is heavily inspired by the GDPR: from taking note of the GDPR’s success in establishing worldwide standards to learning from its shortcomings, for example by suppressing the one-stop-shop mechanism (arguably responsible for some of its enforcement woes).[1]

The proposal for an AI Act should not be considered a GDPR for AI for one singular reason: there is already a GDPR for AI, and it is called the GDPR. The scope and aims of the proposal are different, but there is certainly a high degree of influence, and the interplay between the two Regulations, if the AI Act is approved, will be interesting. In this editorial we will address one particular aspect where the interplay between the GDPR and the AI Act could be particularly relevant: data governance and data set management.

Before going into this subject specifically, it is important to note that the AI Act’s proposed fines have a higher ceiling than the GDPR’s: up to 30,000,000 euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year (Article 71(3) of the proposal for an AI Act). We should note, nonetheless, that this specific value is applicable only to a restricted number of infringements, namely:

Continue reading “Editorial of June 2021”

Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks

by Piedade Costa de Oliveira (Former official of the European Commission - Legal Service)
Disclaimer: The opinions expressed are purely personal and are the exclusive responsibility of the author. They do not reflect any position of the European Commission

The use of algorithms for automated decision-making, commonly referred to as Artificial Intelligence (AI), is becoming a reality in many fields of activity both in the private and public sectors.

It is common ground that AI raises considerable challenges, not only for the fields in which it is deployed but also for society as a whole. As pointed out by the European Commission in its White Paper on AI,[i] AI entails a number of potential risks, such as opaque decision-making, gender-based bias or other kinds of discrimination, or intrusion on privacy.

In order to mitigate such risks, Article 22 of the GDPR confers on data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them[ii].

Continue reading “Artificial intelligence: 2020 A-level grades in the UK as an example of the challenges and risks”

Universal basic income and artificial intelligence


 by Charize Hortmann, Master in Human Rights, UMinho

The world’s economy has currently reached an unprecedented juncture. If, on the one hand, never has so much wealth been accumulated,[i] on the other it is undeniable that inequality between the richest and the poorest increases by the minute.[ii] At the same time, we are getting close to fulfilling the greatest threat brought about by the first Industrial Revolution: technological unemployment,[iii] due to the advance and improvement of certain technologies, such as Artificial Intelligence (AI) and the Internet of Things (IoT).

Considering this scenario, much thought has been given to solutions that seek to curb the progress of social inequalities, as well as to alternatives to the prospect of massive unemployment worldwide.
Continue reading “Universal basic income and artificial intelligence”

Is the European Union’s legal framework ready for AI-enabled drone deliveries? A preliminary short assessment – from the Commission Implementing Regulation 2019/947/EU to data protection


 by Marília Frias, Senior Associate at Vieira de Almeida & Associados
 and Tiago Cabral, Master in EU Law, University of Minho

1. As we are writing this short essay, a significant percentage of the world population is at home, in isolation, as a preventive measure to stop the spread of the COVID-19 pandemic. Of course, for isolation to be effective, people should only leave their houses when strictly necessary, for instance, to shop for essential goods, and, frequently, preventive measures include closure orders directed at all non-essential businesses.

2. Unfortunately, the European Union (hereinafter, “EU”) is one of the epicentres of the pandemic. As a result, some European citizens are turning to e-commerce to buy goods not available in the brick-and-mortar shops that are still open. Meanwhile, others opt to bring their shopping into the online realm simply to reduce the risk of contact and infection. Currently, sustaining the market as best as possible under these conditions to avoid a (stronger) economic crisis should be one of the key priorities. Furthermore, with a growing number of people working remotely, it is also vital to guarantee that the necessary supplies can arrive in time and with no health-related concerns attached.

3. Nowadays, most delivery services work based on humans who physically get the product from point A and deliver it to point B. The system is more or less the same, whether the reader orders a package from China or delivery from the pizza place 5 minutes away from the reader’s house. Obviously, more people will be involved in the delivery chain in our first example, but it is still, at its core, a string of people getting the order from point A to point B. This is a challenge for those working in the delivery and transportation businesses who have to put their health on the line to ensure swift delivery of products to the ones who are at home.
Continue reading “Is the European Union’s legal framework ready for AI-enabled drone deliveries? A preliminary short assessment – from the Commission Implementing Regulation 2019/947/EU to data protection”