Iris collection as a proof of personhood: current trends on biometric recognition

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 
           

In Portugal, more than 300,000 people have already “sold” their iris scan to Worldcoin Foundation, which in return offers them cryptocurrency. In March 2024, the Portuguese data protection authority (hereinafter, the CNPD) decided to suspend the company’s collection of iris and facial biometric data for 90 days in order to protect the right to the protection of personal data, especially of minors, following in the footsteps of Spain, which also temporarily banned the company’s activities for privacy reasons.[1]

In a statement, the CNPD explains that the company has already been informed of this temporary suspension, which will last until the investigation is completed and a final decision is made on the matter. The adoption of this urgent provisional measure comes in the wake of “dozens of reports” received by the CNPD in the last month, concerning the collection of data from minors without the authorisation of their parents or other legal representatives, as well as deficiencies in the information provided to data subjects and the impossibility of deleting data or revoking consent.[2] In the CNPD’s press release, one can read that “[g]iven the current circumstances, in which there is unlawful processing of the biometric data of minors, combined with potential infringements of other GDPR rules, the CNPD considered that the risk to citizens’ fundamental rights is high, justifying an urgent intervention to prevent serious or irreparable harm.”[3]


The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act

Inês Neves (Lecturer at the Faculty of Law, University of Porto | Researcher at CIJ | Member of the Jean Monnet Module team DigEUCit) 
           

March 2024: a significant month for both women and Artificial Intelligence

In March 2024 we celebrate women. But March was not only the month of women. It was also a historic month for AI regulation. And, as #TaylorSwiftAI has shown us,[1] they have a lot more in common than you might think.

On 13 March 2024, the European Parliament approved the Artificial Intelligence Act,[2] a European Union (EU) Regulation proposed by the European Commission back in 2021. While the law has yet to be published in the Official Journal of the EU, it is fair to say that it makes March 2024 a historic month for Artificial Intelligence (‘AI’) regulation.

In addition to the EU’s landmark piece of legislation, the Council of Europe’s path towards the first legally binding international instrument on AI has also made progress with the finalisation of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.[3] Like the EU’s cornerstone legislation, this will be a ‘first of its kind’, aiming to uphold the Council of Europe’s legal standards on human rights, democracy and the rule of law in relation to the regulation of AI systems. With its finalisation by the Committee on Artificial Intelligence, the way is now open for its signature at a later stage. While the non-self-executing nature of its provisions is to be expected, some doubts remain as to its full potential, given the high level of generality of its provisions and their declarative nature.[4]


EU’s policies to AI: are there blindspots regarding accountability and democratic governance?

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 
           

In her State of the Union (SOTEU) 2023 speech, the President of the European Commission, Ursula von der Leyen, addressed several pressing issues, including artificial intelligence (AI). In this regard, she highlighted that leading AI creators, academics and experts have issued a warning about AI, stressing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, adding that AI is advancing at a faster pace than its creators predicted.[1]

The President of the European Commission also argued that, of the three pillars of the global framework for AI – guardrails, governance, and guiding innovation – guardrails are the most important: AI must be developed in a way that is human-centred, transparent, and accountable. Indeed, in Europe we have witnessed such an approach to the development of AI, as evidenced by various official documents and reports from different scientific communities,[2] which also emphasise the need to build trust in this type of technology.


Editorial of July 2023

By Alessandra Silveira (Editor) and Maria Inês Costa (PhD candidate, School of Law, University of Minho) 

Regulating Artificial Intelligence (AI): on the civilisational choice we are all making

It is worth highlighting the role of the European Parliament (EP) in taking its stance on the negotiation of the AI Regulation, which aims to regulate the development and use of AI in Europe.[1] With the EP having approved its position, the European institutions may start trilogue negotiations (the Council voted on its position in December 2022). The AI Regulation that will apply across the European Union (EU) will only enter into force if the co-legislators agree on a final wording.

The AI Regulation follows a risk-based approach, i.e., it establishes obligations for those who provide and those who use AI systems according to the level of risk that the application of the AI system entails: is the risk high, is it low, is it minimal? In other words, there is a hierarchisation of risks, and the different levels of risk will correspond to more or less regulation, more or fewer impositions, more or fewer restrictions. The EP’s position, while introducing further safeguards (for example, on generative AI), does not deviate from the idea that the Regulation should protect citizens without jeopardising technological innovation. To this end, systems with an unacceptable level of risk to people’s safety should be banned, and the EP extended the list of prohibited AI uses set out in the Commission’s original proposal. These include, for instance, systems used to classify people based on their social behaviour or personal characteristics (such as Chinese-style social control systems); emotion recognition systems in the workplace and educational establishments; predictive policing systems based on profiling or past criminal behaviour; and remote, real-time biometric identification systems (such as facial recognition) in publicly accessible spaces.


The future regulation on non-contractual civil liability for AI systems

By Susana Navas Navarro (Professor at the Universidad Autónoma de Barcelona)

I was surprised and struck by the fact that, after all the work carried out within the European Union (“EU”) on the subject of civil liability for Artificial Intelligence (“AI”) systems, the European Commission has opted for a Directive (the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, or “Proposal for a Directive”) as the instrument to regulate this issue. Moreover, it is a Directive whose content is focused exclusively on two issues: a) the disclosure of relevant information for evidence purposes or to decide whether or not to bring a lawsuit, and against whom (Article 3); and b) the presumption of the causal link between the defendant’s fault and the result, or absence thereof, that an AI system should produce (Article 4). The argument for this is the disparity of civil liability regimes in Europe and the difficulties that have always existed in harmonising them (see the Explanatory Memorandum accompanying the Proposal, p. 7). Choosing a Regulation, as proposed by the European Parliament[1] or in the proposals of the White Paper on AI, would have allowed such harmonisation and could have included rules on evidence. It seems to me that behind this decision lies the invisible struggle, previously evidenced in other issues, between the Commission and the European Parliament. I believe that the risks for all involved in the use and handling of AI systems, especially high-risk ones, are compelling reasons in favour of harmonisation and strict liability.

In relation to this aspect, the Proposal for a Directive abandons the risk-based approach that had been prevailing in this area, since it assumes that the civil liability regimes in most of the Member States are based on fault. This is reflected, for example, in Article 3(5), when presuming the breach of the duty of care by the defendant; directly in Article 2(5), when defining the action for damages; and in Article 4(1), when admitting the presumption of the causal link between the defendant’s fault and the result produced by the AI system, or the absence or failure in the production of such a result, which causes the damage. Therefore, if, under the national civil liability regime, the case were subsumed under a strict liability regime (e.g., one equated to the use or operation of a machine, or to the vicarious liability of the employer), these rules would not apply. National procedural systems, in relation to access to evidence, are not so far from the provisions of this future Directive.
