The EU’s policies on AI: are there blind spots regarding accountability and democratic governance?

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 
           

In her recent State of the Union (SOTEU) 2023 speech, the President of the European Commission, Ursula von der Leyen, addressed several pressing issues, including artificial intelligence (AI). In this regard, she highlighted that leading AI creators, academics and experts have issued a warning about AI, stressing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, and adding that AI is advancing at a faster pace than even its creators predicted.[1]

The President of the European Commission also argued that, of the three pillars of the global framework for AI – guardrails, governance, and guiding innovation – guardrails are the most important, and that, in this sense, AI must be developed in a way that is human-centred, transparent, and accountable. Indeed, Europe has witnessed such an approach to the development of AI, as evidenced by various official documents and reports from different scientific communities,[2] which also emphasise the need to build trust in this type of technology.

All of these tend to agree on some key points: i) the development of AI systems should be focused on improving the lives of citizens, society, and the economy as a whole; ii) such systems should be transparent, traceable, secure and should not encourage unfair bias; iii) they must empower humans, allowing them to make informed decisions, and promote their fundamental rights; and iv) they must foster a sustainable environment.

While these objectives are regularly stated – protecting human dignity and privacy being a priority in the European Union (EU) – caution has been voiced that the implementation of AI may pose a threat to the fundamental values on which the EU was founded and could result in breaches of individual rights, extending to the safeguarding of personal data and privacy, access to a fair trial and an effective judicial remedy, as well as consumer protection.[3] In fact, the risks are vast, because the use of AI extends to all kinds of sectors in our society, and its omnipresence cannot be denied.

Given the significance of regulating the development and use of AI, the legal framework applicable to AI in the EU exemplifies this concern and commitment to mitigate risks, and it embodies a human-centric vision, including the notion that AI systems “should be overseen by people, rather than by automation, to prevent harmful outcomes”.[4] We have previously discussed the EU AI Act on this blog,[5] but we shall briefly outline some of its main provisions, especially considering the changes proposed by the European Parliament in June 2023. For instance, it defines prohibited practices, such as the use of AI systems by or on behalf of public authorities to assess the trustworthiness of individuals over a given period of time based on their social behaviour or known or predicted personal or personality characteristics, where such social scoring leads to unfair treatment or discrimination. It also outlines various levels of risk for AI systems based on their opacity, complexity, data dependence and autonomous behaviour. High-risk systems will undergo more assessments and compliance tests, whereas providers of non-high-risk AI systems will only be encouraged to follow a voluntary code of conduct aligned with the requirements applicable to high-risk systems.[6]

As one can observe, common elements are evident in various EU initiatives concerning the development and regulation of AI systems, which often employ keywords or key expressions such as trust, safety, traceability, acceptability, risk assessment, fairness, non-discrimination, and so on. While all of these regulatory efforts are to be commended, several authors point to the need to delve deeper into what these terms really mean, to assess the true impact of these new rules in a world that is undergoing a real paradigm shift, and to evaluate critically and continuously the emerging approaches to the use of AI in contemporary democracies.

In fact, Lilian Edwards delivered an analysis of the EU AI Act, identifying, among other things, insufficient public consultation and a lack of accountability. The author highlights that, although high-risk AI providers must certify that they comply with fundamental rights, in so far as they are obliged to undergo conformity assessments, users are not given a voice in this matter, even though they are the ones whose fundamental rights will really be affected. Moreover, since “industry-dominated technical bodies” have the capacity to translate the regulation into highly specific standards, they are likely to exert substantial influence over the industry and its construction. There is no democratic process here that incorporates civil society – hence, there is a general lack of public input.[7]

Regarding the risk assessment criteria, the AI Act determines that “the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.” Also, “the list of high-risk AI systems in Annex III[8] contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future.” In this respect, Natali Helberger and Nicholas Diakopoulos contend that, although considerable effort has gone into constructing this regulation, the risk assessment criteria are not adequate when it comes to generative AI systems.

The authors go on to state that ChatGPT and other generative AI systems diverge from “traditional” AI systems – for which the law was originally drafted – in two crucial respects: dynamic context and scale of use. Generative AI systems lack a specific context or predetermined conditions of use, which facilitates their open deployment at a scale that is not easily controlled. As a result, they can serve various purposes across a diverse range of contexts, thereby endangering fundamental rights.[9]

How do such risks arise? Given that, according to the AI Act, “for some specific AI systems, only minimum transparency obligations are proposed, in particular when chatbots or ‘deep fakes’ are used”, a regulatory gap may emerge. Indeed, Helberger and Diakopoulos elaborate that, for chatbots such as ChatGPT, the burden of determining whether the system poses high or low risk rests on the user rather than the provider. However, it is essential for generative AI providers to prioritise safety from the outset; otherwise, “any potential biases, privacy violations, unlawful uses of content or other instances of unfairness in the data or the model will trickle down into a myriad of possible future applications.”[10]

As Dag Elgesem explains, chatbots can have a significantly detrimental impact on democracies, even if the associated risks are not directly individual and concrete – and hence do not meet the criterion of being risky in specific roles and contexts, as the AI Act determines. To elaborate on this point, the author refers to OpenAI’s report, which noted that the GPT-3 chatbot could produce highly persuasive extremist and false content. In this regard, the primary danger lay with society at large – not the individual – due to the dissemination of harmful material that may radicalise people, fuel polarisation, create distrust in information sources and erode rational discourse over time.

In multiple contexts, the harm stems from the widespread deployment of AI systems that propagate falsehoods, rather than from specific circumstances. Hence, for Elgesem, the main concern is that generative AI can broadly propagate radicalising disinformation and thereby affect democracies, while the current rules overlook risks that are diffuse across applications rather than localised to specific scenarios.[11]

It should be mentioned that some scholars advocate regulatory approaches different from the one laid down in the Act: for instance, applying a common baseline that upholds data governance, non-discrimination, and cybersecurity for all AI systems, while enforcing more stringent regulation and control over the deployers and professional users of AI when it is being used in high-risk scenarios – only in truly high-risk settings should the AI Act be applied in its entirety, rather than as a standard imposed on all large generative AI models.[12]

Regarding the issue of non-discrimination, Hacker et al. argue that it should be addressed at the beginning of a system’s development, rather than allowing bias to propagate throughout the AI value chain. The authors recommend improving training data, rather than focusing solely on end uses. For example, to avoid gender bias, names and/or images in training data related to professions should be changed. This approach aims to ensure that creators bear a regulatory burden for their creations – they should invest in reducing the risk of discrimination in the models they build.[13]

It is worth mentioning that on 17 October 2023 the (Spanish Presidency of the) Council of the EU published a set of draft compromises regarding the EU AI Act, covering very significant matters, some of which are discussed in this article.[14] It is noted that there are upcoming discussions at the fourth trilogue, where representatives from the Parliament, the Council, and the Commission will convene to address the following issues: the classification of AI systems as high-risk; the list of high-risk AI use cases (excluding those related to biometrics and law enforcement authorities, which will be dealt with later); and the testing of high-risk AI systems in real-world conditions outside AI regulatory sandboxes, i.e., outside controlled environments.

In addition, the document addresses general purpose AI (GPAI) systems and, in this regard, states that it is necessary to ensure the proper allocation of responsibilities along the value chain when such systems are used at scale by downstream providers to develop high-risk AI systems. Furthermore, a distinction is made between foundation AI models and narrow traditional AI models, which entails different obligations. In this respect, foundation AI models, being “capable to competently perform a wide range of distinctive tasks”, must be subject to certain conditions, both before and after being placed on the market. Before releasing a foundation model to the market, comprehensive documentation on the model, the training process, standardised protocol evaluations, and benchmarks must be prepared. Once the model is on the market, the downstream provider must receive the information and documentation needed to test the foundation models they use to build products. By setting out these criteria, the aim is to guarantee sufficient information for conducting audits, detecting and rectifying mistakes, and fostering transparency and accountability.

In this document, it is considered that GPAI systems carry higher risks due to their generality and wide range of adoption, as the terminology suggests. As regards “very capable” foundation models, i.e., models whose capabilities go beyond what currently exists and are not yet fully understood, these should be subject to additional obligations. Moreover, “further consideration is […] needed as to how to ensure inclusion of guardrails, at either (very capable) model or GPAI system (at scale) level, to address the risk of illegal or harmful output of the GPAI system, including safeguards against misuse or autonomous use to generate such output”, which suggests the importance of addressing not only the system itself, but also its intended use. We believe that there is scope for this draft proposal to start addressing some of the concerns outlined by researchers and the scientific community in general, as it goes deeper into different types of AI systems and tries to introduce more obligations for effective accountability and less bias, although there is still a long way to go.

More recently, the European Data Protection Supervisor (EDPS) released an Opinion (44/2023) on the Regulation, considering recent legislative developments. For instance, the Opinion recommends that the Regulation should explicitly forbid “any use of AI to carry out any type of ‘social scoring’ – and not only when performed ‘over a certain period of time’ or ‘by public authorities or on their behalf’”, as well as “any use of AI systems categorising individuals from biometrics into clusters according to ethnicity, gender, as well as political or sexual orientation, or other grounds for discrimination prohibited under Article 21 of the Charter”,[15] without the exceptions provided for in the AI Act.[16] This stricter standpoint adopted by the EDPS highlights a decisive challenge: how to find the right balance between security requirements and the preservation of citizens’ privacy and data protection.

Why are all these critiques, ongoing evaluations, and political negotiations so crucial? We currently live in a swiftly changing world, in which even the designers of AI did not foresee the rapid growth of the technology for which they are responsible, as mentioned at the beginning of this article. As Yiannis Laouris recalls in the Onlife Manifesto, in ancient Greece citizens worked together to search for and analyse meanings and alternatives through a process known as “deliberation”, the aim being to fully understand the underlying issues, clarify the debate and ultimately reach a consensus. The author goes on to suggest that, in today’s technological world, democracy needs to be reinvented to enable millions of people to participate effectively and to access all the relevant information that results from their decisions.[17]

In today’s unique digital modernity, we are witnessing the active but also unconscious production of personal data by individuals. At the same time, digital existence and its implications for politics are not predetermined – they represent a continuous process of replacing old frames of reference with new, largely unknown ones.[18] It is precisely because of this new reality that regulation is urgently needed and, above all, must be constantly adapted, since the fundamental characteristic of this AI-dominated reality is its unparalleled mutability.

In this vein, this brief article hopefully serves as a reminder that it is crucial to maintain an ongoing reassessment of multidisciplinary dialogue and practices on AI, with fundamental rights placed at the heart of discussion and action. Furthermore, adequate investment in research into the legal and ethical considerations relating to the use and development of AI is not only important, but imperative to ensure the long-term preservation of our democracies and the rule of law.


[1] European Commission, “2023 State of the Union Address by President von der Leyen”, 13 September 2023, https://ec.europa.eu/commission/presscorner/detail/en/speech_23_4426.

[2] See, for instance, European Commission, “International Outreach for human-centric Artificial Intelligence initiative”, https://digital-strategy.ec.europa.eu/en/policies/international-outreach-ai; AI HLEG, “Ethics guidelines for trustworthy AI”, 8 April 2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai; European Commission, “White Paper on Artificial Intelligence: a European approach to excellence and trust”, 19 February 2020, https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en; European Commission, “Fostering a European approach to Artificial Intelligence”, Brussels, 21.4.2021, COM(2021) 205 final, https://eur-lex.europa.eu/legal-content/en/TXT/?uri=COM%3A2021%3A205%3AFIN.

[3] European Commission, “White Paper”, 11.

[4] European Parliament, “EU AI Act: first regulation on artificial intelligence”, 14 June 2023, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

[5] See Alessandra Silveira and Maria Inês Costa, “Regulating Artificial Intelligence (AI): on the civilisational choice we are all making”, The Official Blog of UNIO – Thinking and Debating Europe, Editorial of July 2023, July 17, 2023, https://officialblogofunio.com/2023/07/17/editorial-of-july-2023/#more-6016.

[6] European Parliamentary Research Service, “Artificial intelligence act – Briefing”, June 2023, 4-6, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf.

[7] Lilian Edwards, “Regulating AI in Europe: four problems and four solutions”, Ada Lovelace Institute, March (2023): 10-12, https://www.adalovelaceinstitute.org/wp-content/uploads/2022/03/Expert-opinion-Lilian-Edwards-Regulating-AI-in-Europe.pdf.

[8] These are, as included in Article 7 of the AI Act: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; administration of justice and democratic processes.

[9] Natali Helberger and Nicholas Diakopoulos, “ChatGPT and the AI Act”, Internet Policy Review, 12(1) (2023): 2-3. Doi: https://doi.org/10.14763/2023.1.1682.

[10] Natali Helberger and Nicholas Diakopoulos, “ChatGPT and the AI Act”, 3.

[11] Dag Elgesem, “The AI Act and the risks posed by generative AI models”, NAIS 2023: The 2023 symposium of the Norwegian AI Society, June 14-15, Bergen, Norway (2023): 5, https://ceur-ws.org/Vol-3431/paper3.pdf.

[12] See Philipp Hacker, et al., “Regulating ChatGPT and other Large Generative AI Models”, Fairness, Accountability, and Transparency (FAccT ’23), June 12–15 (2023): 1115-1119. Doi: https://doi.org/10.1145/3593013.3594067.

[13] Philipp Hacker, et al., “Regulating ChatGPT”, 1119-1120.

[14] Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Preparation for the trilogue, Interinstitutional File: 2021/0106(COD), Brussels, 17 October 2023, 2023-10-17-conseil-ia-mandat-de-negociation-10412dc9fadd4e4fa9b0360960fd13af.pdf (table.media).

[15] European Data Protection Supervisor (EDPS), “Opinion 44/2023 on the Proposal for Artificial Intelligence Act in the light of legislative developments”, October 23, 2023, 6, https://edps.europa.eu/system/files/2023-10/2023-0137_d3269_opinion_en.pdf.

[16] Article 5(1) of the AI Act: “The following artificial intelligence practices shall be prohibited (…) (d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.”

[17] Yiannis Laouris, “Commentary”, in The Onlife Manifesto: being human in a hyperconnected era, ed. Luciano Floridi (Heidelberg, New York, Dordrecht, London: Springer Cham, 2015), 32.

[18] Ulrich Beck, The Metamorphosis of the World (Cambridge: Polity Press, 2016), 143-145.

Picture credits: by Tara Winstead on Pexels.com.
