Editorial of July 2023

By Alessandra Silveira (Editor) and Maria Inês Costa (PhD candidate, School of Law, University of Minho) 

Regulating Artificial Intelligence (AI): on the civilisational choice we are all making

It is worth highlighting the role of the European Parliament (EP) in taking its stance on the negotiation of the AI Regulation, which aims to regulate the development and use of AI in Europe.[1] With the EP having approved its position, the European institutions may start trilogue negotiations (the Council voted on its position in December 2022). The AI Regulation that will apply across the European Union (EU) will only enter into force if the co-legislators agree on a final wording.

The AI Regulation follows a risk-based approach, i.e., it establishes obligations for those who provide and those who use AI systems according to the level of risk that the application of the AI system entails: is the risk high, low, or minimal? In other words, there is a hierarchisation of risks, and the different levels of risk correspond to more or less regulation, more or less impositions, more or less restrictions. The EP’s position, even if it introduces further safeguards (for example, on generative AI), does not deviate from the idea that the Regulation should protect citizens without jeopardising technological innovation. To this extent, systems with an unacceptable level of risk to people’s safety should be banned, and the EP extended the list of prohibited AI uses in the Commission’s original proposal. These include, for instance, systems used to classify people based on their social behaviour or personal characteristics (such as Chinese-style social control systems); emotion recognition systems in the workplace and educational establishments; predictive policing systems based on profiling or past criminal behaviour; and remote, real-time biometric identification systems (such as facial recognition) in publicly accessible spaces.

Against this background, it is important to question whether these standards are indeed sufficient to ensure that AI developed and used in the EU fully respects EU rights and values, including safety, privacy, transparency, and non-discrimination. Moreover, one should ponder to what extent these standards ensure proper human oversight of AI-related processes (at least for those systems that have a significant impact on citizens’ fundamental rights), as proposed by citizens at the Conference on the Future of Europe. These matters raise many doubts.

In fact, this Regulation on AI has been under negotiation for two years, and by the time it is implemented, digital realities for which it was not originally designed will already exist, as happened with the General Data Protection Regulation (GDPR) and as will likely happen with the Digital Services Regulation (which will apply from February 2024). And why? Because rules that intend to regulate the use of digital technologies tend to arrive late, as the law cannot keep pace with technological development. Law has always acted in deferred time, not in real time. Indeed, we will need cooperation between the various areas of knowledge here, with legal practitioners having to interpret the available legal solutions in a manner adequate to new technological developments, but, above all, we will need a concerted effort by computer scientists and engineers themselves to develop AI in a manner that is aligned with the general societal principles accepted in the EU.

Yet, it is worth emphasising that it is certainly better to have some regulation in this area than none at all. That is, it is better that generative AI systems (such as the famous ChatGPT) be required to reveal that content has been generated by AI (an obligation introduced by the EP), to help us detect image manipulation techniques and discern what is real from what is not. It is certainly preferable to regulate the testing of AI systems before their use, as well as to provide for a right to explanation of decisions taken on the basis of AI systems (at least those systems that have a significant impact on citizens’ fundamental rights).

However, individuals need to be informed about the limitations of law in this matter, and to be alerted to the civilisational choice we are all making. AI is a technology that runs counter to established models of ordering and explaining the world, insofar as it presents patterns and predictions that humans are unable to discern and conceptualise – it does not operate in the realm of human reason, for which concepts such as causality and intention are relevant. Rather, the patterns that emerge from AI systems are inaccessible to human consciousness and frequently inexpressible in human language, and computer engineers cannot explain how some AI systems arrive at a given result.[2]

It is true that, daily and frequently, we use technologies that we neither explain nor control, for the sake of convenience. However, explainability and (above all) reasonableness relating to AI systems are paramount, because we are dealing with a phenomenon of a different nature. In other words, in the presence of AI systems, human reason is no longer the only form of intelligence applied to understanding reality.[3] When human reason no longer has the exclusive role in exploring and shaping reality, when we accept AI as an adjunct to our own perceptions and thoughts, how will we ultimately see ourselves and our role in the world? How can AI be reconciled with the concepts of personal autonomy and human dignity, core values of our democracies and legal orders? Indeed, when it is not evident why a result has been achieved, it is not possible to assess the changes needed to arrive at a different solution, nor to challenge an unfavourable outcome adequately and consistently.

In the context of reflecting on AI and its benefits for society in various domains – from health to national security – we are quickly confronted with a myriad of challenges that this new paradigm raises, ultimately, for how we think about human beings and how we shape societies. Indeed, AI challenges existing notions of security, human rights, and governance,[4] and there is a growing need for governments and societies to learn how to respond to this technological breakthrough. It is pivotal to discuss to what extent we have adequate models to take this new path without collapsing, because AI can deceive us.

Recently, Geoffrey Hinton, a cognitive psychologist and computer scientist, and one of the greatest experts in AI, explained this in an interview with Fareed Zakaria.[5] He revealed that only a few months earlier he had had a sort of epiphany. Ever since he began working on AI, Hinton had wanted to develop a technology that would come close to, and mimic, the human brain, but always assuming that the brain would remain far superior. However, he suddenly realised that the algorithms he was developing might already be better than the brain, and that if scaled up they would be smarter than humans, because networked computers learn instantly: when one learns something, they all learn it – and each one learns different things and transmits them to the others simultaneously. The transmission of knowledge between humans does not work like that; it is much slower and more laborious. This is worrying because we have no other examples in history of a higher intelligence being controlled by a lower intelligence. Will AI keep working for us when it becomes more “intelligent”? And why would it not deceive us about critical infrastructures, for example?

Some neuroscientists reject the idea that the brain will be overtaken by machines. Miguel Nicolelis, for example, argues that the digital ecosystem is not capable of faithfully reproducing mental processes, nor of emulating the human power of creation and inventiveness. The author of “O verdadeiro criador de tudo” (in English, “The true creator of everything”) explains how the human brain has evolved to become an organic computer without rival in the known universe, mainly due to three fundamental properties: its malleability to adapt and learn; its ability to allow several individuals to synchronise their minds around a task, goal, or belief; and its incomparable capacity for abstraction. He admits, however, that human intelligence risks being shaped and impoverished by algorithms, in a kind of dystrophy of the brain’s potential capabilities.[6]

On the other hand, neuroscientist António Damásio has already recognised the possibility that machines can feel. This could break new ground in the history of AI, especially robotics, as the universe of affections is the foundation for the higher intelligence that conscious minds have gradually developed, expanded and imposed. According to Damásio, such machines with feelings would develop functional elements related to consciousness, as feelings are part of the path to consciousness – but such feelings will not equal those of living creatures.[7]

What is certain is that we are overdue for this debate, given that, as Miguel Poiares Maduro wrote a few days ago in a piece in the Expresso newspaper,[8] AI is not just changing our lives, it is changing what humanity is. Here is the crucial question: how will the evolution of AI affect human perception, cognition, interaction?[9] Faced with this “digital metamorphosis”, as Ulrich Beck put it,[10] do we have models to conceptualise it?

Indeed, the issue is more complex than merely regulating the use of AI, because AI poses problems about our understanding of reality and the role of humans within it. As Geoffrey Hinton warned, it is not as simple as tackling climate change: for that we have a recipe, which entails reducing carbon emissions and greenhouse gases. It is a costly endeavour, but we know what to do to achieve the green transition. With AI, however, we still lack clarity on the steps to take.[11]


[1] European Parliament, “AI Act: a step closer to the first rules on Artificial Intelligence”, Press Release, May 11, 2023, accessed July 12, 2023, https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence.

[2] On this theme see Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, A era da inteligência artificial (The Age of A.I.) (Alfragide: Dom Quixote, 2021). 

[3] Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, A era da inteligência artificial.

[4] Alex Wilner, “Cybersecurity and its discontents: Artificial intelligence, the Internet of Things, and digital misinformation”, International Journal, vol. 73(2), 2018, 316, https://journals.sagepub.com/doi/abs/10.1177/0020702018782496.

[5] CNN, “On GPS: does AI already threaten humanity?”, accessed July 13, 2023, https://edition.cnn.com/videos/tv/2023/06/11/exp-gps-0611-hinton-on-ai-threat-to-mankind.cnn.

[6] See Miguel Nicolelis, O verdadeiro criador de tudo – como o cérebro humano moldou o universo tal como o conhecemos (Elsinore, 2021).

[7] See António Damásio, Sentir & saber – a caminho da consciência (Feeling & knowing – making minds conscious), Temas e Debates (Lisbon: Bertrand, 2020).

[8] See Miguel Poiares Maduro, “O fim ou um novo início?”, Expresso, June 16, 2023, 37.

[9] See Henry Kissinger, Eric Schmidt, Daniel Huttenlocher, A era da inteligência artificial, 21.

[10] Ulrich Beck, A metamorfose do mundo (The metamorphosis of the world) (Lisbon: Edições 70, 2017).

[11] CNN, “On GPS: does AI already threaten humanity?”.

Picture credits: Photo by Pavel Danilyuk on Pexels.com.
