The need for an egalitarian ethical framework for emerging technologies

Manuel Protásio (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – Bolsa UI/BD/152801/2022) 
           

The blurring boundary between humans and machines introduces a crucial dichotomy between consciousness and information, shaping the dynamics of our technological engagement. The “limbo” between humans and technologies, situated around perception, is central to how the law assesses their potential effects on human behaviour.

According to Kantian philosophy, perception is a private, subjective, and observer-dependent mechanism which, by its nature, grants the subject a sensation of agency over physical reality – their environment. This feeling of agency can be understood as the empowering subjective experience that is often translated into the individual’s freedom and autonomy. If it is true that synthetical perception confers agency over perceived objects as they are read into our reality, it must also be true that illusions – reasoning mistakes based on our perception – can be triggered whenever our perception follows systematic errors: when we store wrong information about perceived objects, or when we use the wrong model of perception to interpret the external world.[1]

In the short and long term, technologies like Augmented Reality (AR) or Artificial Intelligence (AI) will convey analytical information from the physical world into our perception, potentially triggering changes in our synthetical perception that can lead to the loss of agency over our own reality. Virtual Reality (VR), on the other hand, can trigger the same effect by deceiving the synthetical sensory feedback of our biological perception and replicating it through technological means.


EU’s policies on AI: are there blind spots regarding accountability and democratic governance?

Maria Inês Costa (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – UI/BD/154522/2023) 
           

In her recent State of the Union (SOTEU) 2023 speech, the President of the European Commission, Ursula von der Leyen, addressed several pressing issues, including artificial intelligence (AI). She highlighted that leading AI creators, academics and experts have issued a warning about AI, stressing that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”, and added that AI is advancing at a faster pace than its creators predicted.[1]

The President of the European Commission also argued that of the three pillars of the global framework for AI – guardrails, governance, and guiding innovation – guardrails are the most important: AI must be developed in a way that is human-centred, transparent, and accountable. Indeed, Europe has taken such an approach to the development of AI, as evidenced by various official documents and reports from different scientific communities,[2] which also emphasise the need to build trust in this type of technology.
