Manuel Protásio (PhD Candidate at the School of Law of the University of Minho. FCT research scholarship holder – Bolsa UI/BD/152801/2022)
▪
The blurring boundary between humans and machines introduces a crucial dichotomy between consciousness and information, shaping the dynamics of our technological engagement. The “limbo” between humans and technologies, situated around perception, is central to how the law assesses their potential effects on human behaviour.
According to Kantian philosophy, the act of perception is a private, subjective, and observer-dependent mechanism, which, by its nature, grants the subject a sensation of agency over physical reality – their environment. This feeling of agency can be understood as the empowering subjective experience that is often translated into the individual’s freedom and autonomy. If it is true that synthetic perception confers agency over perceived objects as they are read into our reality, it must also be true that illusions – reasoning mistakes based on our perception – can be triggered when our perception follows systematic errors, that is, whenever we store wrong information about perceived objects or apply the wrong model of perception to interpret the external world.[1]
In the short and long term, technologies like Augmented Reality (AR) or Artificial Intelligence (AI) convey analytical information about the physical world and thus trigger potential changes in our synthetic perception, which can lead to the loss of agency over our own reality. Virtual Reality (VR), on the other hand, can trigger the same effect by deceiving the synthetic sensory feedback of our biological perception and replicating it through technological means.
In Neuropsychiatry, for instance, it is argued that “autonomous choice depends for its existence upon certain human functions such as the ability to reason, judge, and assess consequences”.[2] If this is the case, then there is no autonomy when these capacities are compromised. Technologies like AR or VR, which can assign meaning according to the purposes of the user or the developer, either by overlaying digital information on our reality or by confining us in an immersive virtual environment, will actively participate in our conceptual reality. The same is true for AI: whether in the form of Large Language Models (LLMs), such as ChatGPT, or as an additional layer in human decision-making, its use will eventually introduce a technological input into our conceptual reality, with uncertain consequences and legal relevance for our society. The same might be argued for phones, whose effects have permanently changed the way communication works.
This academic inquiry engages with the epistemological framework of Cognitive Integration,[3] exploring its implications in the context of VR and AI. The main idea posits that individuals can assume epistemological responsibility for their belief processes when these processes emanate from their cognitive abilities and result from cooperative interactions within their cognitive systems. That epistemological responsibility is, in fact, what we, as individuals, assume is perceived in a given situation – the individual and subjective responsibility for our own perception. If an individual is in an altered state of consciousness, such responsibility will be diminished or increased accordingly, depending on whether that individual can be regarded, once assessed by the law, as an expert in the specific subject matter.
Within the realm of VR and AI, the technical complexities and potential applications of these technologies can be interpreted as extensions of our biologically rooted cognitive perception of the world. This suggests that the sensory validation of beliefs and concepts transcends reliance solely on our integrated biological cognitive system to include extensions facilitated by these technological means. Drawing parallels with real-life scenarios, this situation echoes that of an individual undergoing cognitive alterations induced by external stimuli or conditions that may compromise the reliability of their knowledge.
In fact, a noteworthy aspect of VR and AI lies in their ability to create scenarios where users do not question their own perceptual processes but, instead, grow more confident in their reliability. Users may find themselves in positions akin to that of experts in specific realms of knowledge or, conversely, they may be susceptible to distraction in the real environment, placing them in a state of ignorance. In the latter scenario, their cognitive position aligns with that of an individual experiencing an altered state of consciousness.
This exploration raises a pivotal question for Epistemology and Philosophy concerning users of AI, VR, or any Extended Reality (XR) technology, prompting a reconsideration of what constitutes a cognitively enhanced individual and of what counts as genuine knowledge. If someone can effortlessly acquire specific knowledge that traditionally demanded significant time and effort, does mere acquaintance with analytical information about a certain aspect of the world suffice, or is the investment of time and effort indispensable for authentic knowledge acquisition?
This discourse contributes to the ongoing philosophical dialogue surrounding cognitive processes, technological augmentation, and the evolving nature of knowledge in a decade characterised by advancements in VR and AI.
As society navigates the age of ChatGPT and immersive XR experiences, a profound shift in our understanding of perception and reality is imminent. The advent of advanced AI systems and virtual environments necessitates ethical consideration and societal regulation to ensure responsible use. Failure to make informed decisions could result in a reality where the boundaries between the authentic and the simulated become indistinguishable, potentially compromising individual agency, autonomy, and the fabric of our shared reality. The European Union (EU) should advocate for an ethical framework centred on endowments, prioritising an egalitarian approach over a utilitarian, consumer-driven one in regulating technology access and use.
Brain-computer interfaces (BCIs) as an ethical horizon
BCIs hold immense promise in establishing a direct link between the human brain and computers, offering unprecedented opportunities for collaboration. This is particularly significant for individuals with physical disabilities, as BCIs pave the way for enhanced communication and device control. Regulatory challenges, however, surround the ethical framework for equitable access to BCIs. Dworkin’s “luck egalitarianism”[4] suggests prioritising individuals with deficiencies, such as physical disabilities, aligning access to this technology with the ethos of justice.[5]
Bridging endowments to VR and ChatGPT
Extending Dworkin’s “luck egalitarianism” to emerging technologies like VR and ChatGPT establishes a comprehensive ethical approach. This framework emphasises prioritising individuals in need, ensuring that technology empowers those with physical or cognitive limitations.
Within the EU, the commitment to human rights and equality offers an institutional framework consonant with Dworkin’s theory. Regulatory bodies should collaborate to establish rules that address societal inequalities and promote inclusivity in regulating the technological landscape.
Informed ethical decision-making that prioritises endowments and considers the marginalised would position the EU at the forefront of shaping a future where technology fosters equality rather than profit, laying the foundation for a balanced and just technological landscape.
The crucial role of public discourse in ethical decision-making
In shaping ethical guidelines, institutional frameworks and regulatory bodies play a pivotal role. However, public discourse is equally essential. Informed citizens, engaged in the dialogue surrounding technologies like BCIs, VR, and ChatGPT, can contribute to a robust ethical framework. The current mechanisms in the EU are insufficient for addressing potential ethical concerns raised by the public, thereby hindering the regulatory procedures undertaken by EU institutions and Member States concerning emerging technologies. A democratic exchange of ideas ensures that the collective voice shapes regulations, addressing concerns and goals for the EU’s technological landscape. Now, more than ever, there is a pressing need to align expectations between the tech industry and the remaining stakeholders, and such alignment cannot be achieved without higher levels of transparency and increased participation within the existing EU mechanisms.
Towards an ethical future
The integration of BCIs, VR, and ChatGPT into daily life requires careful consideration and proactive measures. Through a Dworkinian approach that prioritises endowments and justice, we can shape a future where technology serves as a force for societal growth, rather than aligning society’s interests with a utilitarian approach focused only on markets and consumers. The ethical choice of which framework the law should follow in regulating access to and use of a given technology is by default democratic, but the procedures and means for gathering such social feedback are neither efficient nor effective. This “inertia” within the institutions should not and cannot be read as a sign that the market should regulate itself and establish the ethical framework for its own objectives. In a capitalist environment, it is only natural to expect innovation to take a utilitarian approach, since incentives and rewards are mostly tied to profit. It is up to the other stakeholders, institutional or not, to conceive an ethical framework for the market. By adopting and defending an ethical framework that ensures that access to and use of these technologies rests on an egalitarian objective rather than on profit, the EU can be at the forefront of a movement towards a more inclusive and ethically grounded technological landscape.
[1] Andy Clark and David J. Chalmers, “The Extended Mind”, Analysis 58(1) (1998): 7-19. Doi: 10.1093/analys/58.1.7.
[2] M. Carmela Epright, “Coercing future freedom: consent and capacities for autonomous choice”, The Journal of Law, Medicine & Ethics 38(4) (2010): 800. Doi: 10.1111/j.1748-720X.2010.00533.x.
[3] “(…) claiming that artifacts can be parts of an agent’s cognitive system presupposes an account of how such external elements can be properly integrated into our cognitive loops”, in Spyridon Orestis Palermos, “Knowledge and Cognitive Integration”, Synthese 191(8) (2014): 1931–1951.
[4] Serena Olsaretti (ed.), The Oxford Handbook of Distributive Justice (Oxford: Oxford University Press, 2018). Doi: https://doi.org/10.1093/oxfordhb/9780199645121.001.0001.
[5] Kwan Wei Kevin Tan, “Elon Musk’s Neuralink is now recruiting people with serious disabilities like ALS to test if his brain chips are safe”, Business Insider, September 20, 2023, https://www.businessinsider.com/elon-musk-neuralink-recruiting-people-disabilities-test-brain-chips-2023-9.
Picture credits: by fauxels on Pexels.com.

