
Robert Junqueira [Executive Coordinator of the Research and Scientific Careers Bureau of the Research Centre for Justice and Governance (JusGov)] [1]
As AI systems are developed and used by a wide range of individuals and organisations – not least military bodies, as recent events in Iraq attest[2] – it can become unclear who is responsible when something goes wrong. At its core, the debate surrounding responsibility for harm caused by a system (biological or otherwise) with a fractured or nonexistent legal personality is not unprecedented. Well before the age of algorithmic governance, legal and moral reasoning laid considerable groundwork for determining liability in circumstances where the link between intent and outcome is obscured by technical artefacts, chains of command, organisational setups, and status-based asymmetries.
In ancient Rome, for instance, legal issues of agency and liability arose frequently, prompting the legal order to evolve and respond with gradually emerging solutions. While not necessarily providing ready-made schemes, such precedents remind us that legal questions of responsibility have arisen before and that remedies have traditionally been found through incremental steps rather than abrupt, one-off changes. The problems faced by our ancient peers, and the ways in which they managed them, offer valuable lessons and useful models for tackling today’s pressing AI regulatory challenges.[3]
Customs decay and inventions flourish, but the vexations of legal inquiry do but slumber to wake again. Now they are stirring anew, propelled into everyday life by algorithms. In response to an emerging concern that transcends national borders, the European Union has positioned itself at the forefront of the regulation of so-called Artificial Intelligence. This pioneering stance is reflected in a regulatory landmark (the AI Act) which, by proposing a regulatory architecture that could serve as a stepping stone worldwide, seeks to harmonise legislative frameworks within the EU and to foster transnational cooperation.
The AI Act seeks to make the landscape of responsibility in the hypercomplex society of our day somewhat more navigable,[4] yet it does so within narrow boundaries, excluding systems that are developed, placed on the market, put into service, or used exclusively for military, defence, or national security purposes (Art. 2; Recital 24). The exclusion is determined dynamically: a system originally intended for civilian use may fall outside the scope of the Act if it is repurposed for wholly military ends, whereas a system originally designed for military use may fall within the scope of the Act if it is repurposed for civilian, law enforcement, humanitarian, or other purposes the Act does not exclude. In cases of mixed purposes, the Act covers the non-excluded one(s).
While what the Act regulates is fairly clear, and the exclusion is consistent with Art. 4(2) of the Treaty on European Union (TEU), uncertainty about who is responsible when something goes wrong persists for several reasons: responsibility lies with a broad network of roles (providers, deployers, importers, and distributors); obligations are triggered by a system’s functional role, risk profile, and use, rather than by intent; and duties may be front-loaded at the outset or extend across a system’s lifespan (transparency, oversight, monitoring, and reporting). Taken together, factors like these give rise to an intricate, hard-to-parse horizon of responsibility. The AI Act itself acknowledges and structures this knottiness by mapping roles across a regulatory network and by tying duties to objective risk categories and use cases.
The AI Act addresses these issues by defining the regulated parties and their roles throughout the network (Arts. 2-3: providers, deployers, importers, distributors); by categorising systems based on objective risk levels and use cases (Art. 6 and Annex III; prohibitions in Art. 5); by stipulating role-specific obligations (Arts. 9-15 as high-risk requirements; Art. 16 as provider duties; Art. 26 for deployers; transparency duties in Arts. 13 and 50; and GPAI provider duties in Art. 53); and by building governance and enforcement instruments to operationalise responsibility (conformity assessment and notified bodies in Arts. 30-43; the EU database for high-risk systems in Art. 71; national authorities and the AI Office in Arts. 64-70; penalties in Art. 99).
Even with this framework in place, uncertainties can remain regarding roles, triggers, and accountability. While the AI Act provides greater legal clarity on roles and duties, the practical allocation of responsibilities can still be markedly unpredictable: it unfolds amid evolving regulatory and technological landscapes, among multiple parties with overlapping obligations, and on the basis of duties that attach to objective categories rather than intent (duties apply because a system falls within a regulated type or use case, not because a person meant to cause harm). The unpredictability is compounded because compliance is assessed through a highly complex, deeply technical, and quickly morphing evidentiary record, potentially involving very large numbers of parties and every stage of the lifecycle of the relevant system(s).
Regulation of AI is sorely needed. Good rules are far more than an exercise of power; they are also a means of holding power to account and fostering trusting relationships. In today’s AI ecosystem, the many different actors involved need clear expectations about responsibilities, from developers and companies that deploy systems to people who use them or are otherwise affected by their self-directed functionality. And because such sophisticated technologies frequently defy geographical and institutional boundaries, a shared baseline for AI governance is all the more critical.
The AI Act is an attempt to preserve relational trust and public accountability while enabling innovation under a framework that remains faithful to the constitutional values of the EU. Framed as a regulation stipulating harmonised rules for AI and championing human-centric and reliable systems, the Act conveys the message that the bloc’s strategy is not only about imposing restrictions, but also about fostering prudent growth. This development is, moreover, consonant with the spirit and desideratum of the Union: to exert global influence through norm-setting rather than coercion, that is, to act as a normative power. Such an aspiration does not entail the assertion of universal validity, but rather the provision of a foundation and an incentive for strategies and solutions tailored to the specific circumstances and needs of diverse configurations within the global socio-political and economic fabric.
Inevitably, this presupposes an EU-specific repertoire of political values, comprising a discourse on democracy, the rule of law, and fundamental rights (as enshrined in Art. 2 TEU). These values are not abstract or purely legalistic notions; they are grounded in moral reasoning and designed to promote overarching societal prosperity. Yet this is not, nor does it need to be, a claim that the EU offers the only, most legitimate, or most desirable way of life, or that other civilisational and ideological narratives are misguided or substandard. On a humbler footing, the EU need only acknowledge that it is well placed to provide an auspicious reference point in times of global uncertainty.
Reaching as wide a consensus as possible globally would indeed be advisable, but the fact remains that, regardless of how other powers handle such an opportunity, the EU is duty-bound to properly govern the flow and impact of border-crossing technologies within EU territory. In doing so, it must remain true to its own values. This means, at the very least, setting out clear expectations about what counts as acceptable design and deployment, what kinds of uses are off-limits, and what safeguards must be in place when systems operate in ways that can materially affect people’s lives. It also means making those expectations legible across the whole ecosystem, from developers and providers to deployers, public authorities, and end users, so that the same baseline can guide practice, oversight, and contestation.
Questions of responsibility invariably arise in this context. A community that prioritises values such as dignity, fairness, inclusivity, and due process also needs to define a strategy and take action against any threat or damage to the integrity of those values. In other words, precepts must perforce be transmuted into our shared grammar of obligations, constraints, and redress, instead of just floating above conduct like clinquant finery. If we want those values to matter in real cases, we must be able to spell out unambiguously what concrete safeguards were owed, who owed them, and to whom they were owed. Consider, for example, credit scoring and profiling, i.e., algorithmic assessments that predict creditworthiness or categorise people according to risk. The example makes the point bluntly: without a clear duty to disclose information and a duty-holder to be held accountable, the language of fairness and due process is little more than lipstick on a pig.[5]
We need an adequate grammar for attributing responsibility, one that builds on our baseline of values and takes into account conduct, position, and the risks inherent in certain undertakings. Seen in this light, the approach taken by the EU appears less an improvisation, or a mere reaction to isolated problems, than a response to a structural governance gap. Indeed, the EU’s regulation of AI signals an official response to the growing fragility of traditional systems of behavioural governance and to the unease that emerges when the settled, time-honoured legal compass of principles is confronted with the novel questions raised by the accelerated and pervasive digitalisation through which we are currently passing. This erosion reflects a disturbance to a status quo oriented towards efficiency, wherein prudence was assumed to be intrinsic to the human factor.
As so-called intelligent technologies increasingly distinguish themselves through self-directed behaviour – triggering complex execution processes without synchronous supervision by a conscious personality and producing effects that may be profound and detrimental at the level of fundamental rights – it becomes apparent that we are called upon to promote a thorough moral scaffolding and to equip ourselves with a robust and up-to-date legal order. With a view to exercising critical control over the emergence of AI, it falls to moral science to safeguard the case for keeping responsibility exclusively and inviolably human; it is then incumbent upon legal science to draw upon this foundation and to preserve the stability of a minimum common ethical core.
Legal studies in this field must remain attentive to the productive dialectic between these two spheres. This entails an ever-deeper understanding of the gap between technical compliance and moral capacity, encompassing both what the legal order may seek to enforce and what belongs to the inalienable realm of responsibility. Legal research in this context is thus indispensable, and it ought to remain aware that computational reasoning is devoid of moral intent insofar as it lacks free and autonomous will, leaving no room for transgression or transcendence: a machine cannot falter out of faith, love, honour, or mere fancy. Preserving the sense of difference between the human and the machine will be essential if we are to perpetuate a culture of morally grounded discernment, shielding it from being crushed by the autotelic and heavy machinery of a technocentric and, by extension, dehumanised civilisation.
The urgent need to invest in AI-centred research on EU law is most evident in the judicial context. The AI Act already points in this direction by imposing transparency, record-keeping, and human oversight duties. Nevertheless, translating these duties into judicial practice calls for sustained doctrinal and methodological effort. Given the amorality of AI, it is above all imperative that the EU law scientific community address every scenario in which automated data processing may become the cornerstone of the judicial function, continuously building on the responses we have been devising to the question of whether it is desirable that the final verdict in matters of justice remain a product of human deliberation.
As a matter of caution, it is critical to recognise that judicial decision-making cannot be reduced to patterns – that is, the average quantum – of past behaviours collected and systematised from algorithmically generated and processed datasets. In adjudicating a real-life case, it is a defining prerogative of the judiciary to attempt to foresee the future of the persons to whom its decision is addressed. This is a personal and indivisible future, which need not correspond to the statistically median outcome derived from prior anticipations of the lives of countless other individuals – individuals with their own undeniable specificities, who inevitably differ from the singular addressees of the judicial decision in the case at hand.
On this and other related issues, the path signposted by the EU is one of openness to moderation amid the current feverish enthusiasm for technological progress. Nonetheless, we should not halloo before we are out of the wood. Only if properly explored will this openness make it possible to reap the benefits of technological innovation without compromising the human countenance and the foundational premises that sustain us as a community governed by law. It is now up to R&D units such as the Research Centre for Justice and Governance of the School of Law of the University of Minho, steadfast in their commitment to engaging meaningfully with peers in their own and other fields of study, to undertake the labour-intensive task of developing principled responses for the regulatory space the AI Act leaves exclusively to national law, while leveraging the Union’s openness and preventing it from turning into a principium sine fructu.
[1] More information and contact: https://www.robertjunqueira.com/. This set of reflections arose from dialogues with the Coordinator of the Research Group for Studies on European Union Law of the Research Centre for Justice and Governance of the School of Law of the University of Minho, Professor Pedro Madeira Froufe, with the President of the Ethics Committee of the Polytechnic University of Cávado and Ave, Professor Irene Portela, and with my colleague Beatriz Melo. It is also the result of a kind invitation extended by an eminent scholar of European Union legal studies, Professor Alessandra Silveira.
[2] See Namir Shabibi and Alex Croft, “AI, a dead student, and US airstrikes: how a civilian became caught up in a new age of warfare”, The Independent, March 10, 2026, https://www.independent.co.uk/news/world/middle-east/ai-airstrike-civilian-killed-us-centcom-iraq-anthropic-b2926712.html.
[3] See, e.g., Klaus Heine and Alberto Quintavalla, “Bridging the accountability gap of artificial intelligence: what can be learned from Roman law?” Legal Studies 44 (2024): 65–80, https://doi.org/10.1017/lst.2022.51.
[4] To find out more about the hypercomplex society and why it may be impossible to govern, read Piero Dominici, Beyond black swans: inhabiting indeterminacy (Cham: Springer, 2026), https://doi.org/10.1007/978-3-032-09029-4.
[5] See Alessandra Silveira, “Automated individual decision-making and profiling [on case C-634/21 – SCHUFA (Scoring)],” UNIO – EU Law Journal 8, no. 2 (2023): 74–85, https://doi.org/10.21814/unio.8.2.4842.
Picture credit: by Tara Winstead on pexels.com.
