
Robert Junqueira [Executive Coordinator of the Research and Scientific Careers Bureau of the Research Centre for Justice and Governance (JusGov)] [1]
As AI systems are developed and used by a wide range of individuals and organisations – not least military bodies, as recent events in Iran attest[2] – it can become unclear who is responsible when something goes wrong. At its core, the debate surrounding responsibility for harm caused by a system (biological or otherwise) with a fractured or nonexistent legal personality is not unprecedented. Well before the age of algorithmic governance, legal and moral reasoning laid considerable groundwork for determining liability in circumstances where the link between intent and outcome is obscured by technical artefacts, chains of command, organisational setups, and status-based asymmetries.
In ancient Rome, for instance, legal issues around agency and liability arose frequently, prompting the legal order to evolve and respond with gradually emerging solutions. While they do not necessarily provide us with ready-made schemes, such precedents nonetheless draw our attention to the fact that legal issues involving responsibility have long arisen, and that remedies were found through incremental steps rather than abrupt, one-off changes. This fact, together with the problems faced by our ancient peers and the ways in which they managed them, offers valuable lessons and useful models for tackling today's pressing AI regulatory challenges.[3]
Continue reading “Wait before hallooing: some remarks on the EU’s response to the rise of AI”








