
Bruno Saraiva [master’s student in European Union Law and Digital Citizenship & Technological Sustainability (CitDig) scholarship holder]
Why the EU’s approach to AI development differs from that of the U.S. or China is a question that spans philosophy, sociology, geopolitics, and economics. But the simplest answer may be just that: they are different. Each polity carries distinct priorities, institutions, and constraints – and these differences translate into divergent AI trajectories.
In Europe, this divergence goes beyond regulation and economics; it extends to the very technical models being developed. While the U.S. and China pursue scale through ever-larger general-purpose systems, the EU has signaled a regulatory preference for limited models – special-purpose systems trained with curated data.
This post explores the methodological virtues of that approach. In a world where large models struggle with trust, reliability, and compliance with rights-based law, the EU’s strategy offers an alternative: models designed to minimise hallucinations, resist “model collapse”, and reduce opacity. By embedding rigor into training practices, the EU may not only advance trustworthy AI but also begin addressing its competitiveness woes, as underscored by the Draghi Report.[1]