
Maria Clara Pina (master’s student in Human Rights at the School of Law of the University of Minho)
I.
Currently, in the so-called era of techno-solutionism,[1] digital technologies, including Artificial Intelligence (AI), have become widely used.[2] We are witnessing the emerging yet rapidly evolving phenomenon of border management and control through new technologies[3] and automated individual decision-making (Article 22 of the General Data Protection Regulation, henceforth “GDPR”),[4] which employ AI and promise faster and more efficient decisions. However, these systems have the potential to harm human rights. Migration is becoming a transaction in which migrants must exchange biometric and biographical data for access to resources or a jurisdiction, and even to be seen as persons[5] with inherent rights and dignity.
At the same time, the number of migrants in the European Union (EU)[6] is growing, making it worthwhile to analyse the impact of these technologies and of their regulation (or lack thereof), given their inevitable and rapid evolution but, above all, the constancy of the migratory phenomenon over time and the vulnerability inherent in migrant status. In this context, complex legal challenges arise, requiring an analysis of the EU regulatory framework on the use of AI in border management, asylum and migration, considering the main gaps within the AI Act[7] and its far-reaching implications for the human rights of migrants.