In 2024, the European Parliament and the Council of the EU adopted the Artificial Intelligence Act. Set to become fully applicable this year, in 2026, the legislative framework regulates the use of AI based on the risk level of AI systems, as determined by their purpose and their impact on fundamental rights (REG EU 2024/1689).
Referencing fundamental rights is nothing new for the EU; after all, it has invested considerable effort in promoting itself as a normative power internationally, with an emphasis on democracy and fundamental rights. Likewise, the EU has committed to a human-centred approach to digital transformation and technology, one that respects human rights and protects data and privacy (European Commission 2022).
However, there is always a catch. The European Parliament’s votes of 26 March on AI Act implementation and on migration policy gave us a glimpse of potential conflicts between the EU’s commitment to human rights and certain policy directions.
Before delving into that, let us take a quick look at what the AI Act entails, in order to identify the main issue.
Artificial Intelligence is used to make critical decisions across different fields, and it has become a pivotal part of digital policy. Since its use may pose risks and raise concerns regarding fundamental rights, the EU has developed the AI Act to regulate the deployment of such technology.
With 113 articles, the AI Act establishes transparency obligations for AI systems and sorts them into four risk categories: minimal, limited, high, and unacceptable.
AI applications impact various policies and fields, including law enforcement, migration, asylum, and border control management. In these areas, AI systems are categorised as high-risk (REG EU 2024/1689).
Broadly speaking, AI is used in migration and border control management to conduct border control operations, develop surveillance systems, prevent illegal migration, and support administrative processes (Bircan and Korkmaz 2021). More specifically, six EU member states use AI to manage cases, detect identity fraud, and identify and assess languages. Other member states intend to adopt the technology as well, and future developments include using AI to predict migration flows. Yet, as early as 2020, the EU Agency for Fundamental Rights reported that Artificial Intelligence could undermine fundamental rights (EMN-OECD INFORM 2022).
While the purpose of the AI Act is to govern the use of AI and prevent risks, the Act also introduces exemptions to certain provisions, undermining the EU’s rights-driven regulatory model in digital policy. For instance, Article 49 of the AI Act requires the registration of high-risk AI systems in public EU databases to ensure transparency and, consequently, public scrutiny. But the same provision does not apply to certain AI systems used in law enforcement, migration, asylum, and border control management. Instead, those systems must be registered in a non-public EU database, accessible only to the Commission and national authorities, leaving no possibility of public scrutiny.
Furthermore, the AI Act does not affect Member States’ national security competences, which are explicitly excluded from the provisions (REG (EU) 2024/1689).
These aspects suggest that concerns over AI and fundamental rights could become even more pressing, given that migrants are often portrayed as threats to security. Since national security falls outside the AI Act, could certain border management practices be reclassified as matters of national security, creating a regulatory grey area and opening potential loopholes?
Various NGOs have naturally voiced concerns about how the AI Act may harm the fundamental rights of migrants, highlighting gaps in accountability and fundamental rights assessment, inadequate stakeholder involvement, and dangerous loopholes, such as the exclusion of AI systems used in national security. Moreover, different compliance deadlines apply to AI systems implemented in the EU’s large-scale migration databases. For instance, Eurodac and the Schengen Information System have until 2030 to comply with the AI Act, potentially normalising the use of surveillance technology on marginalised people (Amnesty International 2024, EDRi 2024).
Additionally, despite warnings from NGOs that the amendments would weaken transparency requirements for high-risk AI systems (Amnesty International 2025), on 26 March the European Parliament agreed to the omnibus proposals to revise the AI Act, with 569 votes in favour. The declared intent was to simplify the framework. Concretely, the EP’s vote delays the application of certain AI Act rules to high-risk systems, such as those used in migration and border management, pushing the deadline to December 2027 rather than this year, as originally planned (EP 2026). The EP also approved the return regulation, which allows Member States to build external deportation centres under the pretext of combating irregular migration (Euronews 2026). The next steps, of course, include rounds of negotiations with the Council, although a different outcome seems unlikely given member states’ stance on migration.
The EU has declared its intention to pave the way for a more rights-driven and humane approach to AI and digital policy. However, recent developments indicate that securing borders through high-risk AI systems, potentially at the expense of people on the move, may now be more pressing to the European Union than upholding fundamental rights.
REFERENCES
Amnesty International. “EU Simplification: Throwing Human Rights Under the Omnibus”. 2025. https://www.amnesty.org/en/latest/news/2025/11/eu-simplification-throwing-human-rights-under-the-omnibus/
Amnesty International. “EU’s AI Act Fails to Set Gold Standard for Human Rights”. European Institutions Office, 2024. https://www.amnesty.eu/news/eus-ai-act-fails-to-set-gold-standard-for-human-rights/
European Parliament. “Artificial Intelligence Act: Delayed Application, Ban on Nudifier Apps.” Press release, March 26, 2026. https://www.europarl.europa.eu/news/en/press-room/20260323IPR38829/artificial-intelligence-act-delayed-application-ban-on-nudifier-apps
Bircan, Tuba, and Emre Eren Korkmaz. “Big Data for Whose Sake? Governing Migration Through Artificial Intelligence.” Humanities and Social Sciences Communications 8, no. 1, 2021. https://doi.org/10.1057/s41599-021-00910-x
European Commission. 2022. “European Declaration on Digital Rights and Principles for the Digital Decade.” Communication. COM(2022) 28. Brussels: European Commission.
European Digital Rights (EDRi). “#ProtectNotSurveil: The EU AI Act Fails Migrants and People on the Move – European Digital Rights (EDRi)”. March 13, 2024. https://edri.org/our-work/protect-not-surveil-eu-ai-act-fails-migrants-people-on-the-move/
European Migration Network and OECD. “The Use of Digitalisation and Artificial Intelligence in Migration Management.” February 2022. https://www.oecd.org/content/dam/oecd/en/topics/policy-issues/migration/EMN-OECD-INFORM-FEB-2022-The-use-of-Digitalisation-and-AI-in-Migration-Management.pdf
European Union. 2024. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). O.J. L, 2024/1689.
Genovese, Vincenzo. “EU Parliament Approves Controversial Bill to Increase Migrant Returns.” Euronews, March 26, 2026. https://www.euronews.com/my-europe/2026/03/26/eu-parliament-approves-controversial-bill-to-increase-migrant-returns.