Social order in the age of artificial intelligence: the use of technology in migration governance and decision-making


Apurva Vohra


University of British Columbia

Master of Laws - LLM




Artificial intelligence (AI) and untested technologies such as facial recognition, aerial surveillance drones, biometrics, and automated decision-making tools are increasingly used by states and international organizations to control and regulate migration. The purpose of this research is to understand why the international migration regime overlooks overt manifestations of racial discrimination against migrants while sanctioning the use of the untested technologies to which they are subjected. To study the use of AI and related technologies, this research assesses two case studies in which technological tools are deployed for migration management by the United Nations High Commissioner for Refugees and by Canada. The case studies show that, in the absence of accountability and oversight, these technological solutions violate the privacy rights of their subjects, discriminate against them, and raise concerns about transparency and vulnerability. Moreover, there is a stark power imbalance between migrants on the one hand and the states and international organizations that employ these AI technologies on the other, as migrants are subjected to these technologies without their active consent and without any possibility of recourse. Consequently, migrants have become involuntary experimental subjects of these technologies. The research aims to make three contributions. First, the project maps the terrain of the international migration regime by demonstrating the use of untested technologies on migrants without regard for their rights and in the absence of any oversight. Second, the project articulates a new analytical framework, informed by perspectives from Critical Race Theory, Third World Approaches to International Law, and Critical Technological Studies, to decipher how the foundational inequality inherent in the migration regime is replicated by the use of these technologies. Moreover, migrants' lack of agency and consent in relation to these technological tools magnifies broader dynamics of inequality. Third, the project makes the normative claim that race-literate and race-conscious technological designs (as opposed to race-neutral ones) can train algorithms to avoid replicating racial inequalities by capturing diverse cultural and social contexts without introducing bias into the technologies.

Attribution-NonCommercial-NoDerivatives 4.0 International




Peter A. Allard School of Law