Aye, robot

But don’t forget the human approach.

Leveraging artificial intelligence for efficiency is smart business practice – automating the mundane lets people concentrate on the complex.

And in the immigration context in particular, it should never be forgotten who’s – or what’s – good at what.

The CBA’s Immigration Law Section says in a letter to the Immigration Minister that discretion could be lost if technological tools are allowed to become proxy decision-makers. AI and machine learning should inform a human’s decision-making process, not fetter it.

“We fear that automated decision-making tools could rely on discriminatory and stereotypical markers, such as appearance, religion or travel patterns, as flawed proxies for more relevant and personalized data, thus entrenching bias into a seemingly neutral tool,” the Section writes.

“Automated decision-makers may not grasp the nuanced and complex nature of many refugee and immigration applications, potentially leading to serious breaches of human rights such as privacy, due process, and equality.”

The federal government has been using AI and machine learning since 2014 to perform activities traditionally carried out by immigration officials, and to support the evaluation of some immigrant and visitor applications.

A government directive on automated decision-making, which took effect in April, aims to ensure that AI is used in a way that reduces risks and leads to more “efficient, accurate, consistent, and interpretable decisions.” The government has also created an Algorithmic Impact Assessment tool to help federal departments understand the risks associated with automated decision-making.

“While we view the directive and AIA as positive developments, we have concerns about automated decision-making in the immigration context,” the Section adds.

The Section says both the directive and the AIA should include a more detailed definition of “unintended bias” and focus on addressing biases rooted in protected characteristics such as gender, religion, race and country of origin.

The Section urges Immigration, Refugees and Citizenship Canada to build “thoughtful and specific” selection criteria into its technological tools, and to update them to keep pace with legislation, ministerial instructions and social mores. It also suggests these tools be continually monitored for data breaches to protect applicants’ privacy.

The Section also recommends the government establish an independent body “with the power and expertise to oversee and review all uses of automated decision-making by the federal government, including appropriate procurement practices and engagement with the private technology sector.”