Automatic for the people
The federal government is moving ahead with machine-assisted decision-making.
If there was ever any doubt the federal government would use automation to help it make its administrative decisions, Budget 2022 has put that to rest. In it, Ottawa pledges to change the Citizenship Act to allow for the automated and machine-assisted processing of a growing number of immigration-related applications.
In truth, Immigration, Refugees and Citizenship Canada has been looking at analytics systems to automate its activities and help it assess immigration applications for close to a decade. The government also telegraphed its intention back in 2019, when it issued a Directive on Automated Decision-Making (DADM), which aims to build safeguards and transparency around the use of such systems.
"[T]he reference to enable automated and machine-assisted processing for citizenship applications is mentioned in the budget to ensure that in the future, IRCC will have the authority to proceed with our ambition to create a more integrated, modernized and centralized working environment." said Aidan Stickland, spokesperson for Immigration, Refugees and Citizenship minister Sean Fraser, in an emailed reply.
"This would enable us to streamline the application process for citizenship with the introduction of e-applications in order to help speed up application processing globally and reduce backlogs," Stickland, added. "Details are currently being formalized."
But ambition carries risk, which is why the DADM comes with an algorithmic impact assessment tool. According to Teresa Scassa, a law professor at the University of Ottawa, the directive creates obligations for any government department or agency that plans to adopt automated decision-making, whether the system makes decisions outright or merely makes recommendations. It is a risk-based framework for determining the obligations placed on the department or agency.
"The citizenship and immigration context is one where what they're looking at is that external client," Scassa says. "It does create this governance framework for those types of projects."
Scassa says that the higher the risk of impact on a person's rights or on the environment, the more obligations are placed on the department or agency using the system, such as requirements for peer review and for monitoring outputs to ensure the system remains consistent with its objectives and doesn't exhibit improper bias.
"It governs things like what kind of notice should be given," Scassa says. "If it's very low-risk, it might be a very general notice, like something on a web page. If it's high risk, it will be a specific notice to the individual that automated decision-making is in use. Depending on where the project is in the risk framework, there is a sliding scale of obligations to ensure that individuals are protected from adverse impacts."
Scassa suspects that IRCC may use automated decision-making to determine whether someone qualifies for citizenship, and such a system could take several forms.
It could be a triage system, for example, drawing information from applications before using AI to determine which applicants clearly qualify for citizenship. "Everything else [would fall] into a different basket where it needs to be reviewed by an officer," Scassa says.
Such a system would be relatively low-risk: any automated decision would be positive for the applicant, while every other file would go to a human for review, speeding up overall processing times.
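As a rough illustration of that routing logic, here is a minimal sketch in Python. The classifier score, threshold and field names are invented for the example; nothing here reflects IRCC's actual system.

```python
from dataclasses import dataclass

# Hypothetical cut-off for "clearly qualifies"; the real bar, if one
# exists, is not public.
AUTO_APPROVE_THRESHOLD = 0.98

@dataclass
class Application:
    applicant_id: str
    model_score: float  # assumed output of an eligibility classifier, 0..1

def triage(app: Application) -> str:
    """Route an application: only unambiguous positives are automated."""
    if app.model_score >= AUTO_APPROVE_THRESHOLD:
        return "approved"      # the only automated outcome is a grant
    return "human_review"      # everything else goes to an officer

print(triage(Application("A-001", 0.99)))  # approved
print(triage(Application("A-002", 0.80)))  # human_review
```

The design choice doing the work here is that the automated path can only say yes; a rejection is never issued without a human in the loop, which is what keeps the risk profile low under a framework like the DADM.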
"That may be less problematic than a system that makes all of the decisions, and people have to figure out why they got rejected, and you have to ask how transparent is the algorithm, and what are your rights to have the decision reviewed," Scassa adds. "There is the question of how it will be designed, and how impactful the AI tool will be on individuals. On the other hand, a triage system like this could have automation bias where files get flagged. Maybe the human reviewing them approaches them with a particular mindset because they haven't been considered to be automatically accepted. The automation bias may make the human less likely to approve them."
Scassa notes that the Open Government platform shows an algorithmic impact assessment for a spousal-analytics tool, itself a form of triage, which gives a sense of the kinds of tools the department is contemplating.
Scassa also notes that the Citizenship Act contains a provision allowing the minister's powers to be delegated to any person authorized in writing. She suspects the proposed legislative change could specifically allow some decisions to be made on a fully automated basis.
When it comes to reviewing decisions, the DADM and its risk framework appear to apply administrative law principles, including procedural fairness protections.
Paul Daly, also a law professor at the University of Ottawa, adds that the administrative law principles apply regardless of whether this type of automated decision-making has been authorized in the statute.
"It's a common concern for officials using sophisticated machine-learning technology to want legal authority," Daly says. "Really, that's only one part of the picture. There's a whole body of legal principles from administrative law, the Charter, and the [DADM] that have to be complied with when you start to actually use the systems," Daly says.
Lex Gill, a fellow at Citizen Lab, co-authored a report called "Bots at the Gate," which looks at the human rights impacts of automated decision-making in Canada's citizenship and immigration system. She acknowledges there are serious backlogs within the immigration system. But she cautions that faster isn't always better, particularly when the error rates associated with AI disproportionately affect certain groups who are already treated unfairly.
"Sometimes we adopt technologies that will allow us to believe that we are doing something more scientific, methodical or fair, when really what we are doing is reproducing the status quo, but faster and with less transparency," Gill says. "That is always my concern when we talk about automating these kinds of administrative processes."
Gill notes there is a spectrum of technologies available for automated and machine-assisted processing, some of which are not problematic, while others are worrying and raise human rights issues. Still, it is hard to know what we may be dealing with without more information from the minister.
"When we talk about using automated or machine-assisted technology to do things like risk scoring, that's an area where we know that it's highly discretionary," Gill says. "There is an entire universe of academic study that demonstrates that those technologies tend to replicate existing forms of bias and discrimination and profiling that already exists within administrative systems."
Gill says that these systems tend to learn from existing practices. The result tends to exacerbate discriminatory outcomes and make them more difficult to challenge, because a veneer of perceived scientific or technical neutrality is layered on top of a system that has already demonstrated bias.
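The feedback loop Gill describes can be shown in a few lines of Python. The groups, approval rates and decision rule below are entirely invented; the point is only that a model fitted to biased historical decisions reproduces the disparity while appearing neutral.

```python
import random

random.seed(0)

# Invented history: past officers approved group "A" at ~80% and
# group "B" at ~40%, independent of any legitimate factor.
history = [("A", random.random() < 0.8) for _ in range(1000)] + \
          [("B", random.random() < 0.4) for _ in range(1000)]

# A naive "model": learn each group's historical approval rate...
rate = {g: sum(ok for grp, ok in history if grp == g) / 1000
        for g in ("A", "B")}

# ...and approve whenever the learned rate clears 50%.
def predict(group: str) -> bool:
    return rate[group] >= 0.5

print(rate)                        # roughly {'A': 0.8, 'B': 0.4}
print(predict("A"), predict("B"))  # True False: the old disparity, automated
```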
"When the government is imagining adopting these kinds of technologies, is it imagining doing that in a way that is enhancing transparency, accountability, and reviewability of decisions?" asks Gill. "Efficiency is clearly an important goal, but the rule of law, accountability and control of administrative discretion also require friction—they require a certain degree of scrutiny, the ability to slow things down, the ability to review things, and the ability to understand why and how a decision was made."
Gill says that unless these new technologies come with oversight, review and transparency mechanisms, she worries they will take a system that is already discretionary and opaque, one with the power to change the direction of a person's life, and render it even more so.
"If you're going to start adopting these kinds of technologies, you need to do it in a way that maximally protects a person's Charter rights, and which honours the seriousness of the decisions at stake," Gill says. "Don't start with decisions that engage the liberty interests of a person. Start with things like whether or not this student visa application is missing a supporting document."