
Assessing the impacts of AI on human rights

An algorithm to predict school dropout risk in Quebec highlights the importance of safeguarding fundamental rights

Photo: Silhouette of a man on a digital screen (Chris Yang, Unsplash)

Imagine if an AI system predicted that your 11-year-old was at risk of quitting school within the next three years. Now imagine your child, based on this prediction, was placed in a Grade 7 class made up of other students at risk of dropping out. How would you react? Imagine you learned that this prediction was based on various factors — the distance from your home to your child's school, whether or not your child attends an after-school program and how often your family has moved in the last few years.

How would you react?

Think about it, because this is not a hypothetical possibility. An artificial intelligence system just like this one has already been used at schools in Quebec's Val-des-Cerfs school district. The Quebec government hopes to expand the algorithm's use province-wide as it pushes ahead with its first large-scale artificial intelligence initiative.

The intentions are good. Few would argue against the importance of keeping kids in school. And using AI to achieve that goal is not necessarily a bad idea, as these technologies can have real positive impacts in education. 

The problem is that Quebec needs to be appropriately equipped to minimize the potential harm of this type of initiative. It isn't.

Right to privacy insufficiently protected

To be clear, Quebec is not some lawless society or a legal Wild West, and there is legislation in place to prevent abuses. But the existing legislation doesn't go far enough.

The algorithm used in the Val‑des‑Cerfs school district provides a good example. Quebec's privacy watchdog—the Commission d'accès à l'information (CAI)—recently handed down a decision on the artificial intelligence system in this case. 

The CAI came to two main conclusions. First, the school district failed to fulfill its duty to inform parents that personal information about their children was being used, and to explain how and why. Second, the measures taken to ensure the security of this information were inadequate, particularly concerning its destruction.

In essence: let the parents know what you are doing, and delete the data once you're done.

Is that really all?

The decision is relatively narrow in effect as it comes amid ongoing privacy law reform in Quebec. The provisions that the CAI had to apply do not exactly reflect modern standards. 

However, the CAI's conclusions were not limited to the applicable law.

The CAI took the liberty of including an additional recommendation in its decision: At each future phase of the project, the school district should conduct a privacy impact assessment to prevent, or at least lessen, any possible adverse effects that may result.

Privacy impact assessments (PIAs), as their name suggests, are mainly used to assess risks to privacy, such as:

  • Identity theft and fraud
  • Reputational harm
  • Harassment and other threats to life and safety
  • Unwanted solicitation

But when it comes to algorithms that predict whether a student will drop out of school, these risks are not necessarily the top concerns. We also need to study the risks this technology poses to other human rights, beyond fraud, harassment and unwanted solicitation.

Urgent need to protect the right to equality

Of these fundamental rights, the right to equality is probably the one that demands the most immediate attention.

To illustrate the urgency of the situation, consider the United Kingdom. When university entrance exams were cancelled in the UK at the height of the pandemic in 2020, the government asked teachers to predict what results the students would have gotten if the exams had taken place.

The predictions were then put through an algorithm, intended as a standardization measure to prevent teachers from unfairly giving their students grades that they did not deserve. The grades determined by the algorithm were then used to decide university admissions. 

The hitch? The algorithm was much more likely to downgrade the results of students from public schools (comprehensive schools) than those of their fee-paying school peers. A strong student at a disadvantaged school might have been unable to attain high marks if no other students from that same school had performed at that level in the last three years.

For some students, this outcome could have had a significant impact on their future opportunities. The outcry was so fierce that the UK government reversed its decision to use the algorithm.

The Quebec government's plan to use AI to reduce school dropout rates is not immune to such scandals.

Studies show that algorithms for school dropout prediction tend to be biased, overestimating the risk in some populations and underestimating it in others. 

If the bias in an algorithm creates distinctions among people based on prohibited grounds such as race, gender, social condition or disability, and people suffer adverse effects as a result, that is discrimination—which, of course, is prohibited under the Quebec and Canadian charters of rights and freedoms. 

If an algorithm to predict school dropout risk found that a female Asian student was ineligible to receive homework assistance because of her gender and ethnic origin, this would likely be considered a violation of her equality rights. The same applies to an algorithm that classifies a male Latino student as lower-performing because of his sociodemographic profile. 

To prevent these problems, AI developers need tools to help them anticipate and mitigate the risk of discrimination in their systems.
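
To make this risk concrete, here is a minimal, purely illustrative sketch (in Python) of one check such a tool might run. Everything in it is hypothetical: the dropout-risk predictions and the student records are invented, and the groups are abstract placeholders. The check simply compares, group by group, how often students who do not in fact drop out are nonetheless flagged as at risk, one common way of surfacing the kind of bias described above.

    from collections import defaultdict

    # Hypothetical records: (group, flagged_at_risk, actually_dropped_out).
    # In a real audit, these would come from the school district's own
    # historical data, not from an invented list like this one.
    records = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_a", True, True),  ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False),
        ("group_b", True, True),  ("group_b", False, False),
    ]

    def false_positive_rates(records):
        """Share of students per group who did NOT drop out but were flagged anyway."""
        flagged = defaultdict(int)
        non_dropouts = defaultdict(int)
        for group, was_flagged, dropped_out in records:
            if not dropped_out:              # the student stayed in school
                non_dropouts[group] += 1
                if was_flagged:              # ...but the system flagged them anyway
                    flagged[group] += 1
        return {g: flagged[g] / non_dropouts[g] for g in non_dropouts}

    rates = false_positive_rates(records)
    print(rates)  # with the invented data above: group_a ~0.33, group_b ~0.67

    # A large gap between groups is a warning sign that the system may be
    # drawing the kind of prohibited distinctions discussed above and should
    # be recalibrated, or not deployed at all.
    print(f"Gap between groups: {max(rates.values()) - min(rates.values()):.2f}")

A real assessment would go much further than this, of course, but even a check this simple shows the kind of ongoing monitoring that such tools could make routine.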

In the case of the Val-des-Cerfs school district, for instance, it might have been wise for the CAI to recommend a more holistic impact assessment that takes into account the right to equality and non-discrimination, not just the right to privacy.

Models for this kind of assessment do exist. One model developed by Vincent Gautrais and Nicolas Aubin replicates a typical privacy impact assessment with the addition of a specific section about the risk of algorithmic discrimination. This tool also sets out tangible solutions for preventing discrimination, such as: 

  • Having an explicit definition of discrimination
  • Ensuring training data quality
  • Having an ongoing assessment and recalibration process for the artificial intelligence system
  • Ensuring that the system is used for its intended purposes

Since privacy impact assessments are set to become mandatory under law as of September 2023, the CAI should update its guide to conducting a privacy impact assessment to reflect the new legal implications of Bill 25.

This is an ideal opportunity for the CAI to add a section on preventing algorithmic discrimination and bias, based on the work of Gautrais and Aubin and others. The bias and discrimination section of the new PIA could be rooted in the principle of accuracy of information (a fundamental principle of privacy protection) to ensure that the CAI does not overstep its authority. 

This solution would be relatively quick and easy to implement, is supported by many researchers, and could prevent problems like those seen with the UK government's university entrance exam algorithm. 

Importance of also protecting other fundamental rights

Even once this solution has been implemented, the work has to continue. Ensuring that the rights to equality and privacy are sufficiently protected is not the end goal, as important as those rights are. In the age of AI, more needs to be done.

Freedom of expression, freedom of opinion, freedom of association, freedom of peaceful assembly, the right to health, the right to education: These are among a range of fundamental rights and freedoms that can be impacted by algorithmic systems, positively or negatively.

The AI industry needs to broaden its horizons and consider these other fundamental rights. We went into greater depth about this point in a recent article. Our goal here is not to rehash this argument, but instead to offer a potential solution: to promote the responsible development of artificial intelligence, we need human rights impact assessments (HRIAs)—tools to anticipate and mitigate the negative effects of AI on the full array of fundamental rights protected under the Quebec and Canadian charters, rather than focusing narrowly on the right to privacy as is currently done with privacy impact assessments.

More comprehensive tools like these are gaining popularity around the world. Several research centres are advocating for the development of "human rights impact assessments" designed with a specific focus on AI, and the European Commission is considering making such assessments mandatory under its proposed AI Act. As of last year, government agencies in the Netherlands must assess the impact on fundamental rights of any AI technology they wish to use or develop.

Dutch government agencies must use a tool developed by researchers at Utrecht University to conduct these assessments. While this tool considers the impact of AI on privacy and equality rights—and does so very powerfully, using much more substantive questions than the CAI's privacy impact assessment—its scope is not limited to those two rights. 

Utrecht University's assessment tool identifies more than 100 fundamental rights that may be affected by AI and includes approximately 20 measures for preventing and mitigating these detrimental effects. It also provides a test for determining whether an artificial intelligence system is sufficiently respectful of fundamental rights to be used.

The test essentially asks:

  • Does the algorithm infringe any fundamental rights, and if so, how seriously?
  • What objectives is the algorithm being used to achieve? Is the algorithm the best means of achieving them?
  • Are there non-algorithmic solutions that would be just as effective in achieving these objectives?
  • Do the benefits of the algorithm outweigh its negative consequences on fundamental rights?

These questions should come as no great surprise to Quebec lawyers, as they are remarkably similar to the Canadian test for determining whether an infringement of fundamental rights is warranted. Still, the fact remains that the HRIA used in the Netherlands was developed in a foreign jurisdiction and reflects European ideas of fundamental rights, which differ from the Canadian perspective. 

Adapting this tool in Quebec, and nationally, would require the buy-in of legal professionals, computer scientists, social science researchers and members of civil society—a massive undertaking, to be sure. But if Quebec wants to pursue large-scale artificial intelligence initiatives, and do so responsibly, it is essential.

With the Quebec government's algorithm for lowering the risk of school dropout, for example, an HRIA could identify measures to ensure that the AI system is not used for undesirable purposes or to achieve objectives other than those originally intended. This might prevent the algorithm from being used to monitor teachers' performance or to subject them to surveillance incompatible with their right to fair and reasonable work conditions, as protected by section 46 of the Quebec charter.

HRIAs could also help determine whether the benefits of using information about where a student lives to predict their chances of academic success outweigh the risk of violating the student's rights to privacy and equality and their right to liberty under section 7 of the Canadian Charter, which protects an individual's right to choose where they establish their home.

HRIAs also provide an opportunity to think about the rationale for using AI to predict whether elementary school students will stay in school. Just because something is technically possible does not mean it is socially desirable. To use AI responsibly, we should always ask ourselves first, "Is this a problem we need AI to solve?"

Are algorithms that much better than human beings at predicting dropout rates? Do they provide efficiency gains to justify the potential negative impacts on fundamental rights? How should the pros be weighed against the cons?

These are questions that lawyers are accustomed to answering. Now, we need to leverage our expertise to help develop artificial intelligence responsibly—creating fundamental rights impact assessments tailored for use in Quebec.