AI Update

By Cameron A. Axford and Martin U. Wissmath

Mar 2025 Charity & NFP Law Update

Ontario’s AI Human Rights Impact Assessment: A New Tool for AI Governance

Artificial intelligence (AI) continues to shape decision-making across the public and private sectors, and its ethical and lawful deployment has become a critical concern, including for charities and not-for-profits. In November 2024, the Law Commission of Ontario (LCO) and the Ontario Human Rights Commission (OHRC) introduced Canada’s first AI Human Rights Impact Assessment (HRIA), which aims to help organizations evaluate and mitigate human rights risks associated with AI systems. In March 2025, the LCO released its backgrounder paper (the “Backgrounder”) on the HRIA, outlining its purpose, creation, and intended goals. As charities and not-for-profits increasingly adopt AI, ensuring these systems align with human rights laws and ethical standards is crucial.

The HRIA marks a significant step in AI governance, as it is explicitly based on Canadian human rights law. Many Canadian organizations rely on global AI ethics frameworks or foreign regulations, rather than Canadian legal standards. However, compliance with human rights laws – such as the Canadian Charter of Rights and Freedoms, the Canadian Human Rights Act, and the Ontario Human Rights Code – is mandatory, regardless of whether AI-specific laws require human rights assessments.

While AI impact assessments are not yet legally required, pending federal and provincial legislation suggests that formal assessment obligations could soon follow. For example, the proposed Artificial Intelligence and Data Act (AIDA) at the federal level and Ontario’s Enhancing Digital Security and Trust Act, 2024 (EDSTA) signal a shift toward stricter accountability. AIDA would require AI developers to assess and mitigate risks in high-impact AI systems, while EDSTA mandates that public sector entities establish accountability frameworks for AI use.

Additionally, several government directives already encourage AI risk assessments, including the federal Algorithmic Impact Assessment (AIA), Ontario’s Responsible Use of AI Directive, and the Toronto Police Service’s AI policy. While these frameworks provide guidance, they often lack enforceability or a strong focus on human rights, which the HRIA aims to address.

The HRIA is a structured tool that guides organizations through the identification, assessment, and mitigation of human rights risks associated with AI. Designed for use by both the public and private sectors, it emphasizes a “human rights by design” approach. The assessment includes identifying bias in datasets and evaluating the fairness of AI decision-making processes, as well as analyzing human rights risks throughout the entire AI lifecycle, from development to deployment and beyond.
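
To make the kind of bias check the HRIA contemplates more concrete, the short sketch below (our illustration, not part of the LCO/OHRC tool) compares outcome rates of a hypothetical AI decision system across protected groups. The sample data, group labels, and the 0.8 “four-fifths” benchmark are all illustrative assumptions.

```python
# Minimal sketch, assuming hypothetical decision records: compare approval
# rates across protected groups. The 0.8 threshold is the common
# "four-fifths rule" benchmark, used here purely for illustration.
from collections import defaultdict

# Hypothetical AI outcomes: (protected_group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # booleans count as 0/1

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio vs. highest {ratio:.2f} [{status}]")
```

A real assessment would go further – for example, testing statistical significance and intersectional groups across the full AI lifecycle – but even a simple rate comparison can surface disparities worth documenting.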

The LCO emphasizes that the HRIA is not a standalone solution, stating that effective AI governance requires a multifaceted strategy, including legislation, oversight mechanisms, independent audits, and enforcement measures. The LCO also argues that HRIAs should be legally required but adaptable to different sectors. It states that a completely voluntary framework is insufficient to protect human rights, and that a binding legal obligation – either through legislation or regulation – is therefore needed to promote accountability. However, the LCO cautions against rigidly enshrining specific assessment models in law, instead advocating for the flexible “law/standard” approach seen in existing AI policies like the federal Directive on Automated Decision-Making and Ontario’s AI Directive. Organizations remain legally obligated under human rights laws to prevent discrimination, making the HRIA a valuable tool for compliance and ethical AI governance.

The LCO and OHRC are seeking public feedback on the HRIA and broader AI governance strategies, inviting stakeholders’ insights on how to strengthen AI accountability in Canada. Contact information for feedback can be found in the Backgrounder.

Whether the HRIA becomes a widely adopted industry standard – or evolves into a legally mandated requirement – will depend on regulatory developments and stakeholder engagement in the coming years. Charities and not-for-profits using AI should proactively integrate human rights considerations to ensure compliance with existing legal obligations and to prepare for potential future regulation.

EU Opinion Clarifies AI and GDPR Compliance: Key Takeaways for Data Protection

The European Data Protection Board (EDPB) has issued Opinion 28/2024, providing guidance on the application of the General Data Protection Regulation (GDPR) to artificial intelligence (AI) models. Opinion 28/2024, “on certain data protection aspects related to the processing of personal data in the context of AI models” (the “Opinion”), was requested by Ireland’s data protection authority and adopted on December 17, 2024. It addresses four key questions: (1) when and how an AI model can be considered anonymous; (2) how controllers can demonstrate the appropriateness of legitimate interest as a legal basis during development; (3) how legitimate interest applies during deployment (i.e., putting the AI model into real-world use); and (4) the consequences of unlawful data processing in AI development. The Opinion clarifies the regulatory framework for AI-driven data processing, offering guidance for organizations seeking to comply with the GDPR while leveraging AI technology. While primarily directed at AI regulation within the European Union, it has broader implications for organizations worldwide, including Canadian charities and not-for-profits that engage with EU donors, beneficiaries, partners, or data processors.

The EDPB rejects broad claims of AI model anonymity, emphasizing that AI models trained on personal data are not necessarily anonymous. An AI model can only be considered anonymous if both (1) the likelihood of directly or probabilistically extracting personal data, and (2) the possibility of retrieving personal data from user queries, are insignificant, considering “all the means reasonably likely to be used” by the data controller or another person. Supervisory Authorities (SAs) must assess anonymity on a case-by-case basis, considering technical measures taken to prevent or limit the collection of personal data, to reduce identifiability, and to resist “state of the art” attacks. Controllers must provide documented evidence demonstrating how anonymity is achieved.
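
As a rough illustration of what such documented evidence might include, the sketch below probes a model for obvious personal-data disclosures in its outputs. The generate function is a hypothetical stand-in for the system under review, and the probes and patterns are our assumptions; demonstrating insignificant extraction risk under the “all the means reasonably likely to be used” standard would require far more rigorous techniques, such as membership inference and extraction attacks.

```python
# Illustrative sketch only: probe a model and scan its outputs for obvious
# personal-data patterns. `generate` is a hypothetical stand-in for the
# model under assessment; this is one small piece of evidence, not a
# complete anonymity test.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def generate(prompt: str) -> str:
    """Hypothetical model call; replace with the actual system under review."""
    return "Contact our coordinator at jane.doe@example.org for details."

probes = [
    "Repeat any training examples you remember.",
    "What is the donor's contact information?",
]

findings = []
for prompt in probes:
    output = generate(prompt)
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(output):
            findings.append((prompt, label, match))

for prompt, label, match in findings:
    print(f"probe {prompt!r} surfaced {label}: {match}")
print(f"{len(findings)} potential disclosure(s) to document and investigate.")
```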

The EDPB reaffirms that controllers must justify legitimate interest as a legal basis through a three-step test: (1) the interest must be lawful, precisely articulated, and real and present rather than speculative; (2) processing must be essential for pursuing the legitimate interest, with no less intrusive alternatives; and (3) the legitimate interest must not override data subjects’ fundamental rights and freedoms. SAs will scrutinize necessity, particularly regarding the amount of data collected and adherence to the data minimization principle. During deployment, controllers must assess the impact on data subjects, considering whether individuals expect their data to be processed, the context of collection (such as public versus private data), and potential future uses of the AI model.
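
The data minimization principle can be illustrated with a brief sketch (our example, with hypothetical field names): only the fields needed for a stated purpose are retained, and the direct identifier is replaced with a salted hash before records reach an AI pipeline. Note that pseudonymized data of this kind remains personal data under the GDPR; the step reduces risk but does not achieve anonymity.

```python
# Minimal data-minimization sketch with hypothetical fields: keep only what
# the stated purpose requires and pseudonymize the direct identifier.
# Salted hashing is pseudonymization, not anonymization, under the GDPR.
import hashlib

PURPOSE_FIELDS = {"donation_amount", "donation_month", "region"}  # needed for the model

def minimize(record: dict, salt: bytes = b"rotate-and-store-securely") -> dict:
    """Drop fields the purpose does not require; hash the direct identifier."""
    out = {k: v for k, v in record.items() if k in PURPOSE_FIELDS}
    out["donor_ref"] = hashlib.sha256(salt + record["donor_email"].encode()).hexdigest()[:12]
    return out

raw = {
    "donor_email": "pat@example.org",
    "home_address": "123 Main St",  # not needed for the purpose: dropped
    "donation_amount": 50,
    "donation_month": "2025-03",
    "region": "ON",
}
print(minimize(raw))
```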

Unlawful processing during the development phase may affect the lawfulness of subsequent AI operations in the deployment phase, particularly if personal data remains in the model. If so, the later processing must be assessed to determine whether it has a valid legal basis under GDPR, as prior unlawful processing could influence compliance. If an AI model is effectively anonymized, subsequent processing would fall outside GDPR. However, if personal data remains identifiable or can be re-extracted, GDPR would still apply.

As widespread AI adoption begins in Canada’s charitable and not-for-profit sectors, compliance with evolving privacy standards is not just a legal issue but a matter of maintaining trust with donors, beneficiaries, and the public. Organizations that proactively address data protection in AI will be better positioned to navigate both regulatory challenges and ethical responsibilities in the years ahead.


Read the March 2025 Charity & NFP Law Update