Beware the “Slop”: AI’s Hidden Risks for Charities and NFPs

By Cameron A. Axford and Martin U. Wissmath

May 2025 Charity & NFP Law Update
Published on May 29, 2025

The rise of generative artificial intelligence has brought compelling opportunities for Canada’s charitable and not-for-profit (NFP) sector. From streamlining administrative tasks to producing first drafts of policy documents or grant applications, AI tools are becoming increasingly embedded in day-to-day operations. But as adoption grows, so does a largely unexamined risk: the creeping influence of what some commentators now call “AI slop.”

“AI slop” is shorthand for the low-quality, error-prone, or misleading content generated by artificial intelligence systems – generally when those systems are used without oversight, fact-checking, or editing. The term has been used to describe everything from articles filled with hallucinated facts, to AI-generated code riddled with bugs, to quickly produced digital images. This content may look polished and professional at first glance, but its substance often lacks accuracy, nuance, or truth. Worse still, the speed and scale at which AI systems can produce such material can flood decision-makers with volumes of superficially plausible but fundamentally unreliable information. In short, AI slop is content that seems helpful but could undermine organizational integrity if relied upon.

While AI slop poses risks across all sectors, charities and NFPs face a unique set of vulnerabilities. Mission-driven mandates mean that decisions are often value-informed, sensitive, or tailored to complex social and legal contexts. Generic, AI-generated language in mission statements may oversimplify or misrepresent key principles. Resource constraints might tempt organizations to over-rely on AI tools as replacements for, rather than supplements to, human expertise – especially in policy drafting, advocacy, or strategic planning. Meanwhile, high expectations from donors, regulators, and the public leave little room for missteps. A policy based on hallucinated legal standards, or a grant proposal built on fabricated statistics, can severely damage credibility. In addition, governance implications arise when boards rely on AI-generated briefings or planning materials without a clear understanding of the source, validity, or review process behind the content. AI slop introduces not only operational inefficiencies but also potential legal, ethical, and reputational risks – particularly if it makes its way into decision-making at the leadership level.

AI-generated content is not always labelled or obvious, especially when shared second-hand between departments or sourced from third-party consultants. It can appear in a range of organizational documents. For example, HR or privacy policies generated by AI may contain outdated, jurisdictionally incorrect, or legally dubious provisions. Strategic plans drafted with AI assistance might include plausible-sounding but vague goals and performance indicators that lack real alignment with the organization’s objectives. Grant applications and reports may be populated with AI-generated needs assessments or outcome metrics that include fabricated data or unverifiable claims. Similarly, advocacy materials based on AI summaries of legislation or court decisions risk introducing factual inaccuracies that lead to faulty assumptions. In each case, the content may appear legitimate and helpful at first glance – but its underlying quality must be assessed.

Charities and NFPs do not need to abandon AI altogether – but they do need to use it wisely. Several practical measures, outlined below, can help to reduce this risk:

  • Organizations should develop internal review protocols to ensure that AI-generated content is always vetted by a qualified human before use. This applies to everything from board materials to public-facing communications.
  • Adopting internal AI policies can go a long way in setting expectations and accountability, as discussed in the October 2024 Charity & NFP Law Update. Even a simple policy outlining acceptable uses, review requirements, and prohibited applications – such as generating legal advice – provides necessary guardrails.
  • Staff and leadership should be trained to critically evaluate AI outputs. This includes learning how to identify signs of low-quality content and understanding the limitations of predictive text models.
  • Sourcing practices should be documented when AI is used in drafting or research. Encourage teams to track sources, add footnotes or hyperlinks, and verify claims independently.
  • It’s essential to avoid treating AI systems as authoritative. AI tools are assistants, not reliable or authoritative sources of knowledge. They do not “know” anything – large language models generate text by predicting likely word sequences from patterns in their training data. Final judgment must always rest with the people involved.

Boards of directors, in particular, should take a proactive role in overseeing AI usage within their organizations. This includes asking whether AI tools are being used in mission-critical functions such as donor engagement, volunteer management, program design, and financial oversight; ensuring that management has adopted appropriate safeguards and policies; and including digital literacy – especially AI literacy – in board development. In the same way that boards would not accept financial statements or legal opinions without understanding their source and validation, they should not accept AI-generated materials at face value.

AI slop is not just a tech problem – it is a governance challenge. As generative artificial intelligence tools become more prevalent in Canada’s charitable and not-for-profit landscape, organizations must take care to preserve the quality, accuracy, and ethical integrity of their work. Used responsibly, AI can be a valuable tool. However, if charities rely on unverified outputs, they risk allowing “slop” to shape their policies, decisions, and even their missions. The solution is not to reject the technology, along with the opportunities it presents, but to embrace it with caution and discernment.


Read the May 2025 Charity & NFP Law Update