Canadian Report on Ethical AI Use Found to Contain Over 15 Fake Sources

Image: The Hawk (Wikimedia Commons)
In an ironic twist, an in-depth Canadian government report on artificial intelligence (AI) in education, one that itself recommends ethical AI use, has been found to contain at least 15 plagiarized or fabricated sources. The revelation casts serious doubt on the credibility of AI-generated content and its potential influence on policy.
The Report in Question
The Department of Education of Newfoundland and Labrador released the report, titled “A Vision for the Future: Transforming and Modernizing Education,” on August 28, 2025.
Key points about the report:
- It outlines a 10-year plan to upgrade public schools and post-secondary institutions across the province.
- It contains 110 recommendations focused on modernizing education.
- It was co-chaired by Anne Burke and Karen Goodnough of Memorial University’s Faculty of Education.
- It emphasizes AI in education, with a strong focus on ethics and responsible technology development.
Upon closer inspection, experts discovered that many cited sources in the report do not actually exist.
Discovery of Fabricated Citations
The fictitious references were identified through a meticulous examination of the report’s literature citations. Notable findings include:
- One cited work, a 2008 National Film Board film called “Schoolyard Games”, does not exist, according to the National Film Board itself.
- The same reference appears in a University of Victoria style guide as a sample bibliography entry. Somehow, this fictional example was included in the report as an actual source.
Expert Opinions:
- Aaron Tucker, an assistant professor at Memorial University who studies the history of AI, stated: “If that’s AI, I don’t know, but generating sources is a clear indicator of artificial intelligence.”
- Sarah Martin, Political Science Professor at Memorial University, commented: “Around the references I can’t find, I certainly don’t know what else it would be. It is unsettling that this document, meant to influence education policy, contains specious citations.”
Implications for AI in Policy-Making
The discovery of bogus sources in a report advocating ethical AI raises critical concerns:
- AI hallucination: the phenomenon in which an AI model generates plausible-sounding content with no factual basis, such as citations to works that do not exist.
- Erosion of credibility: relying on unverified AI-generated content can undermine policy documents and public trust in government.
Josh Lepawsky, Professor at Memorial University, noted:
“Errors happen. False citations are something entirely different where you’re kind of demolishing the credibility of the material.”
This incident also underscores the need for monitoring and accountability when deploying AI in academic or policy research:
- AI can aid in information gathering and drafting.
- Human experts must verify content for accuracy and reliability.
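The verification step above can be partly automated. As a purely hypothetical illustration (not a tool used by the department or mentioned in the report), the following Python sketch extracts DOIs from free-text reference entries so a human reviewer can resolve each identifier and confirm the cited work exists. Entries with no DOI, like the fabricated film citation, still have to be checked by hand.

```python
import re

# Hypothetical helper: find DOI-style identifiers (10.NNNN/suffix)
# inside free-text bibliography entries.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:\w]+")

def extract_dois(references):
    """Return all DOIs found in a list of citation strings."""
    found = []
    for ref in references:
        found.extend(DOI_PATTERN.findall(ref))
    return found

refs = [
    "Smith, J. (2020). Learning with machines. J. Ed. Tech. doi:10.1000/xyz123",
    "National Film Board (2008). Schoolyard Games.",  # no DOI: manual check
]
print(extract_dois(refs))  # → ['10.1000/xyz123']
```

A reviewer would then resolve each extracted DOI (for example via doi.org) and flag any entry that fails to resolve; the design choice here is deliberately conservative, since automation only narrows the list of citations that need human attention rather than declaring any of them genuine.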
Response from Authorities
The Department of Education issued a statement:
- It is aware of “a few potential errors in citations”.
- The online report will be updated in the coming days to correct the mistakes.
Co-chairs Anne Burke and Karen Goodnough affirmed:
- The report was authored with expert assistance.
- Any claims suggesting otherwise are inaccurate.
Despite these assurances, the incident has raised questions about transparency and accountability in AI use for policy-making:
- Relying on AI-generated content without checks can spread misinformation.
- It may compromise the integrity of well-intentioned public policy.
Broader Concerns and Future Directions
The Newfoundland and Labrador case is not isolated. Concerns have arisen in other jurisdictions regarding AI in education and policy research. Key recommendations for the future include:
- Regulation and oversight: policymakers must establish guidelines to ensure AI does not propagate misinformation.
- Training and resources: teachers and government staff need skills to critically evaluate AI-generated content.
- Transparency and accountability: AI systems should include mechanisms to detect and correct errors reliably.
Conclusion
While AI holds significant promise for transforming education and policy-making, this incident serves as a cautionary tale:
- Human expertise must remain central to policy development.
- Proper verification is essential to maintain credibility and trust in educational reforms and AI use.
The discovery of fake sources in a report advocating ethical AI highlights the potential risks of unmonitored AI content generation, reinforcing the need for rigorous checks and balances.