CALGARY — Artificial intelligence (AI) is increasingly being weaponized inside Canada’s immigration system, with federal officials warning that fake refugee claims and fabricated legal arguments are slipping into applications.

According to The Globe and Mail, both Immigration, Refugees and Citizenship Canada (IRCC) and the Immigration and Refugee Board of Canada (IRB) have detected applicants using AI to generate false narratives, including references to court rulings that do not exist.

The IRB says the trend is creating new challenges for decision-makers already facing heavy caseloads, as longer submissions padded with AI-generated material add complexity without improving the strength of claims. Officials say some filings now cite fictional case law or misrepresent legal precedents, forcing staff to spend additional time verifying information.

In cases where fraud is confirmed, applicants can face a five-year ban from entering Canada. Multiple agencies — including the Canada Border Services Agency (CBSA), IRCC and the Royal Canadian Mounted Police (RCMP) — are involved in investigating immigration fraud.

Federal officials acknowledge the growing role of artificial intelligence in these schemes but are refusing to release detailed examples, arguing that doing so could help bad actors refine their tactics.

The concerns come as Ottawa faces mounting criticism over broader weaknesses in its immigration enforcement.
A recent report from Karen Hogan, Auditor General of Canada, found the department failed to properly investigate more than 149,000 international students flagged for non-compliance, citing “critical weaknesses” in anti-fraud controls.

Immigration lawyer Max Berger warned that AI could effectively replace so-called “ghost consultants” who have long been accused of fabricating refugee stories for clients. He said claimants attempting to game the system can now generate detailed persecution narratives at no cost, potentially undermining the integrity of the refugee determination process.

While many claims are decided on written submissions alone, Berger noted that oral hearings remain a key safeguard, allowing board members to directly test credibility and probe inconsistencies.

The rise of AI has already prompted action at the judicial level. In 2024, the Federal Court introduced a directive requiring lawyers and litigants to disclose any use of artificial intelligence in their submissions.

At the same time, Ottawa is expanding its own use of AI — not to make decisions, but to detect fraud and streamline operations. Immigration officials say machine-learning tools are being deployed to flag suspicious applications, identify irregular travel patterns and detect manipulated documents, including altered academic records, bank statements and digitally modified photographs.

The IRB is also exploring AI to speed up internal processes, such as drafting summaries and preparing case files, while maintaining that final decisions will remain in human hands. The tribunal’s latest planning documents say new tools are being developed to help decision-makers produce clearer, more concise rulings while improving efficiency across the system.

Even so, officials insist the technology will not be used to determine whether an applicant can stay in Canada — a line they say will not be crossed despite the rapid expansion of AI on both sides of the immigration process.