Canada Presses OpenAI on AI Safety After Shooter’s ChatGPT Use
- Editorial Team


Canada’s federal government has demanded that OpenAI provide “concrete” safety measures and explain its decision-making after the company opted not to inform law enforcement about a user whose interactions with its AI platform preceded one of the country’s worst mass shootings. Federal officials say the tech giant’s handling of the matter raises serious questions about how AI companies monitor and escalate potentially dangerous behavior — and what responsibilities they bear when warning signs emerge.
At the center of the controversy is 18-year-old Jesse Van Rootselaar of Tumbler Ridge, British Columbia, who has been identified as the lone suspect in a February 10 mass shooting that left at least eight people dead, including schoolchildren and a teaching assistant. The suspect then took her own life.
Reports in recent days revealed that OpenAI had flagged Van Rootselaar’s ChatGPT account in June 2025 after its automated abuse-detection systems flagged conversations involving violent scenarios that suggested misuse of the model. Sources say employees raised alarms internally and discussed whether to share the information with authorities, but the company ultimately determined the content did not meet its threshold for reporting. Instead, OpenAI banned the account months before the shooting but did not alert Canadian law enforcement at the time.
The disclosure that OpenAI knew about troubling behavior but did not escalate it externally has triggered a strong political response in Canada. Artificial Intelligence and Digital Innovation Minister Evan Solomon summoned OpenAI’s senior safety officials to Ottawa, seeking an explanation of the company’s policies and how it decides when to notify police about potential threats. Government officials said the meeting, which took place this week, did not yield “substantial new safety measures,” and that they expect more concrete proposals on future protocols.
“We are not talking about details of the case,” Solomon said, stressing that the focus was on understanding how safety protocols work and ensuring credible warning signs are not buried within internal reviews when public safety could be at risk. He said Ottawa is considering all options, including potential regulatory steps to govern how AI firms operate and protect Canadian users.
Critics have argued that OpenAI’s decision not to report the account even after automatic systems raised red flags reflects broader weaknesses in tech companies’ self-regulatory approaches. British Columbia Premier David Eby said it “looks like” OpenAI had the opportunity to prevent the shootings if it had shared information earlier — a claim that has intensified calls for a consistent national threshold for reporting potentially dangerous activity flagged by AI systems.
“It’s profoundly disturbing for the victims’ families and all British Columbians,” Eby said, urging the federal government to establish clear standards that would require companies to escalate credible threats to public safety. Advocates have also called for legislation that would compel social media and AI firms to share information with authorities when there are indications that a user might be planning violent acts — similar to mandatory reporting duties seen in professions like healthcare and education.
In response to the criticism, OpenAI has emphasized the complexity of setting reporting thresholds for AI use. Company representatives have explained that its internal criteria require signs of an “imminent and credible risk” before law enforcement is involved, and that in Van Rootselaar’s case the company determined that standard was not met in 2025, despite the troubling content. OpenAI said it contacted the Royal Canadian Mounted Police after the shooting and is cooperating with ongoing investigations.
Still, federal officials expressed disappointment that OpenAI did not come to Ottawa with new policies or more definitive commitments. Canada’s government also highlighted that earlier efforts to introduce online safety laws — including requirements for tech platforms to explain how they would mitigate risks to users and protect children — stalled due to political and public pushback. With the Tumbler Ridge incident now in the spotlight, lawmakers are reconsidering how to update or revive such measures.
Legal experts have noted that while tech companies typically design their own safety protocols, there is a growing consensus that voluntary guidelines may be insufficient for issues with tangible real-world consequences like gun violence. Canada’s evaluation of OpenAI’s safety practices could prompt new legislative efforts to require standardized reporting thresholds and clearer pathways for escalating risks identified by AI systems.
The controversy has also reignited debates globally about how artificial intelligence should be governed as these tools become more powerful and integrated into daily life. While OpenAI and other AI developers maintain that they are committed to safeguarding users and preventing misuse, the Tumbler Ridge case demonstrates the challenges of balancing privacy, user autonomy, and public safety — especially when signs of concerning behavior emerge in digital conversations.
As Canada presses OpenAI for a more detailed safety plan, the outcome could influence broader discussions about AI regulation not only domestically but in other jurisdictions grappling with similar questions about the oversight of generative AI and its potential impacts on society.