OpenAI Banned Suspect’s ChatGPT Account Months Before Deadly Canada School Shooting

by Suraj Malik

OpenAI banned a ChatGPT account linked to the suspect in the deadly mass shooting in Tumbler Ridge, Canada, more than six months before the attack, after internal safety systems flagged the user for potential violent misuse. The company did not alert law enforcement at the time, however, because it determined there was no specific, imminent threat.

The case is now reigniting debate over how AI companies should handle disturbing but ambiguous user behavior at scale.

What OpenAI Detected and When

According to the company, its automated safety systems, backed by human review, identified concerning activity tied to an account belonging to Jesse Van Rootselaar in June 2025.

OpenAI says it:

  • Detected possible violent misuse signals
  • Banned the account in June 2025
  • Determined the activity did not meet its reporting threshold
  • Did not contact police at that time

The company maintains the user’s behavior did not constitute a “credible or imminent threat of serious harm” under its existing policy framework.

After the February 2026 attack, OpenAI said it proactively contacted Canadian authorities and shared relevant information to support the investigation.

The Tumbler Ridge Mass Shooting

The suspect, 18-year-old Van Rootselaar, is alleged to have carried out one of Canada’s deadliest attacks in decades.

Authorities say the incident involved:

  • Eight people killed
  • Twenty-seven injured
  • Attack location: Tumbler Ridge Secondary School area
  • Suspect later died from a self-inflicted gunshot wound

The dead included the suspect’s mother and stepbrother, who were killed at a nearby residence.

Police have stated the suspect was assigned male at birth but identified as female. Officials say the motive remains under investigation.

The scale of the violence has drawn comparisons to the 1989 Montreal massacre, one of Canada’s worst mass shootings.

Internal Debate Inside OpenAI

Reporting by major outlets indicates there was internal discussion at OpenAI about whether the user’s activity warranted escalation.

According to those reports:

  • Some staff believed the prompts were concerning enough to notify authorities
  • Leadership chose not to report the case
  • The decision followed company policy on imminent-threat thresholds

OpenAI’s current framework aims to avoid reporting user data to authorities unless there is a clear and immediate risk of real-world harm.

The company says it is now reviewing the case and continually reassesses its referral criteria with outside experts.

OpenAI’s Policy Position

OpenAI told the BBC that ChatGPT is trained to:

  • Refuse assistance with illegal activity
  • Discourage real-world harm
  • Escalate only when threats are specific and imminent

The company operates at massive scale, handling hundreds of millions of chats daily, which makes false positives and privacy concerns significant operational factors.

Why This Case Matters

The incident highlights a growing tension in the AI era: how to balance user privacy with preventive safety.

Key questions emerging from the case include:

  1. When does troubling behavior become reportable risk?
  2. Should AI firms adopt lower reporting thresholds?
  3. How can companies avoid both under-reporting and over-surveillance?
  4. What role should human review play at massive scale?

As AI systems become more widely used, these judgment calls are likely to become more frequent and more scrutinized.

Bottom Line

OpenAI’s early ban shows its safety systems did flag concerning behavior months before the Tumbler Ridge attack. But the company’s decision not to alert authorities underscores the difficult line AI firms must walk between privacy protection and preventive intervention.

With regulators, lawmakers and the public now paying closer attention, this case may become a defining moment in how the industry sets future threat-reporting standards.