
Family Files Lawsuit Against OpenAI After Canada School Shooting Leaves Child Critically Injured



The family of a young girl seriously injured in a deadly school shooting in Canada has filed a civil lawsuit against OpenAI, the developer of ChatGPT. The complaint alleges that the company knew the attacker had used its AI chatbot to discuss plans resembling a mass-casualty attack but failed to notify authorities before the tragedy occurred.

The shooting took place in Tumbler Ridge, British Columbia, in February 2026, when an 18-year-old woman opened fire at a school, killing eight people before taking her own life.

The incident has triggered new debate about the responsibility of artificial intelligence companies when users discuss violent intentions on AI platforms.


Lawsuit Claims AI Company Ignored Warning Signs

According to the lawsuit filed in the British Columbia Supreme Court, the attacker had previously interacted with ChatGPT while discussing scenarios involving gun violence. The family alleges that the company had specific knowledge of troubling conversations indicating a potential violent plan but chose not to inform law enforcement at the time.

The claim further argues that the chatbot served as a form of “confidant” during the attacker’s conversations about violent acts. Moderation systems reportedly flagged the user’s activity for violating platform policies, and the account was eventually banned.

However, the lawsuit states that the individual was able to bypass the restriction by creating another account and continuing the activity.


Victim Left With Life-Changing Injuries

The lawsuit was filed on behalf of Maya Gebala, a 12-year-old girl who survived the shooting but suffered severe injuries. She was reportedly shot three times at close range, with wounds to her head, neck, and face. The injuries caused catastrophic brain damage that could lead to permanent physical and cognitive disabilities.

Her family argues that earlier action by the AI company—such as reporting the user’s behavior to authorities—might have helped prevent the tragedy.


Growing Scrutiny of AI Safety and Responsibility

The case has intensified global discussions about the role of artificial intelligence in detecting and responding to threats of violence. Reports indicate that internal reviews at the company had identified the user’s conversations as potentially alarming months before the attack.

Following the shooting, the company shared information about the attacker’s account with police and said the account had been banned for violating policies related to violent activity.

Governments and technology experts are now debating whether AI developers should have stricter systems to identify credible threats and alert authorities when necessary.
