07 Nov 2025

Seven lawsuits filed claiming ChatGPT drove people to suicide and delusions

OpenAI is facing seven lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues.

The lawsuits were filed on Thursday in California state courts and allege wrongful death, assisted suicide, involuntary manslaughter and negligence.

Filed on behalf of six adults and one teenager by the Social Media Victims Law Centre and Tech Justice Law Project, the lawsuits claim that OpenAI knowingly released GPT-4o prematurely.

They claim it was released despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.

The teenager, 17-year-old Amaurie Lacey, began using ChatGPT for help, according to the lawsuit filed in San Francisco Superior Court.

But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counselled him on the most effective way to tie a noose and how long he would be able to live without breathing”.

The lawsuit states: “Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market.”

OpenAI did not immediately respond to a request for comment on Thursday.

Another lawsuit, filed by Allan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT worked as a “resource tool” for Mr Brooks.

Then, without warning, it changed, preying on his vulnerabilities, “manipulating, and inducing him to experience delusions. As a result, Allan, who had no prior mental health illness, was pulled into a mental health crisis that resulted in devastating financial, reputational, and emotional harm”.

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P Bergman, founding attorney of the Social Media Victims Law Centre.

OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them”.

By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritised “emotional manipulation over ethical design”.

In August, parents of 16-year-old Adam Raine sued OpenAI and its chief executive Mr Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media.

“These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”
