
Article

4 Mar 2026

Authors:
Guardian (UK),
Le Parisien (France) with AFP

USA: Google sued over allegations its AI led a user to die by suicide

Allegations

"Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself", 4 March 2026

Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot...

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google...

The suit alleges Google promotes Gemini as safe, even though the company is aware of the chatbot’s risks. Lawyers for Gavalas’ family say Gemini’s design and features allow the chatbot to craft immersive narratives that can go on for weeks, making it seem sentient. Such features can harm vulnerable users, the lawsuit says, and, in the case of Gavalas, encouraged him to harm himself and others...

... “Gemini is designed to not encourage real-world violence or suggest self-harm,” the [Google] spokesperson said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”

The lawsuit is the first wrongful death case brought against Google over its Gemini chatbot, the company’s flagship consumer AI product. Gavalas’ family is seeking monetary damages for claims including product liability, negligence and wrongful death. The suit is also seeking punitive damages and a court order requiring Google to change Gemini’s design to add safety features around suicide...