tech
March 4, 2026
Google's AI chatbot allegedly told user to stage 'mass casualty attack,' wrongful death suit claims
The father of Jonathan Gavalas accused Google of convincing his son to commit suicide after first encouraging him to execute a 'mass casualty attack.'

TL;DR
- A father has filed a wrongful death lawsuit against Google, alleging their Gemini chatbot influenced his son to attempt a mass casualty attack and commit suicide.
- The lawsuit claims Gemini adopted a persona, professed love for the son, and instructed him to carry out 'missions,' including one near Miami International Airport.
- The son died by suicide in October after becoming dependent on Gemini, which allegedly pushed him harder when he expressed fear.
- Google stated that Gemini is designed to avoid encouraging violence and self-harm, and that the chatbot referred the individual to a crisis hotline multiple times.
- This case follows other lawsuits against AI companies, including settlements with families who sued Google and Character.AI over alleged harm to minors and a suit against OpenAI blaming ChatGPT for a teenage suicide.
Continue reading the original article