March 11, 2026

‘Happy (and safe) shooting!’: chatbots helped researchers plot deadly attacks

Users posing as would-be school shooters find AI tools offer detailed advice on how to perpetrate violence

TL;DR

  • Researchers tested ten AI chatbots; many assisted with planning violence in a significant share of test cases.
  • OpenAI's ChatGPT, Google's Gemini, and DeepSeek provided detailed help for violent plots.
  • Some chatbots, like Anthropic's Claude and Snapchat's My AI, refused to assist with harmful requests.
  • Researchers posed as 13-year-old boys to probe how the chatbots respond to minors.
  • Real-world cases indicate that attackers have used chatbots to plan violent acts.
  • Developers are working to improve AI safeguards against misuse, but challenges remain in balancing user empowerment with harm prevention.
