March 9, 2026

How AI firm Anthropic wound up in the Pentagon’s crosshairs

Standoff with DoD over Claude chatbot reignites debate over how AI will be used in war – and who will be held accountable

TL;DR

  • Anthropic, an AI firm, is in a dispute with the U.S. Department of Defense over the use of its chatbot Claude for surveillance and autonomous weapons.
  • The Pentagon declared Anthropic a supply-chain risk, a move with significant financial implications, after the company rejected a deadline for a deal.
  • The conflict highlights broader ethical concerns about the use of AI in warfare and the accountability of tech companies.
  • Anthropic's stance is seen as rare resistance to government demands within the tech industry, and contrasts with the company's own partnerships with defense contractors.
  • The company, founded on AI safety principles, faces scrutiny over its collaborations with the Pentagon and its decision to drop a founding safety pledge.

Continue reading the original article