March 3, 2026
Don’t bet that the Pentagon – or Anthropic – is acting in the public interest
The lesson here isn’t that one AI company is more ethical than another. It’s that we must renovate our democratic structures.

TL;DR
- OpenAI replaces Anthropic as a supplier of AI technology to the US Department of Defense after a political dispute.
- Anthropic's CEO, Dario Amodei, insisted on contract provisions barring the use of its models for "mass surveillance" or "fully autonomous weapons," provisions that Pentagon officials derided.
- Donald Trump ordered federal agencies to stop using Anthropic models, clearing the way for OpenAI to secure potential government contracts.
- The article suggests that AI models are becoming commodified, with little performance differentiation among top providers.
- Anthropic is positioning itself as a moral AI provider, while OpenAI's Sam Altman has vowed to uphold safety principles, a stance that could politicize OpenAI's products as well.
- The Pentagon has numerous AI options, including open-weight models, and can demand products that meet its needs, even for applications involving lethal force.
- The Trump administration designated Anthropic a "supply-chain risk to national security," impacting its ability to contract with US entities.
- The author argues that the situation underscores the need for democratic structures to regulate military AI use, rather than relying on corporate ethics.
- Autonomous weapon systems are becoming increasingly prevalent, mirroring historical technological advancements in warfare.