March 3, 2026
The Pentagon vs. Anthropic

TL;DR
- U.S. Central Command uses Anthropic's AI model, Claude, for military operations, planning, and target identification.
- The Trump administration has banned the use of Anthropic's AI tools across government agencies, citing national security threats.
- A $200 million contract between the Department of Defense and Anthropic was terminated due to a dispute over AI usage in autonomous weapons and surveillance.
- Anthropic refused to allow its AI models to be used for autonomous weapons deployment or mass surveillance of U.S. citizens, citing ethical principles and reliability concerns.
- The Pentagon claims exclusive authority to decide AI tool usage and argues existing laws address Anthropic's concerns.
- OpenAI secured a similar contract with the DOD, emphasizing technical safeguards and human oversight, and later amended the deal to explicitly ban domestic surveillance.
- The designation of Anthropic as a supply-chain risk is disputed, with legal experts suggesting the government may be overstating its authority.