March 7, 2026

Anthropic’s Ethical Stand Could Be Paying Off

The AI company gave up a $200 million contract—and might be getting something more valuable in return.

TL;DR

  • Anthropic forfeited a $200 million Department of Defense contract due to disagreements over the ethical use of its AI, specifically concerning surveillance and autonomous weapons without human oversight.
  • Following the contract cancellation, Anthropic's AI model, Claude, became a top-10 free app and later the No. 1 downloaded free app in the U.S., with daily downloads exceeding 1 million.
  • The company has seen a daily surge in new sign-ups since the conflict with the DOD began.
  • OpenAI, Anthropic's rival, experienced a significant spike in ChatGPT uninstalls and negative reviews after announcing its own deal with the Pentagon.
  • Anthropic has gained trust and admiration within the AI industry, with many engineers circulating letters of support and some reportedly threatening to leave their companies if similar ethical lines are not honored.
  • Former Republican Representative Denver Riggleman has praised Anthropic's stance and directed his company to work exclusively with Anthropic, questioning the legal basis of the DOD's 'Supply-Chain Risk' designation.
  • Anthropic CEO Dario Amodei stated the company does not want to sell products that could harm people; OpenAI's Pentagon deal, by contrast, contains 'all lawful purposes' language that critics find insufficient.
  • The author, a former Navy pilot, draws a parallel to military pilots' responsibility for life-and-death decisions, underscoring the importance of human oversight in AI applications.
