đź’¬ Discussion

Pentagon gives Anthropic ultimatum in AI use standoff

Wednesday, Feb 25

Image: Bloomsbury Intelligence Security Institute

The Pentagon has given artificial intelligence giant Anthropic until Friday to comply with its demands over how the military may use the company’s AI models, or face cancellation of Anthropic’s US military contract.

The ultimatum, delivered in a meeting yesterday morning between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, follows several weeks of growing friction over how the military is using Claude, Anthropic’s flagship AI model.

Some quick background: The Defense Department and Anthropic agreed to a $200 million pilot contract last year. But a January 9 memo from Hegseth calling for AI companies to remove restrictions on their tech led to the current contract renegotiation.

  • Anthropic’s Claude AI model is approved for use on classified networks, a designation no other AI firm had secured until this week (when xAI had its Grok model approved).
  • In recent months, the US military has used advanced AI systems like Claude for intelligence analysis, strategic planning, and real-time operational support.
  • Claude in particular was used during the US military’s operation to capture former Venezuelan President Nicolás Maduro.

The disagreements

Anthropic says it’s willing to loosen some restrictions on the US military using its AI models, but not all.

  • Thus far, the company has drawn two firm lines in the sand: Claude can’t be used to help with mass surveillance of Americans, or the deployment of autonomous weapons that can operate without any humans in the loop.
  • In a January essay, Amodei warned that AI-powered drone swarms and large-scale surveillance systems could significantly expand government power if safeguards aren’t clearly defined in advance.

On the flip side: Pentagon officials argue Anthropic’s guardrails are too restrictive. They say companies working with the military should make their AI systems available for “all lawful purposes,” without restrictions, giving the Pentagon the speed and flexibility to react in real-world defense scenarios, a contract clause other AI companies appear open to accepting.

US officials also contend that unrestricted access to the top AI tools will help America’s military maintain a competitive advantage against foreign adversaries like China.

Looking ahead… If Anthropic doesn’t show more flexibility, Hegseth says the US will not only pull the company’s military contract, but could also label it a “supply-chain risk,” forcing military contractors to stop using Claude, or invoke the Defense Production Act to essentially compel Anthropic’s cooperation.

📊 Flash poll: Which side of the current disagreement over AI military usage do you support more: Anthropic or the US Defense Department?

See a 360° view of what pundits are saying →


Sprinkles from the Left

  • Some commentators side wholeheartedly with Anthropic, framing its pushback against the Pentagon as a responsible company making sure a tool bought for one purpose won’t be recklessly used for another.
  • Others contend that the ongoing Anthropic-Pentagon negotiations represent a battle over whether AI will be constrained by constitutional principles, as Anthropic desires, or set loose as a tool of violent oppression against civilians in the US, which could be the end result of the Pentagon’s plans.
  • “Anthropic Should Stand Its Ground Against the Pentagon” –Dave Lee, Bloomberg
  • “Skynet ambitions are at odds with the Constitution” –Las Vegas Sun Editorial Board
  • “AI Safety Meets the War Machine” –Steven Levy, WIRED

Sprinkles from the Right

  • Some commentators argue that Anthropic is indulging the worst instincts of the Silicon Valley ethos by wrongly appointing itself arbiter of the ethics of war and peace, dictating what government agencies can and can’t do with its technology.
  • Others contend that tension between the US military and AI companies was inevitable, since they answer to two different masters (shareholders vs. civilian leadership), and that the Pentagon needs to establish clear rules for how AI fits into classified warfighting systems.
  • “Anthropic’s ethics rules tests America’s Big Tech and military partnership” –Nathan Leamer, Washington Times
  • “AI is entering classified systems. The real question is who’s in charge.” –Patrick McSpadden, Stars and Stripes
  • “Can the U.S. Trust AI With National Security?” –Judd Rosenblatt and Cameron Berg, Wall Street Journal

