Anthropic Rejects Pentagon’s Latest Offer, Deepening AI Safeguards Feud Ahead of Deadline
- Editorial Team

- 13 hours ago

Anthropic PBC has once again rebuffed a revised offer from the Pentagon aimed at resolving a high-stakes dispute over the conditions under which U.S. military forces can use its artificial intelligence software — a confrontation that has drawn national attention and jeopardized the company’s defense contracts. The ongoing standoff pits the AI startup’s ethical commitments against Washington’s uncompromising demand for unrestricted access to advanced technology in defense applications.
In a statement released Thursday, Anthropic reiterated that new language proposed by the Department of Defense did not meet its requirements for key safeguards governing the deployment of its AI tools. These include binding provisions that would explicitly prohibit the use of Anthropic’s models for mass domestic surveillance within the United States and for fully autonomous weapons systems that can engage targets without human oversight.
“These threats do not change our position: we cannot in good conscience accede to their request,” said Chief Executive Officer Dario Amodei, underscoring the company’s longstanding commitment to building AI systems that adhere to ethical boundaries and human-in-the-loop decision-making.
The Crux of the Dispute
The disagreement stems from Pentagon demands that Anthropic agree to let its AI technology be used for all lawful purposes by the U.S. military. That phrase, as defense officials have framed it, would effectively strip away any company-imposed limitations on how models like Claude — Anthropic’s advanced generative AI system — are employed in classified operations. The Pentagon asserts that as the end user, it alone should determine lawful use cases, including military and intelligence functions.
Anthropic, backed by major tech investors including Amazon and Google, has so far been unique among frontier AI firms in resisting this requirement. The company points to its ethical posture, first articulated in its “Responsible Scaling Policy,” as central to its identity and mission — and says that compromising on these safeguards could undermine democratic values and public trust.
A Pentagon spokesman declined to comment directly on Anthropic’s latest statement, but officials have publicly denied any intention to use AI for illegal surveillance or fully autonomous weapons systems. Instead, they maintain that the broad language sought is necessary to ensure operational flexibility across a wide range of classified missions.
Deadline and Escalating Pressure
The feud has taken on new urgency as a government deadline looms: Defense Secretary Pete Hegseth has reportedly given Anthropic until Friday at 5:01 p.m. Eastern to agree to the department’s terms or face punitive measures. These could include termination of the company’s existing defense contracts — reportedly worth up to $200 million — and designation as a “supply chain risk,” a label that would severely restrict Anthropic’s ability to do business with other defense contractors.
The “supply chain risk” designation is typically reserved for foreign adversarial technology vendors, making its possible application to an American AI firm unprecedented. Such a classification would likely force DoD partners and subcontractors to certify they do not rely on Anthropic’s AI systems, effectively isolating the company from future military work.
The Pentagon has also hinted at invoking the Defense Production Act, a Cold War-era authority that allows the federal government to compel private companies to act in support of national defense priorities during emergencies. Government officials argue that AI technologies such as Claude are critical national assets, and that the DoD needs unrestricted access to ensure readiness and technological superiority.
Industry and Policy Implications
Anthropic’s stance has drawn both support and criticism from different corners of the tech and policy communities. Advocates for AI safety and civil liberties praise the company’s insistence on preserving guardrails around powerful technology. They argue that completely open use of large language models could make it easier for governments — even democratic ones — to deploy AI in ways that threaten personal privacy or enable autonomous engagement decisions without adequate oversight.
At the same time, defense officials and some U.S. lawmakers have framed the Pentagon’s position as a matter of national security, emphasizing that battlefield requirements and rapid deployment scenarios demand broad flexibility in how AI systems are used. They worry that restrictions could limit effectiveness and interoperability across classified systems.
The broader tech sector is watching closely. Other major AI providers, including those whose models are now entering classified use, have largely acquiesced to defense demands for broader access. This leaves Anthropic’s refusal as a high-profile outlier and raises questions about whether ethical red lines can be maintained amid strategic competition and government pressure.
Future Outlook
With negotiations still technically ongoing, both Anthropic and the Pentagon appear to be engaged in a delicate game of brinkmanship. Anthropic’s leadership insists it does not want to sever ties with the government and remains committed to supporting U.S. national security priorities — provided its ethical conditions are respected. Meanwhile, the Pentagon has made clear it will not allow a private firm to dictate terms for use of technology developed with significant national and commercial interest.
The coming days could determine whether Anthropic stands its ground — potentially at great financial and strategic cost — or bends to government demands that would reshape how AI technologies are adopted across defense sectors. Either way, the dispute highlights the escalating tension between corporate responsibility, ethical constraints, and national security imperatives in the age of artificial intelligence.