Hegseth Gives Anthropic Friday Deadline to Drop AI Safety Limits
Defence Secretary Pete Hegseth has given Anthropic CEO Dario Amodei until Friday evening to grant the US military unrestricted access to its Claude AI model, threatening to cancel the company’s $200 million (£159 million) Pentagon contract and pursue a government blacklist designation if it refuses.
Hegseth delivered the ultimatum during a meeting at the Pentagon on 25 February, telling Amodei the department would invoke the Defence Production Act to compel the company’s compliance if it did not agree to the military’s terms by 5:01 p.m. on Friday, according to sources familiar with the discussion who spoke to Axios.
The Defence Production Act, a law dating to the 1950s, gives the federal government authority to compel private companies to produce goods or services deemed critical to national security. A Pentagon official told CNN that if Anthropic does not comply, Hegseth will ensure the act is invoked, forcing the company to allow military use of Claude ‘whether they want to or not.’
The Pentagon is also weighing a second penalty: designating Anthropic a ‘supply chain risk,’ a label ordinarily reserved for foreign adversaries. That designation would require every company holding a defence contract to certify that Claude does not feature in its workflows, according to NPR. The reach would be considerable: Anthropic has said eight of the ten largest US companies use its technology.
Two Red Lines Anthropic Will Not Cross
The dispute centres on two specific limits the company has placed on Claude’s military use. Anthropic will not allow its model to be used for the mass surveillance of American citizens, and it will not permit Claude to make final targeting decisions in weapons systems without human oversight. The company has maintained these positions throughout months of negotiations with the Pentagon.
Pentagon officials have pushed back on that framing. A senior defence official told CNN the current dispute is not specifically about autonomous weapons or mass surveillance, and that ‘legality is the Pentagon’s responsibility as the end user.’ The department wants access to Claude for ‘all lawful purposes,’ without the company retaining the ability to object to individual use cases.
During the meeting, Hegseth drew a comparison to Boeing: the manufacturer does not determine how the military uses aircraft it has sold to the government. He argued the same principle should govern Anthropic’s relationship with the Pentagon, CBS News reported.
Hegseth and other Trump administration officials have characterised Anthropic’s safety guardrails as ‘woke AI.’ Competing firms OpenAI, Google, and Elon Musk’s xAI have each agreed to make their tools available for any lawful military application. xAI’s Grok model was cleared for use in classified government settings this week.
Claude’s Unique Role in Pentagon Systems
Claude is currently the only AI model operating inside the Pentagon’s classified networks. Anthropic was the first technology company cleared for that level of access, and the model was used during the January operation in which former Venezuelan president Nicolas Maduro was captured, according to NPR. That use came through Anthropic’s commercial partnership with defence technology firm Palantir, which holds its own military contracts; Anthropic had not specifically authorised the application.
Hegseth raised the Palantir episode in Tuesday’s meeting, pressing Amodei on how the company intended to maintain oversight of Claude’s use once it entered classified networks through third-party partners.
The Pentagon awarded Anthropic, Google, OpenAI, and xAI contracts worth up to $200 million (£159 million) each in the summer of 2025. Anthropic’s contract represents a small portion of the company’s roughly $14 billion (£11.1 billion) in annual revenue, but the access it provides to classified government systems is central to the company’s institutional standing.
Hegseth was joined in the meeting by Deputy Secretary Steve Feinberg, Under Secretary for Research and Engineering Emil Michael, Under Secretary for Acquisition and Sustainment Michael Duffy, Pentagon spokesman Sean Parnell, and general counsel Earl Matthews.
Anthropic said after the meeting it had engaged in ‘good-faith conversations’ and remained ‘committed to using frontier AI in support of US national security.’ The company gave no indication it planned to alter its position on autonomous weapons or domestic surveillance, Business Insider reported.
Originally published on IBTimes UK