Pete Hegseth Threatens Anthropic Over AI Safeguards in High-Stakes Military Dispute
The US Department of Defense has issued a firm deadline to AI company Anthropic, warning it could lose government contracts if it refuses to loosen restrictions on how its artificial intelligence models are used in military operations. According to officials familiar with the meeting, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that the company must allow broader Pentagon access to its AI systems by Friday evening or face serious consequences.
AI Safeguards vs. Military Flexibility
At the heart of the dispute is Anthropic’s commitment to AI safety and ethical guardrails. The company, known for its chatbot Claude, has publicly positioned itself as a safety-first AI developer. It has drawn clear red lines around certain uses of its technology, particularly:
– Fully autonomous military targeting without human oversight
– Mass domestic surveillance of US citizens
Sources described the Pentagon meeting as cordial but firm. Dario Amodei reportedly reiterated Anthropic’s position that its models should not be used in autonomous lethal operations or broad surveillance programs.
Defense officials, however, argue that military AI systems must be available for “all lawful use cases.” They maintain that responsibility for lawful deployment lies with the Department of Defense — not the technology provider.
Threat of the Defense Production Act
Pentagon officials have suggested invoking the Defense Production Act if Anthropic refuses to comply. That move could compel the company to provide its AI tools for national security purposes. Officials also warned that Anthropic could be labeled a “supply chain risk,” effectively sidelining it from defense procurement.
Anthropic was among four AI firms awarded contracts worth up to $200 million last summer, alongside OpenAI, Google, and xAI. The Pentagon is integrating these companies into GenAI.mil, its secure AI network for internal military use.
While OpenAI and xAI have already agreed to broader terms for military deployment, Anthropic remains the only major contractor to maintain strict usage conditions.
National Security and the Maduro Operation
Anthropic’s technology has reportedly been used in sensitive operations before. Sources indicate its Claude model was deployed through a partnership with Palantir during the January operation that led to the capture of former Venezuelan President Nicolás Maduro.
The episode has fueled questions about how AI tools are being deployed in classified military contexts — and whether corporate safeguards can realistically constrain their use in national security applications.
Growing AI Regulation Debate
The clash reflects broader tensions between Silicon Valley’s AI ethics commitments and Washington’s defense priorities. Anthropic has previously aligned itself with regulatory efforts and transparency measures, even hiring bipartisan figures, including Chris Liddell from Donald Trump’s first administration, to its board.
Observers say the standoff underscores a deeper trust gap between tech companies and the Pentagon.
As the deadline approaches, the dispute could reshape how AI companies structure government contracts—and determine whether safety-minded AI firms can maintain ethical limits as national security demands intensify.