Anthropic Won't Lift AI Safeguards Amid Ongoing Pentagon Dispute: CEO – Decrypt




In short
Dario Amodei says Anthropic will not remove its bans on mass domestic surveillance and fully autonomous weapons.
The Pentagon has threatened contract termination and possible action under the Defense Production Act.
The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro.
Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.

The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential penalties, including cancellation of the company's $200 million contract and possible invocation of the Defense Production Act.

"We cannot in good conscience accede to their request," Amodei wrote, referring to the Pentagon's demand in January that AI contractors permit use of their systems for "any lawful use."

While the Pentagon has since required AI vendors to adopt standard "any lawful use" language in future agreements, Anthropic remained the only frontier AI firm to resist turning over control of its AI to the military.

On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly Friday of this week.

"It is the Department's prerogative to select contractors most aligned with their vision," Amodei continued. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."

In his statement, Amodei framed the company's stance as aligned with U.S. national security goals.

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," he said.

He added that Claude is "widely deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more."

War on AI

The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King's College London study, OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.

During a speech at SpaceX's Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.

That same month, reports surfaced that Claude had been used during a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei rejected claims that Anthropic had questioned any specific military operations.

"Anthropic understands that the Department of War, not private companies, makes military decisions," he said. "We have never raised objections to particular military operations nor tried to limit use of our technology in an ad hoc manner."

Still, Amodei said using these systems for mass domestic surveillance or autonomous weapons is incompatible with democratic values and presents serious risks.

"Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons," he said.
"We will not knowingly provide a product that puts America's warfighters and civilians at risk."

He also addressed the Pentagon's threat to designate Anthropic a "supply chain risk" while also potentially invoking the Defense Production Act.

"These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," he said.

While Amodei has said the company will not comply with the Pentagon's request, Anthropic has at the same time revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems without assured safeguards in place.

Robert Weissman, co-president of Public Citizen, said the Pentagon's posture signals broader pressure on the tech industry.

"The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all of big tech and all companies that we intend to do and take whatever we want and don't get in our way," Weissman told Decrypt.

Weissman described Anthropic's guardrails as "modest" and aimed at preventing "improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without humans' say-so."

"These are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology."

Regarding the Pentagon's threat to designate Anthropic a "supply chain risk," Weissman called it a potentially crushing penalty from the government and argued it could pressure other AI companies to avoid imposing similar limits.

"Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they're looking for enterprise use," he said. "This is a potentially crushing penalty from the government."

While the Pentagon has not yet said whether it plans to follow through on its threat to terminate the contract or invoke the Defense Production Act, Weissman said the agency is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems.

"The message of the Pentagon is, 'we're not going to tolerate this, and we expect to be able to use the technology as it's invented for any purpose we want,'" Weissman said.

The Department of Defense and Anthropic did not immediately respond to Decrypt's requests for comment.