
Anthropic weakens its safety pledge in the wake of the Pentagon's pressure campaign


Technology · By Wire Services · February 25, 2026 · 5 min read

Last updated: April 4, 2026, 12:46 PM


The frog's water is beginning to bubble.

The Claude maker's new policy trades hard lines in the sand for flexible gray areas. (Aerps.com / Unsplash)

Two stories about the Claude maker Anthropic broke on Tuesday that, when combined, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to abandon its AI safeguards and give the military unrestrained access to its Claude AI chatbot. The company then chose the same day the Hegseth news broke to drop its centerpiece safety pledge.

On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower its safety guardrails. Until now, the company's core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.

“Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not,” Anthropic wrote. Its updated policy now treats safety as a matter of degree rather than drawing strict red lines.

Anthropic's quotes in an interview with Time sound reasonable enough in a vacuum. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead."

Anthropic CEO Dario Amodei (Photo by David Dee Delgado/Getty Images for The New York Times)

But you could also read those quotes as the latest example of a hot startup's ethics becoming grayer as its valuation rises. (Remember Google's old "Don't be evil" mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.)

In place of Anthropic's previous tripwires, it will implement new "Risk Reports" and "Frontier Safety Roadmaps." These disclosure mechanisms are designed to provide transparency to the public in place of those hard lines in the sand.

Anthropic says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit."

Defense Secretary Pete Hegseth (Photo by AAron Ontiveroz/The Denver Post)

Neither Anthropic's announcement nor the Time exclusive mentions the elephant in the room: the Pentagon's pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon. However, it wouldn't allow its model to be used for the mass surveillance of Americans or weapons that fire without human involvement.

If Anthropic doesn't relent, experts say its best bet would be legal action. But will the Pentagon's proposed penalties be enough to scare a profit-driven startup into compliance? Hegseth's threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate the company as a supply chain risk. That would force other companies working with the Pentagon to certify that Claude isn't included in their workflows.

Claude is the only AI model currently used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now," a defense official told Axios. "The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with Anthropic's partner Palantir.

Time's story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, director of METR, described the changes as both understandable and perhaps an ill omen. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he said. However, he also raised concerns that the more flexible RSP could lead to a "frog-boiling" effect. In other words, once safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned.

Painter said the new RSP shows that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."

