
Anthropic doesn’t trust the Pentagon, and neither should you


Technology | By Lauren Schafer | March 12, 2026 | 14 min read

Last updated: April 1, 2026, 2:38 AM


Today we’re talking about the messy, fast-moving situation at Anthropic, the maker of Claude that now finds itself in a very ugly legal battle with the Pentagon.

The back-and-forth is complicated, but as of a few days ago, the Pentagon had deemed Anthropic a supply chain risk, and Anthropic has filed a lawsuit challenging that designation, saying the government has violated its First and Fifth Amendment rights by “seeking to destroy the economic value created by one of the world’s fastest-growing private companies.” I can tell you right now: We’re going to be talking about the twists and turns of that case on The Verge and here on Decoder in the months to come.

But today I wanted to take a moment and really dig in here on one very important element of this situation that’s not gotten enough attention as this has spiraled out of control: how the United States government does surveillance, the legal authority that allows that surveillance to occur, and why Anthropic was distrustful of the government saying it would follow the law when it comes to using AI to do even more surveillance.

Verge subscribers, don’t forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.

My guest today is Mike Masnick, the founder and CEO of Techdirt, the excellent and long-running tech policy website. Mike has been writing about government overreach, privacy in the digital age, and other related topics for decades now. He’s an expert on how the internet and the surveillance state have grown up in interconnected ways.

You see, there’s what the law says the government can do when it comes to surveilling us, and then there’s what the government wants to do. And most importantly, there’s what the government says the law says it can do, which is often exactly the opposite of what any normal person simply reading the law would think.

You’ll hear Mike explain in great detail here in this episode that we cannot — and should not — take the US government at its word when it comes to surveillance. There’s just too much history of government lawyers twisting the interpretations of simple words like “target” to expand surveillance in complicated ways — ways that usually only cause concern in legal circles, and only bubble up when there are huge controversies like whistleblower Ed Snowden’s major NSA revelations more than a decade ago.

But there’s nothing subtle or sophisticated about policymaking in the Trump era, and so with Anthropic, we’re having a very loud, very public debate about technology and surveillance in real time, on the internet, in blog posts and X rants, and over press conference soundbites. There are positives and negatives to that, but to make sense of it all, you really have to know the history.

That’s what Mike and I set out to explain in this episode — whatever your views on AI and government, this episode will make it clear that both parties have let the surveillance state get bigger and bigger over time. Now, we’re on the cusp of the biggest expansion yet when it comes to AI.

Okay: Techdirt founder and CEO Mike Masnick on Anthropic, the Pentagon, and AI surveillance. Here we go.

This interview has been lightly edited for length and clarity.

Mike Masnick, you’re the founder and CEO of Techdirt. Welcome to Decoder.

I’m excited to have you on. I was just saying I’m shocked that you’ve never been on the show before. You and I have been writing and posting around each other for a long time. A lot of The Verge’s policy coverage owes a debt to what you’ve done at Techdirt, and what’s going on with Anthropic is so complicated, but it hits so many themes that you have covered for so long. I’m glad you’re finally here.

It is a complicated mess of a topic, but I’m excited to be digging in on it.

What I want to focus on with you is not the details of whether Anthropic is going to sign a contract with the government or whether OpenAI is going to get that contract instead. I’m confident that between the time we record this and the time people listen to it, there will have been more tweets, and more things will be different than they were before.

What I want to focus on is just one of the two red lines that Anthropic has laid out. One of them is autonomous weapons, which is its own level of complication. The law there is more nascent, and it’s not even clear whether such weapons exist or have already been deployed by Russia in the war in Ukraine.

There are a lot of ideas there, and I just want to set them aside because I think that issue is going to come into focus on its own schedule. The other red line, the one I do want to spend a lot of time on, is mass surveillance. There’s quite a lot of law about mass surveillance, and a lot of history, much of it controversial. The entire character of Edward Snowden exists because of controversies around mass surveillance.

It all comes down to—I think you are the one who posted this—the National Security Agency (NSA), which is part of the Department of Defense, which we have to call the Department of War now for some reason.

[Laughs] We don’t have to do anything.

[Laughs] We don’t. That’s true here in America. We don’t have to do anything. But the NSA has basically taken a lot of words out of colloquial English and redefined them to mean, “We can just do surveillance.” And then every so often there’s a scandal when people discover that they’re just doing surveillance. So set the stage for us there. I don’t want to rewind you all the way back, but this pattern has repeated itself for quite a long time.

It depends on how deep you want to go, but the short version is that in the post-9/11 world, the US passed the Patriot Act, which gave the government new authority to engage in surveillance that was supposed to be for protecting us against future terrorist threats. Over time, that got interpreted in interesting ways, and there were some limits on it. We also had the FISA court, a special court that is supposed to review the intelligence community and its activities, but that has traditionally been a one-sided court: only one side gets to plead its case, and it’s all done in secret.

There’s a lot of stuff that was not known. And then there was one other piece in all of this, which goes all the way back to Ronald Reagan, which is Executive Order 12333, which is supposedly about setting out the rules of the road for intelligence collection.

So you have these sets of laws, plus an executive order, that to the public, at least the parts you can read, seem to say certain things about what our government, and the NSA in particular, can do in terms of surveillance. Read with a plain English dictionary, the kind you and I have and understand, they would leave you believing that the NSA’s ability to surveil Americans was very limited, to the point that if the agency realizes it is surveilling a US person, it is supposed to immediately stop, cry foul, erase the data, and all of this other stuff.

There were rumors for a while that that was not really happening and there were hints and in particular Senator Ron Wyden was very vocal about going on the floor of the Senate and saying, “Something is not right here and I can’t quite tell you what,” or in hearings he would ask intelligence officials, “Are you or are you not collecting mass data on Americans?”

Those officials would either deflect or, in some cases, outright lie. I believe it was one hearing in 2012 with James Clapper, who was the Director of National Intelligence at the time, where he was asked directly on this point, and he basically said, “No, we don’t collect data on Americans.” That was a big part of what inspired Ed Snowden to leak the reports that he leaked to Glenn Greenwald, Barton Gellman, and Laura Poitras. From all of that, we began to discover that the NSA has its own dictionary, somewhat different from the one you and I use, such that it can interpret words in ways that depart from their plain English meaning, including words like “target,” which feels like a key word. The broad understanding is that, in theory, they’re only supposed to target people who are not US persons, I think is the phrase.

But the way it had been interpreted over time was that anything that mentions that person, anything that is about a foreign person is now fair game, even if that is the communications of a US person. So if you and I were to text each other and mention a foreign person, that is now fair game for the NSA to collect and to keep and to store.

There’s a second part to this. I mentioned Executive Order 12333 from Ronald Reagan, which, as technology changed over time and the internet grew, effectively allowed the NSA to tap into foreign communications, and that included any communications that may have left the US en route somewhere. So if I’m texting you and a message goes from me in California through a fiber optic cable that happens to leave the US, the NSA can put a tap on that cable at the point where it’s outside the US and collect that information, even if the message was just going to you within the US.

The NSA could then keep that information even if it was on US persons, and they could do specific searches on that later, sometimes referred to as “backdoor searches.” They collected this information that we believe they weren’t supposed to collect in the first place, but they could keep it. And they promised, they pinky swore, that they would keep it private, but if they did a search and found that you or I mentioned a foreign person, then suddenly it was fair game for them to do whatever they want with it.

In total, that has turned into a world in which the federal government can basically collect any information that happens to touch outside the US. Even if it is entirely between two US persons, if they mention or even hint at someone who is not a US person, suddenly it is fair game to be collected. And from that we’ve gotten what appears to be a form of mass surveillance of US persons by an NSA that claims and publicly states that it does not spy on US persons.

How did we get to this point? This is a lot of incremental baby steps. You mentioned James Clapper in 2012, that’s the Obama administration. You mentioned Ronald Reagan, that’s the 1980s. We’re going through Democrats and Republicans here.

9/11, the Patriot Act, and the war on terror all happened during the George W. Bush administration. There are a lot of incremental bad things under presidents of both parties, under congresses of both parties. How did this happen?

The simplest form of it is just that nobody, and certainly no president, wants to be in office when there’s a big terrorist attack, because that makes them look bad. Obviously they also want to protect Americans, right? That’s part of their job. And you have an intelligence community that is basically operating in darkness, because that’s what intelligence communities do, and it keeps coming to you and saying, “Hey, if we could just get access to this information, it’d be really helpful in preventing a terrorist attack.”

There may be cases where that’s true, where the intelligence community is able to use this information in a way that works well. But we are also, in theory, a society of laws, with a US Constitution that we’re supposed to obey. Yet administration after administration, again, Republican and Democrat, had lawyers who were very clever and who would look through the statutes and say, “Well, if we position this one way, or state it this way, or interpret it that way, we can get what we want and not technically break the law or not technically violate the Fourth Amendment.”

The assumption was always, “We can sort of bend the law or bend our interpretation of the law and nobody’s really ever going to see this, or nobody who cares is really ever going to see this, and therefore we’ll get away with it.”

There are two things that really jump out at me. One, you and I both read a lot of court decisions — appellate court decisions and Supreme Court decisions. And there’s a fight in our Supreme Court about how to literally interpret the words in our statutes and our laws.

I won’t get too far into it, but I would say generally the idea that you should just read the words on the page and do what they say is the dominant strain of statutory interpretation in the United States. Left or right, they both say it. They argue about some very esoteric fine points of what that actually means. But that you should just be able to read these words and do what they say, that’s not up for grabs, right?

We’ve landed on at least that first pass of what you might call textualism. How do lawyers across administrations of both parties get this far away from the dominant mode of legal decision-making in our country? Justices appointed by both parties agree that it is at least the first step.

I wish I knew the exact answer, but I think it is motivated reasoning, right? As a lawyer, you are there to defend your client and the success — if you can call it success — of our legal system tends to be based on having an adversarial situation where you have different sides arguing over these things, where the role of the adjudicator is to narrow in and figure out which side is actually correct.

One of the problems with the intelligence community and its setup is that you don’t have that adversarial situation. That makes it easier for one side to justify the argument it is making, because nobody is really pushing back on it. You combine that with the overarching fear of another terrorist attack, or of anything related to national security, and even the oversight that does exist falls short. I mean, the FISA court was somewhat famous for effectively being a rubber stamp for many years.

I forget the exact numbers, but it was something like over 99 percent of applications for surveillance authority that went to the FISA court were granted, and it’s easy to say 99 percent is obviously too much. Obviously, those bringing applications to the court are picking and choosing; they’re not, for the most part, bringing totally crazy claims. But without that adversarial aspect, and with a very strongly motivated group of people who think, “We need to do this,” or are being told by an administration, “We need to do this,” they’ll find ways to do it. And that’s where you end up over time.

Lauren Schafer

Technology Reporter

Lauren Schafer reports on artificial intelligence, cybersecurity, and the intersection of technology and society. With a background in software engineering, she brings technical expertise to her coverage of how emerging technologies are reshaping industries and daily life. Her AI reporting has been featured in industry publications.
