OpenAI claims it has accomplished what Anthropic couldn’t: securing a Pentagon contract that won’t cross professed red lines against dragnet domestic spying and the use of artificial intelligence to order lethal military strikes. Just don’t expect any proof.

Sam Altman, OpenAI’s CEO, announced the company’s big win with the Defense Department in a post on X on February 27.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. The Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The deal came after the very public implosion of what was to be a similar contract between the U.S. military and Anthropic, one of OpenAI’s chief rivals. Anthropic had said negotiations collapsed because it could not enshrine prohibitions against killer robots and domestic spying in its contract. The company’s insistence on these two points earned it the wrath of the Pentagon and President Donald Trump, who ordered the government to phase out use of Anthropic’s tools within six months.

But if the government booted Anthropic for refusing mass surveillance and autonomous weapons, how could OpenAI take over the contract without having the same problem?

OpenAI has attempted to square this circle through a string of posts to X by company executives and researchers, including Katrina Mulligan, its national security chief, and a claim by Altman that the company negotiated stricter protections around domestic surveillance.

The company and the government, however, are not releasing the only proof that matters: the contract itself.

The Department of Defense did not respond to a request for comment.

OpenAI and company personnel contacted by The Intercept did not respond when asked for specific contract language. Company spokesperson Kate Waters did not respond to questions, sending The Intercept only links to prior public statements from Altman.

(In 2024, The Intercept sued OpenAI in federal court over the company’s use of copyrighted articles to train its chatbot ChatGPT. The case is ongoing.)

So far, OpenAI has released only snippets of the deal’s language loaded with PR-speak and national security jargon. Without being able to verify the company’s claims, Altman’s pitch to the world comes down to one premise: Trust me — along with Trump and Defense Secretary Pete Hegseth — to do the right thing.

Following widespread criticism of these vague assurances, Altman said earlier this week that the firm had quickly negotiated stricter terms into its contract with the Pentagon. These additions, Altman said, include language the company claims will bar domestic spying and collaboration with the National Security Agency.

But the company’s muddled messaging throughout the week only raised more questions about OpenAI’s willingness to do the federal government’s bidding.

“We have been working with the DoW to make some additions in our agreement to make our principles very clear,” Altman posted on Monday, using Trump’s preferred name for the Department of Defense.

“The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA),” Altman continued. “Any services to those agencies would require a follow-on modification to our contract.”

Since OpenAI has not released the contract, it’s unclear if the Pentagon’s affirmation is actually reflected in binding contract language.

Mulligan at first responded to criticism of the company’s deal with a pledge to release a “clear and more comprehensive explanation” of the relevant terms of the contract. On Tuesday, having failed to deliver such an explanation, she told one concerned X user, “I do not agree that I’m obligated to share contract language with you.”

She added, “For the record, I would want to work with NSA if the right safeguards were in place,” but did not specify what these safeguards might be.

Former military officials told The Intercept they had grave concerns about the arrangement based on what’s been made public. “I’m not confident in the language at all. And in some parts I don’t even believe it,” said Brad Carson, who previously served as under secretary of the Army during the Obama administration. Carson noted that blocking Pentagon spy agencies like the NSA or National Geospatial-Intelligence Agency would ostensibly prevent usage of OpenAI’s tools in pressing intelligence analysis contexts, like the ongoing war against Iran. “I don’t believe that provision is in the contract. I say that reluctantly, but I don’t,” Carson added.

A former Pentagon official who worked on military artificial intelligence applications told The Intercept the caveats around “intentional” surveillance are worryingly unclear. “That’s the get out of jail free card right there,” this source, who spoke on the condition of anonymity, said in an interview. “The language gives them enough flexibility to still do whatever the fuck they want, more or less, and then say, whoops, sorry, didn’t mean to.”

“There is nothing OpenAI can do to clarify this except release the contract,” former Department of Justice National Security Division attorney Alan Rozenshtein said. Rozenshtein described OpenAI’s attempt to sell its contract to the public without letting the public read the contract as “not sustainable” and “bizarre.” If OpenAI will restrict its tools from the NSA, with its long-documented history of extra-constitutional dragnet domestic surveillance, this would be memorialized in the contract, not a tweet, he said. But if OpenAI has indeed come to any such agreement with the government, it is asking the world to take it as an article of faith.

“It’s quite possible that OpenAI understands that these red lines are fake, but has written a contract to give them some PR coverage. That would be bad because that feels pretty dishonest,” Rozenshtein added. “Or it’s possible that OpenAI has a different understanding of its own contract than what DOD understands the contract to be. Which is a bad position to be in, and suggests that this contract negotiation has not been done skillfully.”

Potentially undermining OpenAI’s credibility is that some of its public outreach has been simply untrue. Asked by an X user whether the contract would permit the Pentagon “[g]etting and/or analyzing commercially available data at scale,” Mulligan replied, “The Pentagon has no legal authority to do this.” This is false, at least according to the Pentagon. A declassified 2022 report by the Office of the Director of National Intelligence provided an overview of the collection of commercially available data by the government, including the Department of Defense — exactly the activity Mulligan was asked about.

The Pentagon’s domestic surveillance has been further established in news reports. In 2021, Motherboard reported a letter sent from Sen. Ron Wyden to the Department of Defense in which he urged then-Secretary Lloyd Austin “to release to the public information about the Department of Defense’s (DoD) warrantless surveillance of Americans.” A New York Times report on a related investigation by Wyden’s office that same year showed that the Defense Intelligence Agency had spied on Americans’ precise movements and locations without a warrant by simply buying access to their GPS coordinates. In a letter responding to Wyden, the Pentagon said the DIA’s lawyers had blessed the surveillance.

“It is a fact that the Pentagon has both purchased and analyzed vast amounts of Americans’ location, web browsing, and other data, for years,” Wyden wrote in a statement to The Intercept. “I’ve personally revealed several of those programs, with the help of brave whistleblowers. Anyone who claims that isn’t happening simply doesn’t know what they’re talking about.”

OpenAI’s rhetoric fails to reckon with the way the national security state has secured both secrecy and operational latitude by relying on misleading interpretations or the radical ambiguity of words.

For instance, Altman shared on Monday evening a purportedly updated clause stating: “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

The phrase “Consistent with applicable laws” sounds promising until one reflects on the fact that the government claims consistency with applicable laws in every dragnet surveillance program, drone strike, kidnapping, assassination, or invasion. “I’m saying that the programs are legal, obviously,” White House spokesperson Jay Carney told reporters in the early days after whistleblower Edward Snowden revealed the existence of the NSA’s mass surveillance programs. (Ironically, Mulligan was part of this public relations deflection effort during her stint on the Obama National Security Council.)

The word “intentionally” provides a miles-wide wall of plausible deniability that has helped cover for decades of domestic spying. In a March 2013 Senate hearing, Wyden asked then-Director of National Intelligence James Clapper, under oath, “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?” Clapper replied, “No, sir.” When pressed, he added, “Not wittingly.” A few months later, NSA materials disclosed by Snowden would reveal this was entirely false: The agency collected vast quantities of information on Americans as a routine practice.

The Clapper episode revealed the peril of public reliance on commonsense words like “wittingly” or “intentionally” in the context of national security. Offices like the NSA or ODNI are staffed by sharp legal minds, brilliant mathematicians, and accomplished engineers, and funded with billions of dollars. They do little by accident. Altman’s invocation of “intentionally” spying on Americans, like Clapper’s dodge behind the term “wittingly,” reflects what’s known in the intelligence field as “incidental collection”: a euphemism that camouflages the government’s historical assertion that spying on Americans is legal. In this case, incidental doesn’t mean by mistake, but rather secondary; while vacuuming up unfathomably large quantities of data to surveil foreigners, for whatever reasons deemed necessary, the government has asserted its legal right to catch Americans in the process, even if they are not the actual target.

Altman’s other revised assurances come with similar linguistic escape hatches. “For the avoidance of doubt,” he wrote on X, “the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, the word “deliberate” is load-bearing, while crucial terms like “tracking,” “surveillance,” and “monitoring” are left undefined.

“The word surveillance doesn’t even include the kind of activities that people are most concerned about,” said Carson, the former under secretary of the Army. He doubted the Pentagon, for instance, would consider using an OpenAI large language model to build intelligence dossiers on private citizens with data pulled from federal and commercial databases an act of “surveillance.”

“They’re trying to blind you with complicated legal terms that ordinary people think mean something different entirely,” Carson said of OpenAI’s rhetoric. “But the lawyers know what it means. And the lawyers know that this is no guardrail at all.”

One’s ultimate comfort with and confidence in this occluded contract will likely come down to one’s opinion of the integrity of the parties involved. How one of the most secretive institutions in the world will use the technology of a similarly opaque corporation will remain the stuff of trade secrecy and classified records.

Altman and Mulligan say that OpenAI engineers will make sure the Pentagon doesn’t break its commitments: “Our contract offers additional layered safeguards including our safety stack and OpenAI technical experts in the loop,” a company statement says, without explaining what its “safety stack” is or how its “technical experts” could apply oversight to the country’s single largest bureaucracy, comprising a litany of sub-agencies and components that employ over 2 million service members and nearly 800,000 civilian personnel. Indeed, in an employee all-hands meeting held Tuesday, Altman told staff that Hegseth would hold ultimate authority over how the Pentagon makes use of the contract, according to CNBC.

When it comes to honesty and a respect for the law from Altman, Trump, and Hegseth, there is good reason for skepticism.

Altman has been repeatedly accused of false statements by the people he works with. In a 2025 court filing submitted as part of an ongoing lawsuit by Elon Musk against Altman alleging OpenAI betrayed its original nonprofit mission, former OpenAI researcher Todor Markov — who now works at Anthropic — described Altman as a “person of low integrity who had directly lied to employees.” In a memo that surfaced after Altman was briefly ousted as CEO, OpenAI co-founder Ilya Sutskever alleged that Altman had engaged in a “consistent pattern of lying” in the period leading up to the ouster.

Nor is it always easy to pin down Altman’s ideological commitments or ethical boundaries. “Honestly, I’m scared for the lives of all of us,” Altman wrote in an October 2016 tweet. “My #1 fear w/Trump is war.” Ten years later, Altman announced his company would sell services to the Trump administration hours after the administration launched a new war in the Middle East. OpenAI itself was founded to benefit all of humanity, and the company once officially prohibited the use of its technologies for warfare — until it quietly deleted this prohibition from its terms of service.

Hegseth’s tenure might prompt similar wariness. He has overseen the assassination of Iran’s leader, the kidnapping of Venezuela’s head of state, and the killing of more than 150 men either blown apart or left to die in the ocean in boat strikes, all without congressional authorization.

Trump, meanwhile, as part of a broad disregard for legal statutes and the Constitution, has refashioned the Department of Justice into his personal law firm and directed his Department of Homeland Security to brutalize and warrantlessly surveil Americans across the country. Without the text of the contract in sunlight, it is ultimately these three men — and whoever succeeds them in years to come — whom the world is being asked to trust. An appeal to “applicable laws” or the sanctity of contract language is only as meaningful as the people in charge want it to be.

The former Pentagon AI official said that ceding this power to Hegseth is cause for alarm even with the most diligently crafted contract. Will anyone feel able to speak up should someone in the military abuse, or be ordered to abuse, OpenAI’s systems in contravention of the law or the contract? “Is the one-star general going to be able to escalate — ‘Hey, this is a huge fucking national security problem’ — appropriately without the Defense Secretary moving them around?”

“My presumption is always to trust people in what they say,” said Carson, speaking of OpenAI. But following days of what he described as “change, backtracking, a bit of deception, [and] outright deception, I’m afraid I don’t really trust you on this one anymore.”

The former Pentagon official agreed: “If you trust the cabal of Sam Altman, Donald Trump, and Pete Hegseth, there’s nothing I can do for you.”