OpenAI CEO Sam Altman unveiled a reworked agreement with the Pentagon on Monday night governing the Defense Department’s use of the company’s AI services, one he says provides stronger guarantees that the military won’t use OpenAI’s systems for domestic surveillance.
The new agreement states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” according to a post on OpenAI’s website. OpenAI had faced some backlash as news of an initial agreement between the leading AI company and the Pentagon emerged on Friday. Many observers claimed the original language shared on OpenAI’s website provided ample loopholes for the government to surveil Americans.
The move comes after weeks of intense debates between rival AI company Anthropic and the Pentagon over how the military can use advanced AI systems. While the Defense Department had wanted Anthropic to agree to use its systems for “any lawful purpose,” Anthropic maintained its systems could not be used for domestic surveillance or to control deadly autonomous weapons. Until last week, Anthropic was the only major AI company whose services were actively used on classified networks.
Researchers argue that without guardrails, AI could allow authorities to monitor individuals with unprecedented speed and accuracy, combing through mountains of digital data to track people’s movements and behavior.
“It is critical to protect the civil liberties of Americans,” Altman wrote in a post on X Monday night announcing the new contract language that he said better limits domestic surveillance. “The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA).”
Katrina Mulligan, head of national security partnerships for OpenAI, added in another post on X Tuesday morning that “defense intelligence components are excluded from this contract,” noting that the company would be open to future work with the NSA “if the right safeguards were in place.”
OpenAI did not respond to a request for comment.
Many observers remained unswayed Tuesday, concerned that the snippets of OpenAI’s contract with the Pentagon published by the company were purposefully vague and provided carveouts for domestic surveillance by various intelligence agencies within the Defense Department. The full text of the contract has not been released publicly.
“OpenAI has said that the Department of War contractually agreed not to use ChatGPT in agencies that surveil American people,” said Brad Carson, a former congressman and general counsel of the Army who now leads the Washington D.C. policy group Americans for Responsible Innovation. “They have been happy to show contract language when it benefitted them, but they refuse to release to the public this contractual provision.”
“I’ve reluctantly come to the conclusion that this provision doesn’t really exist, and they are just trying to fake it,” Carson told NBC News. Carson recently founded an AI-focused super PAC which has received $20 million from OpenAI rival Anthropic.
Several legal experts agreed that greater transparency about the entire contract and any other key clauses is necessary to properly evaluate the company’s claims.
“We still need to see the whole contract to say anything with a reasonable level of confidence,” said Brian McGrail, senior counsel at the Center for AI Safety, a nonprofit research and advocacy group. “It’s definitely a step in the right direction, and I do want to give OpenAI some credit.”
OpenAI’s agreement with the Pentagon was announced shortly after Defense Secretary Pete Hegseth said he would label rival AI company Anthropic, which had long been in contract negotiations with the Pentagon, a supply chain risk to national security. Anthropic said the designation, which would force the Pentagon and contractors to stop using Anthropic’s services for defense purposes, has never before been publicly applied to an American company.
At an event in Sausalito, California, on Monday, retired Gen. Paul Nakasone, the former director of the National Security Agency and U.S. Cyber Command, said that the Pentagon should work to incorporate all leading American AI companies’ technology into national defense.
“We need Anthropic, we need OpenAI, we need all of our large language model companies to be partnering with our government,” Nakasone, who is a member of OpenAI’s board of directors, said at a conference sponsored by the Aspen Institute. “I think the supply chain piece is not good. The discussions over the weekend and the tenor of those discussions were tough for me to listen to. As an American citizen, someone who served in government, I just think that it’s not right, okay? This is not a supply chain risk.”

Anthropic had long maintained that the Defense Department could not use its AI systems for domestic mass surveillance or for direct use in autonomous weapons, though in December it made concessions allowing the military to use its systems for cyber and missile defense purposes. After a meeting between Anthropic CEO Dario Amodei and Hegseth last Tuesday, the Defense Department issued an ultimatum for Anthropic to reach an agreement by 5 p.m. ET Friday.
However, on Thursday, an Anthropic spokesperson told NBC News the Defense Department’s latest “language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”
But as Anthropic’s relationship with the Defense Department broke down, OpenAI’s deepened, with Friday’s announcement of a contract adding a fresh round of intrigue to a story that had already captivated much of the tech and defense community. In his post Monday night, Altman said the rush to ink a deal made the negotiations look “opportunistic and sloppy” even though OpenAI was “genuinely trying to de-escalate things and avoid a much worse outcome.”
Throughout the weekend and early this week, an army of legal experts has examined the latest public contract language from OpenAI, trying to determine whether the company’s terms actually added any substantive protections beyond the Defense Department’s “any lawful use” standard.
“I am confused about why the Pentagon would accept this language when they just tried to nuke Anthropic for asking for something very similar to this,” Charlie Bullock, senior research fellow at the Institute for Law and AI think tank, wrote on X after the updated language emerged.
Many legal experts argue that each word in the contract carries significant weight, as they say the government will take the widest possible reading of the contract’s terms.
“The pattern we’ve seen play out time and again in these surveillance debates is that the intelligence and national security community ends up interpreting exceptions in an extremely broad fashion, far more broadly than any normal reasonable person,” McGrail said. “And because so much of it is secret, there’s limited visibility for the public to push back.”
“So could there be some new loophole to be exploited here that we aren’t predicting? It’s totally possible,” McGrail added.
Experts have also focused on whether the contract is permanently anchored in today’s notions of legality, as they worry the government could alter the boundaries of “any lawful use” by issuing new executive orders or legal opinions.
The recent debate over military use of AI for domestic surveillance has focused in particular on the government’s ability to use commercially available data in its operations, since other methods of surveilling Americans can be harder to win legal approval for.
For years, companies providing or displaying ads on phones or laptops have been able to compile targeted data about users, including precise location data, and sell that information to various government agencies to identify individuals’ travel and behavioral patterns.
Mulligan, OpenAI’s national security leader, said in a Monday night X post that the contract’s “new language reinforces that domestic surveillance is disallowed under this agreement, including involving commercially acquired information.”
Sen. Ron Wyden, D-Ore., who in recent years has repeatedly warned that the federal government buys commercially available data on Americans for surveillance purposes, criticized the Pentagon for not acquiescing to Anthropic’s privacy concerns.
“The Defense Department is throwing a fit over Anthropic asking for the bare minimum ethical guardrails on how DOD uses its product,” Wyden said in an emailed statement. “That’s serious cause for alarm, given AI’s ability to turn disparate pieces of public or commercial data into highly revealing profiles of Americans. Location data, web browsing records, and information about mental health, political activities and religious affiliations are all available for pennies on the open market and could make Americans targets for doing things that are completely legal.”
“Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed, regardless of what the current, outdated laws on the books say.”
Amodei, the Anthropic CEO, has repeatedly remarked that firmer commitments from the Defense Department to not use AI to surveil Americans are necessary because the law has not caught up to AI’s increasingly powerful capability to analyze or parse vast troves of data. Recent research has also shown that individuals can be identified by today’s AI systems, even if the underlying data has purportedly been anonymized.
Protesters of OpenAI’s initial deal with the Pentagon surrounded the company’s San Francisco headquarters this weekend, leaving chalk messages encouraging employees to remain skeptical of the agreement’s terms, while uninstalls of OpenAI’s ChatGPT app surged following news of the deal.
Michael Horowitz, former deputy assistant secretary of defense for emerging capabilities and current professor of political science at the University of Pennsylvania, told NBC News that the dispute between the Pentagon and Anthropic went beyond the simple contract terms.
“This dispute reflects a breakdown in trust between Anthropic and the Pentagon, where Anthropic does not trust that the Pentagon will use their tech responsibly, and the Pentagon doesn’t trust that Anthropic will allow its tech to be used for what the Pentagon views as important national security use cases,” Horowitz said. “Part of that is cultural differences, part of that is politics, part of that is personalities.”