OpenAI said it will modify a contract with the Pentagon so that its models cannot be used for "domestic surveillance," following criticism that the deal gave military officials too much power without oversight.
"Consistent with applicable laws, including the Fourth Amendment...the AI system shall not be intentionally used for domestic surveillance of US persons and nationals," chief executive Sam Altman posted on X.
OpenAI secured the defense contract on Friday hours after its rival Anthropic refused to agree to unconditional military use of its Claude AI models, prompting backlash from US officials.
Anthropic insisted it needed guarantees against AI use for surveillance as well as autonomous weapons, and Altman said the Defense Department had agreed to "technical safeguards" for those areas.
Some 700,000 users reportedly canceled their ChatGPT subscriptions following the deal, with many switching to Anthropic's Claude AI, pushing the latter to the top spot on Apple's US App Store.
In response to growing skepticism about the value of such safeguards, Altman posted Monday that "to make our principles very clear...we are going to amend our deal."
Beyond the surveillance restrictions, he said the Pentagon agreed that OpenAI models would not be used by its intelligence agencies, including the National Security Agency.
"We shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication," Altman said.
"There are many things the technology just isn't ready for, and many areas we don't yet understand the tradeoffs required for safety."
He also criticized the Pentagon's punishment of Anthropic, which it declared a supply chain risk, meaning any company that works with the US military can have no dealings with Anthropic.
Anthropic has vowed to fight the order in court, calling it a "dangerous precedent for any American company that negotiates with the government."
AFP and staff reporter