OpenAI's Sam Altman reacts to agreement with the Department of Defense.

Summary
OpenAI partnered with the U.S. Department of War to provide AI tools for military use.
OpenAI's contract includes guardrails against domestic surveillance and autonomous weapon usage.
Concerns exist over loopholes in the contract and ethical implications of the partnership.

Amanda Yeo serves as an Assistant Editor at Mashable, where she focuses on a range of topics including entertainment, culture, technology, science, and social responsibility. Based in Australia, she delves into various subjects from video games and K-pop to films and tech gadgets.

Recently, OpenAI announced a partnership with the U.S. Department of War (DOW), aiming to deploy its artificial intelligence solutions in "classified environments." This collaboration was publicized on Saturday, with the company asserting that the agreement includes specific safeguards against utilizing its technology for mass surveillance of citizens or developing autonomous weapons. However, excerpts from the contract shared by OpenAI suggest there may be significant ambiguities within these protections.

The revelation of OpenAI's contract with the DOW came shortly after President Donald Trump disclosed that the U.S. government would cease using technology from its competitor, Anthropic, specifically its AI model, Claude. In a post on Truth Social, Trump aired his grievances about Anthropic's insistence that the DOW adhere to its terms of service.

Dario Amodei, CEO of Anthropic, later explained the conditions that conflicted with the DOW's demands. He stated that the DOW sought the removal of protections preventing the misuse of its technology for domestic mass surveillance and fully autonomous weaponry. Although Amodei noted that such uses could be deemed lawful, he criticized the current legal landscape for lagging behind the rapid advancement of AI technology.

He added, "In certain circumstances, AI can potentially undermine democratic principles rather than uphold them," and pointed out that some applications are beyond the safe and reliable capabilities of today’s technology.

OpenAI's terms seem to be more appealing to the Trump administration, thus filling the void left by Anthropic’s exit. OpenAI maintains that its agreement with DOW incorporates similar safeguards against mass surveillance and autonomous weaponry, in addition to a new restriction against utilizing its technology for critical automated decisions, such as those linked to "social credit" systems.

"Our safety protocols remain entirely under our control; deployment occurs via the cloud, ensuring that usage is closely monitored by authorized OpenAI personnel, supported by robust contractual obligations," OpenAI said in its announcement. The company also argued that its safeguards are more enforceable because the technology is supplied only through cloud solutions, allowing for greater oversight.

Responding to questions regarding Anthropic's inability to finalize a similar deal, OpenAI expressed hope that Anthropic and other organizations might consider partnerships under these terms.

However, parts of the contract shared by OpenAI indicated that while its technology is restricted from applications in autonomous weaponry and domestic surveillance where illegal, there are provisions under which its AI could be utilized for these purposes in compliance with DOW policies. The contract specifies that the DOW may employ the AI system for "all lawful purposes" that adhere to operational requirements and established safety protocols, asserting that any use in autonomous systems must pass thorough validation and testing.

Katrina Mulligan, leading OpenAI's national security partnerships, addressed concerns on LinkedIn, emphasizing that usage policies are not the sole safeguards, reiterating the importance of cloud-based deployment and personnel involvement.

Mulligan summarized the DOW's position: the department wanted OpenAI to develop its model independently while the DOW retained control over operational decisions, unconstrained by external usage policies.

Despite OpenAI's reassurances, skepticism remains regarding the effectiveness of these supposed safeguards. Concerns were voiced by users when OpenAI CEO Sam Altman conducted a Q&A session on X, during which he attempted to mitigate apprehensions related to the DOW partnership. He acknowledged that the deal came together quickly and may not present the best optics, asserting that fostering a strong relationship between the government and technology companies is crucial in the coming years.

While this partnership seems to tighten the relationship between OpenAI and the U.S. government, it also appears to have distanced the company from its civilian user base. When asked whether "lawful" applications could still enable mass surveillance, Altman pointed to a statement by U.S. Under Secretary of War Emil Michael, who claimed that the DOW does not engage in domestic surveillance of U.S. citizens, stating that such actions would be illegal and against American values.

Skepticism persists among the public, particularly given past revelations by whistleblower Edward Snowden about unlawful mass surveillance conducted by the National Security Agency (NSA), which operates under the DOW's predecessor, the Department of Defense. The persistence of such doubts is evident in reactions from social media users who questioned the government's trustworthiness on these matters.

While Altman affirmed that he would not permit OpenAI's technology to contribute to mass domestic surveillance, citing constitutional concerns, many users expressed disbelief based on past instances where his commitments were questioned.

Altman further commented that as a private entity, OpenAI lacks the authority to make ethical determinations, arguing that such governance should be settled through democratic processes. Critics countered that legal compliance does not excuse unethical behavior.

Following this news, numerous ChatGPT users have reportedly started canceling their subscriptions, with many turning to Anthropic's Claude, which has now surpassed ChatGPT in downloads in the U.S. Apple App Store. One user on Reddit lamented, "OpenAI just made a deal with the devil," expressing disappointment that the originally non-profit organization has shifted towards military contracts, suggesting that revenue has overshadowed its founding principles.

Disclosure: In April 2025, Ziff Davis, the parent company of Mashable, filed a lawsuit against OpenAI, alleging copyright infringement in the training and operation of its AI systems.
