OpenAI's Sam Altman justifies Pentagon agreement, acknowledges ‘optics aren’t favorable’

Summary
OpenAI's deal with the Pentagon allows AI use in classified military networks amid backlash.
Critics, including OpenAI employees, argue that the deal lacks robust safeguards against misuse.
Altman emphasized the need for strong government and industry relations to de-escalate tensions.

Over the weekend, OpenAI CEO Sam Altman and other top executives took to social media to defend the company's recent agreement with the Department of Defense, which permits the use of OpenAI's models on classified military networks. The announcement came shortly after Anthropic, OpenAI's competitor, declined a similar Pentagon contract, prompting the Trump administration to brand Anthropic a "supply-chain risk."

OpenAI has faced considerable criticism for partnering with the Pentagon, especially since Altman had previously voiced support for Anthropic's refusal to accept government contracts that lack explicit safeguards against using AI for mass surveillance of American citizens or for autonomous weapons capable of striking without human intervention.

Some dissenters have launched a campaign urging ChatGPT users to switch to Anthropic's Claude chatbot, and there are signs the effort may be working: Claude recently overtook ChatGPT to become the most downloaded free app on the Apple App Store. Protesters also gathered outside OpenAI's San Francisco office, where supporters of Anthropic's refusal of the Pentagon contract voiced their views in chalk graffiti.

In response to the uproar, Altman and OpenAI's leadership directed some of their social media activity at alleviating concerns among their own workforce regarding the Pentagon contract. Notably, many OpenAI employees had previously signed an open letter endorsing Anthropic's decision to reject the Pentagon's demands and opposing the classification of Anthropic as a supply-chain risk. Altman also stated he opposes this designation.

Moreover, Leo Gao, an OpenAI employee who focuses on ensuring that AI models align with user intent and ethical considerations, publicly raised concerns about the Pentagon contract's safeguards. He criticized the agreement, suggesting that it allowed the Department of War too much leeway in how the AI could be employed, while accusing the company of engaging in superficial measures to appear compliant with ethical guidelines.

During a social media "Ask Me Anything" session on Saturday evening, Altman acknowledged that the deal with the Pentagon felt hurried and had poor optics. He insisted, however, that the swift action was necessary to de-escalate tensions between the military and Anthropic, which could potentially harm the broader AI sector by raising fears of government overreach, including nationalizing an AI lab or coercing private companies to deliver technology on governmental terms.

“If this indeed leads to de-escalation, we will be seen as visionaries who braved challenges for the good of the industry," Altman said. "If not, we may be viewed as reckless and hasty.” He emphasized the importance of fostering a constructive relationship between the government and AI developers in the coming years.

Furthermore, Altman expressed his disagreement with the supply-chain risk designation for Anthropic, asserting that enforcing it would negatively impact the industry and the country. He stated, “I believe this designation is misguided, and I hope it gets reversed, even if it means we face backlash for speaking out against it.”

OpenAI claimed it found a middle ground: the contract itself places no constraints on how the AI may be applied, satisfying the military's demand, while limits are preserved through existing laws and through technical guardrails built into its models, which block prompts that would breach OpenAI's ethical standards.

In detailing the contract, OpenAI shared an excerpt clarifying that its technology could be utilized for “all lawful purposes,” alongside specific references to U.S. laws and Department of War policies that limit the surveillance of citizens and outline guidelines for deploying autonomous weapons. Katrina Mulligan, OpenAI's head of national security partnerships, argued during the AMA session that these legal references provided more assurance against potential violations than critics suggested.

However, some legal scholars challenged Mulligan's claims, particularly concerning autonomous weapons policies. Charles Bullock from the Institute for Law & AI remarked that the Department of War could alter its policies at any time, meaning that the contract's stipulations might not guarantee perpetual adherence. Nonetheless, he acknowledged that the agreement did appear to bind the Department of War to current laws regarding mass surveillance.

The ambiguity surrounding the term "mass surveillance" raised concerns among dissenters, who questioned how OpenAI would address scenarios where military intelligence agencies might use its AI to process commercially available data for surveillance purposes. Mulligan stated that while government purchases of commercial datasets couldn't be entirely mitigated, the contract explicitly prohibits mass domestic surveillance as a binding condition for use.

She emphasized that OpenAI's approach, which includes multiple technical measures designed to restrict Pentagon capabilities, offers a more robust solution compared to contractual language alone, which seemed to be the primary focus for Anthropic. She asserted that effective AI deployment in classified settings necessitates layered safeguards and expert involvement.

Boaz Barak, another OpenAI executive, echoed this sentiment, critiquing Anthropic's concentration on contractual terms over broader safeguards. He noted that tech companies often face uncertainty regarding how the Department of War interprets the contract language.

Altman reflected on the discussions during the AMA, hinting at the larger question of whether AI should be a government undertaking. He remarked that while he sometimes thinks it might be better for AGI development to be state-run, it seems unlikely under current circumstances. He also expressed concern over critics who appeared to trust private tech leaders more than elected officials in managing AI's appropriate use.

“I strongly believe in the democratic process, and our elected officials have the authority to uphold our constitutional values. A world where AI firms act with more power than the government is alarming,” Altman stated. “Likewise, it would be distressing if our government deemed mass domestic surveillance permissible.”
