OpenAI CEO Sam Altman discusses new Pentagon agreement: 'This technology is critically important'

Summary
OpenAI CEO Sam Altman defended the new Pentagon deal following Trump's directive against Anthropic.
The agreement allows OpenAI's AI models on classified networks while maintaining key safety principles.
Altman acknowledged the risk that OpenAI could later face the same supply-chain designation applied to Anthropic.

On Saturday, Sam Altman, CEO of OpenAI, defended the company’s recent partnership with the Pentagon, a day after President Trump ordered federal agencies to sever ties with its competitor, Anthropic. The development came shortly after a joint U.S.-Israeli military action against Iran. Altman addressed questions about the agreement on the social media platform X, explaining the rationale for collaborating with the Department of War (DoW) to deploy OpenAI’s artificial intelligence models within its classified networks.

In his announcement, Altman highlighted the mission of OpenAI, stating, “AI safety and broad distribution of benefits are at the heart of what we do. Our principles include prohibiting domestic mass surveillance and ensuring human accountability in the use of force, especially concerning autonomous weapon systems. The DoW has aligned with these principles, reflecting them in their legislation and policies, which we incorporated into our agreement.”

This partnership comes amid Trump’s directive for federal agencies to halt the use of Anthropic technology, implementing a six-month phase-out and escalating debates about the appropriate use of AI in military contexts. Secretary of War Pete Hegseth identified Anthropic as a “supply-chain risk to National Security.”

Anthropic's CEO, Dario Amodei, had declined the DoW's request to allow use of its AI for “all lawful purposes,” citing concerns about mass surveillance and fully autonomous weapons. Asked why OpenAI agreed where Anthropic did not, Altman said, "Anthropic seemed more focused on specific prohibitions in the contract rather than on the relevant laws, which we were comfortable addressing. They might have desired more operational control than what we were looking for."

Altman clarified that the DoW did not exert any pressure prior to the agreement, remarking that Pentagon officials were pleasantly surprised by OpenAI’s willingness to engage in classified projects. He revealed that OpenAI initially intended to limit its work with the Pentagon to non-classified areas, but talks accelerated when it became apparent that the DoW needed AI support.

“We recognized that the DoW was seeking an AI partner, and we have previously declined to pursue classified projects that Anthropic accepted. Our discussions started months ago regarding non-classified efforts, but this week saw a shift towards classified work. The DoW was accommodating to our needs, and we want to aid them in their critical mission,” Altman explained.

In response to accusations that the agreement was hastily arranged, Altman indicated that OpenAI acted swiftly to alleviate tensions. He expressed concern that the current situation could pose risks for Anthropic, fair competition, and the U.S. as a whole. “We made sure to negotiate terms that would be available to other AI labs as well,” he stated.

Altman also acknowledged the possibility of a future legal challenge that could impose the same supply-chain risk designation on OpenAI as was applied to Anthropic, saying, "If we must confront that challenge, we will, though it presents certain risks. However, I remain optimistic about a resolution and wanted to act quickly to bolster those chances."

Anthropic previously stated that the designation by Hegseth followed prolonged negotiations that reached a standstill over requests to exempt their AI model, Claude, from mass domestic surveillance and autonomous weapon use.

The discussion also touched on the potential for the federal government to nationalize OpenAI or other AI enterprises. Altman commented, "While I can't predict the future, I don't think it’s likely under the current circumstances, but I do see the value in a close partnership between government entities and the innovators creating this technology."

He expressed concern about surveillance practices conducted by the military, particularly towards foreign nations, stating, "While I accept that some level of surveillance is inevitable, it’s crucial that society contemplates the implications of such actions. A central principle I prioritize for AI is its democratization, and I fear that surveillance could undermine that ideal." However, he added that he respects the democratic process and does not believe it is solely his decision to make.
