On Friday night, amid tensions between the Department of Defense and Anthropic, Sam Altman, CEO of OpenAI, announced that his company had reached a new agreement with the Pentagon. The deal comes after the U.S. government blacklisted Anthropic for refusing to compromise on two ethical red lines for military applications: a rejection of mass surveillance of American citizens and of lethal autonomous weapons that operate without human oversight. Altman suggested that OpenAI had secured similar protections in its own contractual terms.
“Our commitment to safety includes strong prohibitions against domestic mass surveillance and ensuring human accountability for any use of force, even with autonomous weapons,” Altman stated. He noted that the Defense Department aligns with these principles, which are embedded in their contract. He referred to the Department of Defense using its former name, the Department of War, a nod to the administration's language preferences.
However, many in the AI and tech communities quickly raised questions about Altman's assertions. They were puzzled as to why the Pentagon would concede on these critical issues, particularly when it had previously asserted it would continue to collect data on Americans without hesitation.
Sources close to the Pentagon told The Verge that, contrary to Altman's claims, the Defense Department did not alter its stance. Instead, OpenAI agreed to operate within longstanding laws that have historically permitted mass surveillance, while publicly insisting that its ethical boundaries remained intact.
An insider familiar with the negotiations mentioned that OpenAI's contract is significantly more permissive than what Anthropic had proposed, primarily due to a crucial phrase: "any lawful use." According to this individual, the Pentagon was unwavering in its ambition to gather and analyze extensive data on Americans. A meticulous examination of OpenAI's contract reveals that it ultimately allows for the use of OpenAI’s technology to facilitate any actions deemed technically legal by the U.S. military, which has often utilized a broad interpretation of "legality" to justify extensive surveillance programs.
Miles Brundage, a former head of policy research at OpenAI, suggested on social media that, after hearing from legal experts and Pentagon officials, OpenAI employees might conclude the company had compromised its principles while portraying itself as holding the line, undercutting Anthropic's position in the process.
A spokesperson for OpenAI, Kate Waters, countered that the Pentagon had not sought mass surveillance capabilities and asserted that the agreement does not allow for breaches of privacy. “The system is not to be used for bulk, open-ended, or generalized data collection on Americans,” Waters emphasized.
The capabilities of AI systems could enable the military or other government departments to conduct surveillance with unprecedented precision. AI excels at pattern recognition, which means it could integrate a multitude of data points about individuals, from location data and online activity to personal finance information, public records, and CCTV footage. As Dario Amodei, CEO of Anthropic, has pointed out, leveraging such technologies for extensive domestic surveillance contradicts democratic values, because powerful AI could compile detailed personal profiles with ease.
While Anthropic has been adamant about prohibiting mass surveillance in its contracts, OpenAI's agreement appears firmly rooted in existing legal frameworks. The company claims its contract asserts that any intel activities involving private data must comply with numerous legal provisions, including the Fourth Amendment and various acts regulating national security and surveillance.
Nevertheless, skepticism abounds. In the post-9/11 era, intelligence agencies built surveillance systems that purportedly adhered to legal standards but were later exposed as highly invasive. Edward Snowden's 2013 revelations underscored the scale of domestic spying, including the daily collection of Verizon customer records and bulk data gathering from major tech firms through secret programs such as PRISM. Despite commitments to reform, tangible restrictions on these practices have been minimal. Mike Masnick of Techdirt remarked that OpenAI's agreement does indeed permit domestic surveillance, describing Executive Order 12333 as the legal vehicle by which the NSA shields its domestic operations.
Waters reiterated that the Pentagon had not requested permission for such data collection and that OpenAI’s agreement prohibits broad monitoring of U.S. residents’ private information, insisting that intelligence operations must adhere strictly to U.S. laws.
Amodei has expressed publicly that current laws have not evolved sufficiently to address the surveillance capabilities that AI can deliver. Altman stressed that OpenAI’s contract “incorporates our principles into existing laws and policies,” emphasizing that adherence to the present legal framework is paramount, even as laws evolve.
The ambiguity of OpenAI's language has raised concerns among experts. Sarah Shoker, a senior research scholar at UC Berkeley, pointed out that the wording of OpenAI's responses never specifies which actions are actually restricted. Qualifiers like "unconstrained," "generalized," and "open-ended" leave leadership the flexibility to adapt to circumstances rather than observe a clear prohibition.
Given the current terms, it appears that OpenAI’s technology could be utilized by the Pentagon for extensive intelligence operations against Americans, including large-scale data mining and profiling based on public records and purchased data.
OpenAI's stance on lethal autonomous weapons has also drawn scrutiny. The company claims its Pentagon contract aligns with existing Department of Defense regulations, which require human oversight in the use of such weaponry. However, critics note that this does not prohibit the use of autonomous weapons overall; it merely delineates circumstances under which human control is mandatory, leaving ambiguity in the practical application of the technology.
Significantly, the majority of the terms in OpenAI’s agreement are not novel; they reflect deals made previously by other AI firms working with the Pentagon. The technical safeguards outlined by OpenAI are not unique and their effectiveness remains questionable.
Altman argued that OpenAI was introducing measures to ensure compliance with its red lines, including providing security clearances to some employees and implementing classifiers to monitor AI systems. However, detractors argue that these safeguards have limited effectiveness and will not guarantee adherence to human oversight protocols during critical decisions.
Even as OpenAI maintains it has set boundaries on its technology's use, the flexibility of the Pentagon's interpretation of what constitutes "legal" may erode these safeguards. Historical patterns demonstrate that intelligence practices can evolve to circumvent initial restrictions.
OpenAI has urged the Pentagon to extend similar contractual terms to all AI companies, suggesting that compliance with these agreements should be the standard across the industry.
For their part, Defense Secretary Pete Hegseth and President Trump condemned the idea that a tech company could exert influence over military operations, affirming that such decisions must rest solely with military leadership and elected officials.
The contrast between OpenAI and Anthropic is stark: the latter rejected the terms the Pentagon presented and, as a result, faced severe sanctions, including designation as a “supply chain risk.” Despite the repercussions, support for Anthropic has surged within the tech community, with many employees rallying behind the company's principled stance. Notably, Anthropic's apps have recently surpassed OpenAI's offerings in downloads, a sign of the public's response to its commitment.
While Amodei has been portrayed as a champion for ethical AI, his stance on lethal autonomous weapons indicates openness to their future use, contingent upon advancements in their reliability. He has suggested that developing more dependable autonomous systems is essential for national defense capabilities, highlighting a complex landscape where the ethical considerations of AI in warfare are continually evolving.