OpenAI changes deal with US military after backlash
2026-03-03 19:37:28 · English original


Chris Vallance and Laura Cress, technology reporters

AFP: OpenAI CEO Sam Altman, in a dark blue suit, white shirt and blue and white tie, speaking into a microphone in front of the American flag

OpenAI says it has agreed changes to the "opportunistic and sloppy" deal it struck with the US government over the use of its technology in classified military operations.

But the episode has raised questions over how AI is used in war and how much power rests with governments and private companies.

A statement made on Saturday by OpenAI claimed its agreement with the Pentagon had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's".

But on Monday, Altman posted on X to say further changes were being made, including making sure its system would not be "intentionally used for domestic surveillance of U.S. persons and nationals".

As part of the new amendments, intelligence agencies such as the National Security Agency would also not be able to use OpenAI's system without a "follow-on modification" to the contract.

Altman added the company had made a mistake by rushing "to get this out on Friday".

"The issues are super complex, and demand clear communication," he said.

"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

OpenAI has faced a backlash from users since announcing it was working with the Pentagon.

According to data from Sensor Tower, the number of people uninstalling ChatGPT has surged since the news of OpenAI's partnership with the Department of Defense was announced on Friday.

The market intelligence firm said the daily average uninstall rate was up by 200% compared to normal rates.

Meanwhile, Anthropic's Claude rose to the top of Apple's App Store ranking, where it remained on Tuesday.

The AI model was blacklisted by the Trump administration following Anthropic's refusal to drop a corporate "red-line" principle that its technology should not be used to create fully autonomous weapons.

Despite this, it has since emerged that Claude is being used in the US-Israel war with Iran, with BBC News' US partner CBS News reporting it was still in use on Tuesday.

The Pentagon has declined to comment on its dealings with Anthropic.

How AI is used by the military

AI is used in a number of ways by the military, for example to streamline logistics or to quickly process large amounts of information.

The US, Ukraine, and Nato all use tech from Palantir, an American company which provides data analytics tools to government customers for intelligence gathering, surveillance, counterterrorism and military purposes.

The UK Ministry of Defence recently signed a £240m contract with the firm.

At the end of last year, the BBC spoke to some of those involved in integrating Palantir's AI-powered defence platform Maven into Nato.

The software brings together a huge range of military information, from satellite data to intelligence reports, which can then be analysed by commercial AI systems such as Claude to help make "faster, more efficient, and ultimately more lethal decisions where that's appropriate", said Louis Mosley, head of Palantir's UK operations.

BBC/Palantir: A screenshot from a demo of Palantir's AI system, showing a satellite image of military vehicles, each marked with a purple box labelled "enemy asset"

But large language models can make mistakes, or even make things up - known as "hallucinating".

Lieutenant Colonel Amanda Gustave, chief data officer for Nato's Task Force Maven, stressed there was human oversight, adding that they were "always introducing a human in the loop" and that it "would never be the case" that an AI would "make a decision for us".

Palantir, unlike Anthropic, does not support a blanket ban on autonomous weapons, but says there should be a "human in the loop".

But Professor Mariarosaria Taddeo of Oxford University told the BBC that with Anthropic out of the Pentagon, "the most safety-conscious actor" was now "out from the room".

"That is a real problem," she added.



Summary

OpenAI has agreed to amend its deal with the US government over the use of its technology in classified military operations, after Sam Altman conceded the original agreement "looked opportunistic and sloppy". The company says the amendments will guard against domestic surveillance and restrict intelligence agencies' access without further contract modifications. Following criticism of its Pentagon partnership, OpenAI saw a surge in ChatGPT uninstalls, while competitor Anthropic's Claude rose to the top of the App Store. Concerns persist about AI's role in military operations, including questions of human oversight and potential misuse.