Leading artificial intelligence (AI) company Anthropic has said it “cannot in good conscience” comply with a demand from the United States Department of War (DoW) to remove safety precautions from its AI models.
In a statement on February 26, Anthropic’s CEO, Dario Amodei, said the company would not allow its products to be used for mass surveillance or for fully autonomous weapons. This comes after the DoW threatened to cancel its contract with Anthropic and deem it a “supply chain risk”—a label usually reserved for U.S. adversaries that would carry significant financial implications—if it did not comply with the request to drop the safeguards by Friday.
“These threats do not change our position: we cannot in good conscience accede to their request,” said Amodei. “Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.”
Anthropic has reportedly secured a $200 million contract with the U.S. military to use its technology within the Pentagon’s classified networks. However, the relationship recently began to fracture amid suggestions that Anthropic was unhappy with the alleged use of its AI model Claude in the U.S. military’s abduction of Venezuelan President Nicolás Maduro in January.
According to a February 22 Washington Post report, Anthropic asked how its model was used in the operation, prompting the DoW to doubt whether the company was a reliable and trusted partner.
“They expressed concern over the Maduro raid, which is a huge problem for the department,” one administration official reportedly said.
In a January 9 memorandum, U.S. Secretary of War Pete Hegseth stated that the U.S. will only contract with AI companies that accede to “any lawful use” of their models and remove safeguards against mass surveillance and autonomous weapons, giving firms until the end of February to fall in line.
In his statement last week rejecting the DoW’s demands, the Anthropic CEO was keen to emphasize that the company has previously acted to defend the U.S.’s lead in AI, “even when it is against the company’s short-term interest.”
Specifically, he pointed out that Anthropic “chose to forgo several hundred million dollars in revenue” to cut off the use of Claude by firms linked to the Chinese Communist Party (CCP); shut down CCP-sponsored cyberattacks that attempted to abuse Claude; and advocated for strong export controls on chips to ensure a democratic advantage.
In addition, the company highlighted its strong track record with the U.S. public sector, government, and military.
“[Anthropic was] the first frontier AI company to deploy [its] models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers,” said Amodei. “Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.”
Explaining the company’s red lines, the Anthropic chief said it supported the use of AI for lawful foreign intelligence and counterintelligence missions, but that “using these systems for mass domestic surveillance is incompatible with democratic values” and “presents serious, novel risks to our fundamental liberties.”
Equally, Amodei said the company couldn’t support fully autonomous weapons—those that take humans out of the loop entirely and automate selecting and engaging targets—because frontier AI systems “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” said Amodei.
The DoW has yet to respond to Anthropic’s refusal to drop these two safeguards. But should Hegseth follow through on his threat to cancel the lucrative contract and designate the company a supply chain risk (a move that would likely bar defense contractors and subcontractors from using its products), Anthropic’s morally commendable stance may prove a costly one.
“It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” said Amodei, in his closing plea. “We remain ready to continue our work to support the national security of the United States.”
Source: https://coingeek.com/anthropic-resists-us-military-pressure-to-remove-ai-safeguards/