AI, Pentagon and Anthropic
The Pentagon previously asked Anthropic, OpenAI, Google, and xAI to allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, fearing its models could be used for autonomous weapons systems and mass domestic surveillance.
American AI company Anthropic has until 5:01 pm ET to give in to the Pentagon’s demands or face being labeled a “supply chain risk,” a type of designation usually reserved for companies thought to be extensions of foreign adversaries.
After the downing of a Customs and Border Protection drone, the F.A.A. closed the airspace above Fort Hancock, Texas.
The Defense Department has been feuding with Anthropic over military uses of its artificial intelligence tools. At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI on the planet.
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.
Anthropic's negotiations with the Pentagon over AI safeguards have stalled, with CEO Dario Amodei expressing concerns about the Department of Defense's final offer. The company is unwilling to accept terms that could allow its Claude models to be used for mass surveillance or autonomous weapons.