Pentagon, Anthropic
Defense Secretary Pete Hegseth gave the AI company Anthropic an ultimatum about the military's use of its technology, known as Claude.
The Defense Department has been feuding with Anthropic over military uses of its artificial intelligence tools. At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI on the planet.
If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.
Defense officials in the Trump administration also warned they could designate Anthropic, which makes the AI chatbot Claude, as a supply chain risk — or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.
Tensions between the Pentagon and Anthropic (ANTH.PVT) are escalating. Axios tech policy reporter Maria Curi joins Market Domination host Josh Lipton to break down the conflict: how it began and its historical precedents.
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
Anthropic insists on limits on how its technology is used and could be labeled a supply chain risk if it fails to accept the military’s demands.
Anthropic CEO Dario Amodei says the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.
Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.