Pentagon and Anthropic
Chief Pentagon Spokesman Sean Parnell said Thursday that the DoD has “no interest” in using Anthropic’s models for fully autonomous weapons or to conduct mass surveillance of Americans, which he noted is illegal. He emphasized that the agency wants the company to agree to allow its models to be used for “all lawful purposes.”
The decision comes ahead of a Friday deadline to reach an agreement or face tough government measures.
The Pentagon previously requested that Anthropic, OpenAI, Google, and xAI allow the use of their AI models for “all lawful purposes.” Anthropic put up the most resistance, citing fears that its models could be used for autonomous weapons systems and mass domestic surveillance.
Company refuses to remove safeguards from its AI model over concerns about surveillance, autonomous weapons - Anadolu Ajansı
Anthropic CEO Dario Amodei says the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.
Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
Anthropic has reached a familiar crossroads for a growing tech company: how to scale without compromising the principles that set it apart.
The company's Claude chatbot is one of the few AI systems cleared for use in classified settings. But a standoff between Anthropic and the Trump administration is putting its government work at risk.
Anthropic CEO Dario Amodei said on Thursday the company “cannot in good conscience accede” to the military’s terms over the use of Claude.