Human soldiers can disobey unconstitutional orders, but “with fully autonomous weapons, we don’t necessarily have those protections,” Anthropic CEO Dario Amodei told Ross Douthat in a recent interview. Amodei also worried that AI could help the government track protesters and political opponents and “make a mockery of the Fourth Amendment.”
…
While not explicitly expressing a desire to use AI for those purposes, the Pentagon has insisted that it will not accept Anthropic placing any limits on the military’s use of its models. It wants Anthropic to grant the government the right to employ its products for “all lawful use,” according to CNN.
…
This refusal hasn’t gone over well with the Trump administration. Hegseth has reportedly demanded that Anthropic remove its restrictions on certain military uses or else face consequences.
These consequences could include the Defense Department ending its business relationship with Anthropic as soon as Friday—which, OK, fine.
While it’s not reassuring that the government won’t commit to respecting limits around robot death machines and mass spying, it’s sadly not surprising. Ending the contract in response would be disappointing, but not outrageous or out of bounds.
What pushes this above and beyond normal government villainy are the other potential consequences that Hegseth has been floating, including using the Defense Production Act to compel compliance or declaring Anthropic a “supply chain risk”—possibly both. An anonymous senior official reportedly told Axios that severing ties with Anthropic would be “an enormous pain in the ass” for which Anthropic would have to “pay a price.”
Declaring Anthropic a supply chain risk would mean anyone who wants to work with the U.S. military in any capacity must sever ties with the AI company.
“Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling,” writes Dean Ball, a senior fellow at the Foundation for American Innovation. “Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic.”
Declaring an entity a supply chain risk is usually a move reserved for risky dealings with foreign companies. Deploying this designation against a U.S. company just because its leaders have some morals and some backbone is highly undemocratic—the sort of move one would traditionally expect from the Chinese Communist Party, not a U.S. administration.
…
But it gets worse. Hegseth is also threatening to “invoke the Defense Production Act to force the company to tailor its model to the military’s needs” and remove all safeguards, per Axios.
So, here we have an AI company trying to act ethically and prevent government abuse of this technology, and the government threatening to seize the company’s property and do with it whatever the Pentagon wants. If that’s allowed, it means no limits on what abuses the government can force private companies to participate in.