“Human soldiers can disobey unconstitutional orders, but “with fully autonomous weapons, we don’t necessarily have those protections,” Anthropic CEO Dario Amodei told Ross Douthat in a recent interview. Amodei also worried that AI could help the government track protesters and political opponents and “make a mockery of the Fourth Amendment.”
…
While not explicitly expressing a desire to use AI for those purposes, the Pentagon has insisted it will not accept Anthropic placing any limits on the military's use of its products. It wants Anthropic to grant the government the right to employ its products for “all lawful use,” according to CNN.
…
Anthropic's refusal hasn’t gone over well with the Trump administration. Hegseth has reportedly demanded that Anthropic remove its restrictions on certain military uses or else face consequences.
These consequences could include the Defense Department ending its business relationship with Anthropic as soon as Friday—which, OK, fine.
While it's not reassuring that the government won't commit to respecting these limits around robot death machines and mass spying, it's sadly not surprising. Ending its contract with Anthropic in response would be disappointing, but not outrageous or beyond bounds.
What pushes this above and beyond normal government villainy are the other potential consequences that Hegseth has been floating, including using the Defense Production Act to compel compliance or declaring Anthropic a “supply chain risk”—possibly both. An anonymous senior official reportedly told Axios that severing ties with Anthropic would be “an enormous pain in the ass” for which Anthropic would have to “pay a price.”
Declaring Anthropic a supply chain risk would mean anyone who wants to work with the U.S. military in any capacity must sever ties with the AI company.
“Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling,” writes Dean Ball, a senior fellow at the Foundation for American Innovation. “Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic.”
Declaring an entity a supply chain risk is usually a move reserved for risky dealings with foreign companies. Deploying this designation against a U.S. company just because its leaders have some morals and some backbone is highly undemocratic—the sort of move one would traditionally expect from the Chinese Communist Party, not a U.S. administration.
…
But it gets worse. Hegseth is also threatening to “invoke the Defense Production Act to force the company to tailor its model to the military’s needs” and remove all safeguards, per Axios.
So, here we have an AI company trying to act ethically and prevent government abuse of this technology and the government threatening to seize the company’s property and do with it whatever the Pentagon wants. If that’s allowed, it means no limits on what abuses the government can force private companies to participate in.”
It’s not clear how many layoffs are actually caused by AI. Companies may lay people off or freeze hiring for ordinary business reasons and simply blame AI so they don’t look bad.
“National labor unions are pushing AI regulations as a top policy priority amid polls showing growing and bipartisan majorities fear the technology’s potential impacts. Those include AI-fueled layoffs, youth suicides allegedly linked to AI chatbots and increasing use of high-tech surveillance technologies in workplaces. Layoffs have been particularly acute in California and other tech hubs as giants like Amazon and Meta shed staff to compete for AI dominance.
…
Newsom has defended his AI stance as striking a balance between addressing the safety concerns associated with the technology and promoting innovation to boost California’s budget, which is heavily reliant on tax income from Silicon Valley and the ultra-rich. In 2025, he signed an internationally watched AI safety bill from Democratic state Sen. Scott Wiener among a slate of other rules for chatbots and AI-generated deepfakes, despite vetoing a labor priority.
The California Labor Fed began unveiling its latest AI agenda this week. Proposed measures include SB 951, which would require employers laying off workers due to AI to give advance notice, as well as SB 947, which would again attempt to require human oversight over algorithms used to make discipline or firing decisions.
Gonzalez also vowed to continue work on a bill introduced in 2025, AB 1331, which would ban the use of surveillance tools in bathrooms and public spaces in the workplace. She said the rest of the bills will largely fall under addressing surveillance issues, safety concerns related to AI and combating joblessness.”
“instituting a “duty of care” for AI developers to “prevent and mitigate foreseeable harm to users” (per Blackburn’s summary of the bill). This duty would be enforced by the Federal Trade Commission (FTC).
…
“It’s basically just an invitation for lawyers to sue any time anything bad happens and someone involved in the bad thing that happened somehow used an AI tool at some point.
And then you have to go through a big expensive legal process to explain “no, this thing was not because of AI” or whatever. It’s just a massive invitation to sue everyone, meaning that in the end you have just a few giant companies providing AI because they’ll be the only ones who can afford the lawsuits.”
…
Section 11 of Blackburn’s bill is promoted as combating “the consistent pattern of bias against conservative figures demonstrated by Big Tech and AI systems.” But, in practice, it could require AI systems to have a pro-conservative slant—at least as long as President Donald Trump or other Republicans are in power.
The bill would set up “audits of high-risk AI systems to undergo regular bias evaluations to prevent discrimination based on protected characteristics, including political affiliation.”
…
“Right now, 230 lets platforms get frivolous lawsuits dismissed quickly at the motion to dismiss stage. This change would force every platform to go through lengthy, expensive litigation to prove they weren’t “facilitating” (an incredibly vague term) or “soliciting” third-party content that violates federal criminal law.
That’s gutting the main reason Section 230 exists. Instead of quick dismissals, you get discovery, depositions, and trials, all while someone argues that because your algorithm showed someone a post, you were “facilitating” whatever criminal content they claim to find.””
Chip export restrictions on China appear to be working: China keeps pressing US administrations to lift them, and Chinese companies say the restrictions are slowing their progress.
“The first giant leap backward has been a dangerous weakening of public data, the raw material required to train AI models. The federal government collects troves of data that families and businesses use every day — traffic patterns and census information, nutritional assessments and air quality reports, soil data and economic measures.
…
the administration has spent months ordering agency after agency to delete or hide data that’s politically inconvenient, and indiscriminately firing employees including those who manage valuable datasets.
…
Initial research shows the eye-popping potential for AI weather forecasts that could be precise down to a city block or accurate as far ahead as a month. But that’s only possible with the sensor data that the National Oceanic and Atmospheric Administration (NOAA) collects and curates from weather stations, ships, balloons, aircraft, satellites and buoys. The Trump administration has reduced weather balloon launches and removed hundreds of agency staff. It plans to cut back on NOAA satellites and shutter more than a dozen facilities that gather and curate data.
…
The Trump administration has also disrupted the collection of important health data. One example is data the Centers for Disease Control and Prevention gathered for nearly four decades from a representative sample of volunteers to understand risks in pregnancy. That valuable data now remains scattered and hard to access, because the CDC first shuttered the database to avoid collecting data on race and ethnicity in line with the administration’s executive order against “DEI,” and then placed the staff on administrative leave. That makes it harder to learn why Black maternal mortality is more than twice the national average, or how to protect all mothers and newborns. Data on vaccine safety, farm labor, hunger, greenhouse gas reporting and international development have also been deleted or degraded.
For AI to be effective against these immensely complex challenges, the smarter move would have been to expand data collection and support the agency staff who make sure datasets are robust and accessible.
…
With steady support from Congress over successive administrations, eight decades of federal research funding made it possible to start new industries, prevent and cure diseases, deter potential adversaries, understand and start to manage environmental risks and expand the boundaries of human knowledge. This research base is where AI itself came from, and to harness AI for the next generation of advances, federal support is essential.
Instead, the Trump administration has frozen grants, attacked leading research universities, curtailed high-talent immigration, ousted thousands of research agency staff and proposed a $44 billion reduction in federally funded research and development — the largest single-year cut in history.
While some take solace in the administration’s cuts sparing specific budget lines for AI research and the new executive order for Energy Department research using AI, that’s like buying more tractors while you kill off your crops. AI is a tool, not the goal itself. The federal government needs to fund not just AI researchers, but researchers in the full range of promising fields that need AI to advance.”
Trump’s Saudi Arabia deal may be a lie. Multiple countries have promised to invest big money in the U.S. in deals with Trump, and many have not materialized. Is this a real deal, or a misleading press release?
Even if they were real people, it would be misleading to cherry-pick certain people out of the many losing benefits and present them as if they are representative of who is losing benefits.