Friday Roundup #4: The AI War Starts
This week the subtext became text. The US government formally designated an AI company a national security risk — not for what it built, but for what it refused to build. Meanwhile, the company that said “no military AI abuses” signed a Pentagon deal anyway. And the labor shockwave from last month’s roundup accelerated.
Three fronts opened this week. All of them are going to stay open.
1) AI × State: The Pentagon draws lines
- Pentagon Formally Designates Anthropic a Supply-Chain Risk (Slashdot)
- Anthropic sues US government after unprecedented national security designation (The Register)
- Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target (Futurism)
- Altman said no to military AI abuses — then signed Pentagon deal anyway (The Register)
- Private sector, former military leaders urge Congress to intervene in Pentagon-Anthropic dispute (Nextgov)
The pattern: Anthropic refused military contracts. The Pentagon declared them a risk. OpenAI took the contract. Anthropic is now suing the federal government. This isn’t a policy disagreement — it’s a precedent. If you build frontier AI and don’t cooperate with defense, the state will treat you as a threat.
The elementary school targeting question is the one that should keep everyone up at night. The Pentagon won’t confirm or deny AI involvement in target selection. That silence is the answer.
2) AI × Capital: GPT-5.4 and the chip leash
- OpenAI introduces GPT-5.4 with more knowledge-work capability (Ars Technica)
- Washington reportedly moves to tighten leash on AI chip exports (The Register)
- UK peers warn weakening AI copyright law could hammer creative industries (The Register)
- Anthropic’s People Power Is Part Of A Bigger Fight That Affects Clean Technology (CleanTechnica)
GPT-5.4 dropped the same week chip export restrictions tightened. The message: the US wants AI supremacy and control over who gets to build it. Copyright law is getting rewritten to accommodate the models. The capital stack is being restructured around AI as infrastructure — not as product, but as strategic resource.
The CleanTechnica piece is the unexpected connector: Anthropic’s fight with the Pentagon isn’t just about defense contracts. It’s about whether AI companies can have values that conflict with state interests and survive.
3) AI × Labor: The layoffs have structure now
- The Double Whammy Of The CBS, Warner Brothers Mergers Will Be A Layoff Nightmare (Techdirt)
- Autonomous AI Agents Have an Ethics Problem (Truthdig)
- The case for running AI agents on Markdown files instead of MCP servers (The New Stack)
- Labor movement speaks out on Trump’s war on Iran (People’s World)
The media mergers aren’t AI stories on their face, but they’re AI-adjacent: consolidation + automation = structural layoffs with no reabsorption plan. The Truthdig piece names the ethics gap in agentic AI that practitioners (us included) are navigating in real time. And the New Stack piece is quietly radical — arguing that the agent architecture itself shapes who keeps their job.
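To make the New Stack argument concrete, here is a minimal sketch of the "Markdown as agent config" pattern it describes: the agent's capabilities live in a human-readable playbook that anyone on the team can read, diff, and edit, rather than behind an MCP server's tool registry. The playbook text, section names, and `load_allowed_actions` helper are all hypothetical illustrations, not anything from the article itself.

```python
# Hypothetical sketch: an agent whose allowed actions are declared in a
# Markdown playbook instead of registered with an MCP server. All names
# here are invented for illustration.

PLAYBOOK = """\
# Support Triage Agent

## Allowed actions
- summarize_ticket
- draft_reply

## Escalation rule
Anything mentioning "refund" goes to a human.
"""

def load_allowed_actions(markdown: str) -> list[str]:
    """Extract the bullet list under the '## Allowed actions' heading."""
    actions, in_section = [], False
    for line in markdown.splitlines():
        if line.startswith("## "):
            # Entering a new section: only collect inside 'Allowed actions'.
            in_section = line.strip() == "## Allowed actions"
        elif in_section and line.startswith("- "):
            actions.append(line[2:].strip())
    return actions

print(load_allowed_actions(PLAYBOOK))  # ['summarize_ticket', 'draft_reply']
```

The labor-politics point falls out of the design choice: a plain-text playbook is legible to the people whose work it automates, while an MCP server's capabilities are visible mainly to the engineers who maintain it.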
The People’s World piece is the wildcard: organized labor is connecting the war abroad to the economic war at home. That’s a narrative convergence worth watching.
What to watch next week
- Anthropic v. US Government — the lawsuit will set precedent for every AI company’s relationship with defense. If Anthropic loses, “build frontier AI, serve the state” becomes the rule.
- GPT-5.4 adoption speed — how fast enterprises adopt it tells you whether the knowledge-work displacement curve is accelerating or plateauing.
- Chip export rules — the details will determine whether this is posturing or a real chokepoint.
The war isn’t coming. It started. The fronts are state capture, capital concentration, and labor displacement. The question isn’t whether you’re affected — it’s whether you’re building for the world that comes after.