[Image: Abstract illustration of legal documents with fine print dissolving into surveillance imagery]

Friday Roundup #5: The Weasel Words Edition


Last week, the Pentagon declared Anthropic a national security risk for refusing to build surveillance tools. This week, the story got worse — in both directions.

OpenAI published its amended Pentagon contract. Anthropic’s CEO started using the administration’s preferred name for the Defense Department. And a Guardian investigation revealed Amazon is importing warehouse surveillance practices to its software engineers. The pattern isn’t subtle anymore: comply or be destroyed, and compliance means whatever the government says it means.

1) The Pentagon Deal: Reading the Fine Print

The marquee story this week — and the one that should alarm everyone — is TechDirt’s Mike Masnick dissecting the actual language of OpenAI’s amended Pentagon contract. He appeared on The Verge’s Decoder podcast to lay it out, and the original reporting is devastating:

The contract says OpenAI’s tools won’t be used for “intentional” domestic surveillance of US persons, “consistent with applicable laws.” Masnick’s point: the US government has spent decades arguing that mass surveillance of Americans is already consistent with applicable law. “Intentional” is doing the work of an escape hatch, not a guardrail. The NSA’s position has always been that sweeping up American communications is “incidental” — and incidental isn’t intentional.

The word “unconstrained” shows up too: the system won’t be used for “unconstrained monitoring.” Which raises the question nobody in the contract answers: constrained by whom? Defined how? Reviewed when?

Sam Altman called the original deal “opportunistic and sloppy.” The amendment is neither — it’s carefully constructed to look like a restriction while functioning as a permission slip.

2) Anthropic Bends the Knee

Last week Anthropic was the company that said no. This week, Dario Amodei published a statement titled “Where things stand with the Department of War.”

Not the Department of Defense. The Department of War — the administration’s rebranding that Congress never authorized. Amodei uses it throughout. Six days after being praised for standing on principle, the CEO can’t bring himself to use the department’s legal name.

TechDirt nails it: “Before you even get to the substance, the document has already bent the knee.” The statement is individually rational and collectively dystopian — a serious person at a serious company writing seriously in an environment that has gone insane.

The Guardian’s broader analysis connects it to the industry trend: OpenAI dropped its military ban in 2024. Google’s Project Maven protests are ancient history. The window where AI companies could refuse military work is closing, and Anthropic’s capitulation — however grudging — may mark the end of it.

3) The Labor Front: 45,000 and Counting

The Verge’s deep-dive on Mercor — the platform where white-collar workers train the AI that replaces them — is the most visceral piece of the week. Workers rating AI outputs, labeling data, providing the human judgment that makes the models better at doing their jobs. The headline says it all: “You Could Be Next.”

Fortune’s frame is the money quote: CEOs aren’t cutting jobs because AI can do them yet. They’re cutting jobs to fund AI — redirecting the wage bill into compute. The displacement isn’t coming from capability. It’s coming from capital allocation.

And the Guardian’s Amazon investigation shows the endgame: warehouse surveillance practices — performance monitoring, AI cameras, point-deduction systems — migrating upward to software engineers and office workers. The reverse centaur isn’t theoretical anymore. It’s a performance improvement plan.

4) The Practitioner Corner

Willison’s piece is the counterweight: a practitioner’s argument that AI coding tools should optimize for code quality, not developer replacement. The centaur case. It’s worth reading alongside the Anthropic jobs-exposure data, which maps which professions face the most disruption. The two pieces together frame the choice: tools that make workers better, or tools that make workers unnecessary.

What to Watch Next Week

  • Anthropic v. US Government lawsuit — still the defining legal fight. If the government can designate an AI company a security risk for having ethics policies, the precedent destroys the idea of corporate AI safety.
  • OpenAI contract details — will anyone with standing challenge the “weasel words”? The ACLU and EFF are both circling.
  • March layoff numbers — 45,000 with half the month still to go. The trend line is acceleration, not plateau.
  • Amazon AI surveillance — expect more reporting as sources inside the company realize the warehouse model is coming for them too.

The weasel words are the story. Not just in OpenAI’s contract — everywhere. “Responsible AI.” “Human in the loop.” “Consistent with applicable law.” The gap between what the words say and what the systems do is where the damage happens. Read the fine print.