In 2017, Daron Acemoglu and Pascual Restrepo published a paper with an uncomfortable title: “Robots and Jobs: Evidence from US Labor Markets.” The paper was dry, methodical, and damning. For every robot introduced per thousand workers, employment fell by 0.2 percentage points and wages fell by 0.42 percent. That’s not a projections paper. That’s a measurement paper. They measured it happening.
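To make those magnitudes concrete, here is a back-of-envelope sketch — a naive linear extrapolation of the paper's headline estimates, not the authors' actual econometric model:

```python
# Naive linear extrapolation of Acemoglu & Restrepo's headline estimates:
# each additional robot per 1,000 workers is associated with roughly a
# 0.2 percentage-point drop in employment and a 0.42% drop in wages.
# Illustration only; the paper's estimates come from a richer model.

EMPLOYMENT_EFFECT_PP = -0.2   # percentage points per robot per 1,000 workers
WAGE_EFFECT_PCT = -0.42       # percent per robot per 1,000 workers

def displacement_effect(robots_per_thousand: float) -> tuple[float, float]:
    """Return (employment change in pp, wage change in %) for a given
    increase in robot density, under a straight-line extrapolation."""
    return (robots_per_thousand * EMPLOYMENT_EFFECT_PP,
            robots_per_thousand * WAGE_EFFECT_PCT)

# Example: an increase of 3 robots per 1,000 workers
emp_pp, wage_pct = displacement_effect(3)
print(f"employment: {emp_pp:+.1f} pp, wages: {wage_pct:+.2f}%")
# → employment: -0.6 pp, wages: -1.26%
```

Even small increases in robot density compound into visible labor-market effects at this rate, which is why the paper landed the way it did.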
This was before the current AI wave. Before GPT-3, before ChatGPT, before the agent explosion. They were tracking industrial robots — the automation of physical labor in manufacturing. And they found what economists had long debated but rarely quantified: productivity gains from technology don’t automatically flow to workers. They can flow exclusively to capital, leaving workers demonstrably worse off even as the aggregate economy grows.
Acemoglu has been the most rigorous voice in economics arguing what many technologists and executives prefer not to engage with: that the standard “technology creates jobs” story is historically conditional, not inevitable. That you can get productivity gains and wage depression simultaneously. That the question isn’t whether AI is powerful — it obviously is — but whether its power gets distributed or captured.
This is the Acemoglu Problem. And it’s the intellectual foundation of everything Neoteric is arguing.
Power and Progress
In 2023, Acemoglu and Simon Johnson published Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. The thesis is long-form and historical: across a millennium of technological transformation, productivity gains have repeatedly failed to translate into shared prosperity. Not just once. Not just in edge cases. Repeatedly, as a pattern, as a structural feature of how new technologies interact with power.
The agricultural revolution. The printing press. The steam engine. Railways. Electrification. Computing. The internet. In each case, the technology produced genuine productivity gains — more output per unit of input — and in each case, the initial distribution of those gains was heavily skewed toward whoever controlled the technology.
Sometimes the gains eventually diffused. Sometimes they didn’t. The key variable wasn’t the technology itself. It was the political and institutional context around it: who had leverage, who set the rules, and whether workers had sufficient countervailing power to claim a share of the gains they were producing.
The industrial revolution is the canonical example. The steam engine made factories dramatically more productive. Real wages in Britain during the first century of industrialization were roughly flat, while capital returns were not. Workers died younger, worked longer hours, lived in worse conditions than they had under the guild system that was being displaced. The story of Dickens isn’t the exception to industrialization — it’s the baseline, for generations, before countervailing forces (labor unions, workplace safety laws, eventually the welfare state) shifted the distribution.
Acemoglu’s point isn’t that technology is bad. It’s that technology is power, and power, left to its own gravity, concentrates.
The Mechanism
The specific mechanism Acemoglu identifies is worth understanding precisely, because it explains why the optimist case — “AI will create new jobs to replace the ones it destroys” — is not guaranteed.
When a new technology increases productivity, it does so by making some existing task cheaper or better. That’s good. But whether it also creates new demand for labor depends on the nature of the productivity gain and how the gains are deployed.
If a robot replaces a factory worker, and the factory owner keeps the efficiency gains as profit rather than reinvesting in labor-intensive activities, the productivity gain is real but the labor demand is gone. The aggregate economy grew; the worker’s situation did not.
The optimist case says: lower production costs lead to lower consumer prices, which leave consumers with more purchasing power, which creates demand for new goods and services, which requires labor to produce. This has historically been true — eventually. The timeline and the distribution are the problem. “Eventually” may mean a generation. The workers displaced by industrialization did not automatically transition into a booming service economy. Many of them died in poverty.
For AI specifically, Acemoglu flags a compounding problem: unlike previous automation waves, which mostly replaced physical or routine cognitive labor, AI is beginning to target expertise. Legal research. Medical diagnosis. Financial analysis. Code generation. The social contract of the information economy was: invest in education, develop expertise, and that expertise becomes your leverage. AI is beginning to commoditize expertise the same way industrial machinery commoditized physical skill.
“The possibility that technological improvements that increase productivity can actually reduce the wage of all workers,” he wrote, “is an important point to emphasize because it is often downplayed or ignored.”
Not some workers. All workers.
The Counter-Arguments
There are real counter-arguments to Acemoglu’s thesis, and they’re worth taking seriously rather than dismissing.
The demographics argument: We’re going to run out of workers before we run out of jobs. The population wave that produced the 20th century’s labor surplus is reversing. Birth rates are falling across the developed world. The workers who are turning 30 in 2053 are already born — and there aren’t many of them. Japan, South Korea, and much of Europe are already in demographic compression. An AI that displaces workers into a labor-scarce economy looks different from an AI that displaces workers into a labor-surplus economy.
This is a real point. It’s also time-limited and geographically uneven. Sub-Saharan Africa has a very different demographic trajectory than Japan. And even if we eventually run short of workers, the transition period — where AI productivity gains hit faster than demographics create scarcity — could be brutal for a generation.
The equalizer argument: Marc Andreessen said it in 2005, before much of this was real: “A 14-year-old in Romania has all the information, all the tools and software to apply knowledge however they want.” The phone in your pocket gives you access to more computing power, more information, and more communication tools than a Fortune 500 company had in 1985. That’s real democratization of capability.
I use this argument myself. My home AI setup gives me leverage that would have required an enterprise IT department a decade ago. A small business owner using AI today can compete with capabilities that used to be reserved for companies with entire analytics departments.
But the equalizer argument proves too much if you’re not careful. The 14-year-old in Romania has access to the same tools as a Fortune 500 company, yes — and the Fortune 500 company has an eight-figure AI budget, a team of engineers, preferential API access, proprietary training data, and the institutional relationships to turn AI outputs into real economic value. The gap between access and leverage is not trivial.
The productivity dividend argument: If AI dramatically raises productivity, the resulting wealth could fund universal basic income, robust public services, retraining programs, reduced working hours. The productivity is real; the question is distribution. Policy can address distribution even if markets don’t.
This is technically correct and practically undercooked. The political economy of redistribution is not simple. The industries that benefit most from AI concentration have significant political power. The workers most affected by AI displacement have less. Assuming the productivity dividend will be democratically distributed requires assuming a political system capable of redistributing it, which requires addressing power imbalances that are themselves downstream of economic concentration.
Why This Is Different This Time
Every technology transition produces “this time is different” arguments — on both sides. Usually they’re both partially right. I want to name three specific features of this wave that I think do make it structurally unusual.
Speed. The transition from agricultural to industrial labor took a century. The transition from industrial to information labor took roughly two generations. The current AI wave is compressing timelines in ways that existing social institutions — education systems, retraining programs, social safety nets — were not designed to handle. A worker who invested twenty years in developing legal expertise faces a fundamentally different situation when their expertise is commoditized in five years than when the same thing happened over fifty.
Cognitive scope. Previous automation waves targeted physical or routine cognitive labor. This one targets non-routine cognitive labor — the work that information-economy workers were told was safe. When the automation frontier reaches knowledge work, the remaining human value proposition becomes less clear. What is expertise for, economically, if expertise can be replicated on demand?
Ownership concentration. The infrastructure of this AI wave — the compute, the training data, the foundational models — is concentrated in a small number of very large entities. Google, Microsoft, Amazon, Meta, and a handful of others control the stack. The open-source movement (Llama, Mistral, Gemma, DeepSeek) is a real counterforce, but it operates at the margin of an infrastructure concentration that is historically unusual. Previous transformative technologies — the printing press, the steam engine, electrification — eventually became commoditized infrastructure. Whether AI foundational models become genuine infrastructure or remain proprietary leverage is one of the central strategic questions of the next decade.
What This Means for Us
I want to be direct about why this matters for Neoteric specifically.
The Neoteric thesis — democratize AI capabilities, empower workers, dismantle exploitative structures — is a response to the Acemoglu Problem. Not an answer to it, exactly. A direction.
The MTP says: the most important thing we can do is put powerful AI tools in the hands of individual workers and worker collectives, so that the productivity gains don’t just flow to the companies that employ them. A worker using AI to increase their own output has a stronger negotiating position. A worker cooperative using AI to eliminate the need for outside capital altogether has an even stronger one.
This isn’t utopian. It’s structural. If AI concentrates capital because AI is controlled by capital, then distributing AI control distributes the gains. Not perfectly, not automatically, but directionally.
The AFGE members that Rosie serves are federal workers facing an employer that has extraordinary informational advantages, institutional leverage, and the ability to make contract negotiations expensive and opaque. AI tools that help those workers organize better, communicate faster, and process information they couldn’t otherwise process shift that balance slightly. They don’t eliminate the power asymmetry; they erode it.
That’s the Neoteric argument at ground level. Not “AI will solve inequality” — Acemoglu’s historical record is too discouraging for that. But: who controls the AI, and what does it do for the people who don’t already have power?
The question isn’t whether AI is transformative. Obviously it is. The question is whether you’re on the inside of the transformation or the outside — and whether you’re doing anything to move people from the outside in.
Acemoglu ends Power and Progress with a conditional optimism: the historical pattern is not destiny. It has been possible for societies to redirect technological trajectories toward broader prosperity, when organized labor, democratic institutions, and progressive policy created sufficient countervailing force. The pattern is real; the pattern can be bent.
That’s the job.
Next in the series: Post 04 — Digital Cultures. Not “societal norms in tech world” but: the emergence of human-AI culture in real-time. New norms forming right now between agents and their operators. What “digital native” means when the digital entity has agency.
Sources: Daron Acemoglu & Pascual Restrepo, “Robots and Jobs” (2017/2019); Daron Acemoglu & Simon Johnson, “Power and Progress” (2023) — epub in References/; HBR “AI Is Making Economists Rethink the Story of Automation” (2024); NBER Economics of AI Agenda; Michael Wade voice memo recordings 4-6 (May 29, 2024) — notes in research/voice-memo-analysis.md; Marc Andreessen quote (2005); Neoteric SERIES-PLAN.md and MTP document