In 2005, Marc Andreessen made an observation that sounded like optimism at the time but reads like prophecy now: “A 14-year-old kid in Romania or Brazil or India has access to all the information, all the tools, all the software to apply knowledge however they want.”

He was talking about the internet. But he could have been describing what’s happening right now with AI — except the tools available to that 14-year-old have gotten incomprehensibly more powerful, and nobody in the professional class seems ready to talk about what that actually means.

Here’s what it means: the primary source of labor’s value in the knowledge economy has been expertise. Knowing things other people don’t. Having skills that are scarce. Employers have been the gatekeepers of the infrastructure needed to apply that expertise at scale — the servers, the software, the data pipelines, the distribution channels, the brand. You could be brilliant in isolation, but you needed a company to turn your brilliance into output that reaches anyone.

AI is dissolving that gatekeeping function. Not slowly. Not theoretically. Right now, today, in ways that most employers haven’t fully internalized and most workers haven’t fully exploited.

This is the ninth post in the Neoteric series, and it’s where the threads of the previous eight converge on the question that matters most: what does AI mean for the people who work for a living?

The Acemoglu Problem, Revisited

I wrote about Daron Acemoglu’s work in the third post of this series, but it’s worth revisiting the core argument here because it sets up the tension that everything in this post exists to resolve.

Acemoglu’s thesis, developed across two decades of research and crystallized in Power and Progress (co-authored with Simon Johnson), is deceptively simple: technological improvements that increase productivity can actually reduce the wages of all workers. Not some workers. All of them. This happens because productivity gains accrue to capital, not labor, when the technology automates tasks rather than augmenting the people who perform them.

This isn’t speculation. It’s the documented pattern of every major technological revolution. Steam power increased output per worker dramatically — and then crushed the wages of the workers it didn’t eliminate entirely. Mechanized agriculture made farming vastly more efficient — and then depopulated rural economies. Computing automated middle-tier cognitive work — and then hollowed out the middle class.

The mechanism is consistent across centuries: new technology increases the supply of output while decreasing the demand for the labor that previously produced it. Capital captures the productivity surplus. Wages stagnate or decline. Inequality widens. Eventually — sometimes decades later, sometimes generations — new categories of work emerge that absorb displaced labor. But the transition period is brutal, and it doesn’t distribute evenly.

If you’re a knowledge worker in 2026, you’re watching this happen in real time. AI can now write competent code, draft legal briefs, generate marketing copy, analyze data sets, create visual designs, manage projects, and synthesize research. Every one of those capabilities represents a task that previously required a human with expensive training and scarce expertise. Every one of them is now available to anyone with an internet connection and forty dollars a month.

The Acemoglu prediction would be: this crushes wages in knowledge work. Capital captures the surplus. The people who own the AI platforms get richer; the people who used to sell cognitive labor get poorer.

And he’s right — if the pattern holds. If AI follows the path of steam and computing. If the technology is deployed primarily by employers to replace employee tasks. If the gatekeeping function of organizations remains intact.

But here’s the thing: I don’t think any of those conditions are inevitable this time. And the reason is something that Acemoglu’s framework doesn’t fully account for.

The Means of Production Are in the App Store

Every previous technological revolution had a common structural feature: the technology was expensive, large-scale, and institutionally controlled. A steam engine required a factory. A mainframe required a corporation. A server farm required venture capital. Individuals couldn’t own the means of production because the means of production were too expensive, too complex, and too physically large for individuals to deploy.

AI is different in a way that matters fundamentally. The means of production are, for the first time, genuinely available at individual scale.

I’m not making an abstract point. I’m describing my lived experience. I run a personal AI agent on a Mac mini that costs less than a year of cable TV. It manages my email, tracks my projects, drafts my writing, monitors my fitness data, handles my family’s scheduling, and interfaces with a dozen other systems. It runs 24/7. It holds more context about my work than any manager I’ve ever had. And the total cost — hardware, API credits, everything — is less than what most professionals spend on coffee.

This isn’t a toy. This is infrastructure that, five years ago, would have required a team of engineers and six figures in cloud spend. I know because I’ve worked in organizations that built exactly this kind of thing, at exactly that scale, with exactly those resources.
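To make that concrete, here is a minimal sketch of the pattern: just the persistence layer that lets an agent accumulate context across sessions. Everything in it (the file name, the topics, the ContextStore class) is illustrative rather than a description of my actual setup, and a real deployment would wrap this around a model API and the integrations above.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

class ContextStore:
    """Hypothetical append-only memory for a personal agent (illustrative)."""

    def __init__(self, path: Path):
        self.path = path
        # Reload whatever the agent learned in earlier sessions.
        self.entries = json.loads(path.read_text()) if path.exists() else []

    def record(self, topic: str, note: str) -> None:
        # Each interaction appends durable context instead of discarding it.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "topic": topic,
            "note": note,
        })
        self.path.write_text(json.dumps(self.entries, indent=2))

    def recall(self, topic: str) -> list[str]:
        # Everything the agent has accumulated on a topic, oldest first.
        return [e["note"] for e in self.entries if e["topic"] == topic]

path = Path("agent_context.json")
path.unlink(missing_ok=True)  # start fresh for this demo run
store = ContextStore(path)
store.record("newsletter", "Draft goes out Thursdays; tone: conversational.")
store.record("union", "Next local meeting agenda due by the 1st.")
print(store.recall("newsletter"))
```

The mechanics are trivial on purpose. The point is the ownership: the context lives in a file on hardware you control, not in an employer's system.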

The same pattern is repeating everywhere I look. A freelance designer uses AI to produce work that previously required a three-person studio. A solo consultant builds data analysis pipelines that compete with McKinsey deliverables. A writer publishes and distributes a newsletter with production quality that would have required an editorial staff. An independent developer ships products that would have needed a startup’s worth of headcount.

The 14-year-old in Romania that Andreessen described in 2005? She doesn’t just have access to information anymore. She has access to capabilities. She can generate, analyze, create, and deploy at a level that was previously gated behind institutional infrastructure. And that changes the equation that Acemoglu describes in a way that no previous technology has.

Leverage Without Employers

Here’s the shift, stated plainly: AI gives individuals the leverage that previously only employers could provide.

Leverage, in the economic sense, is the ability to amplify your output beyond what your direct labor can produce. A factory worker’s leverage comes from the machinery they operate — but the employer owns the machinery. A knowledge worker’s leverage comes from the organizational infrastructure around them — the data systems, the distribution channels, the brand, the collective expertise of colleagues. But the employer controls that infrastructure.

What AI does — especially AI running on hardware you own, configured to your workflows, holding your context — is provide individual-scale leverage without the employer dependency. You don’t need a company’s servers when you have your own. You don’t need a team’s collective knowledge when your agent holds your project context across months. You don’t need a marketing department when AI can help you create, distribute, and analyze content.

This doesn’t mean companies become irrelevant. Coordination at scale still requires organizations. Large physical infrastructure still requires capital concentration. There are categories of work — chip fabrication, pharmaceutical development, aerospace engineering — where individual leverage doesn’t substitute for institutional capability.

But for the enormous category of work that we loosely call “knowledge work” — consulting, writing, design, software, analysis, marketing, project management, strategy — the balance of leverage is shifting from organizations to individuals in a way that is historically unprecedented.

And that means Acemoglu’s mechanism might not play out the way it has before.

The Two AI Economies

What’s actually emerging is not one AI economy but two, running simultaneously and in tension with each other.

The first AI economy looks exactly like what Acemoglu predicts. Large corporations deploy AI to automate employee tasks, reduce headcount, and capture productivity gains as profit. This is happening now. Every major tech company has announced AI-driven efficiency initiatives. “Doing more with less” is the polite framing. “Replacing people with software” is the honest one.

In this economy, AI is a tool of capital. It follows the historical pattern precisely. Productivity goes up, employment goes down, wages stagnate, shareholders prosper. If this were the only AI economy, Acemoglu would be straightforwardly correct, and the policy response would be straightforward too: regulation, redistribution, retraining programs. The usual toolkit for managing technological displacement.

The second AI economy is something new. Individuals deploy AI to build independent capabilities, bypass employer gatekeeping, and capture their own productivity gains directly. A developer who ships a SaaS product alone. A consultant who delivers enterprise-grade analysis without a firm behind them. A creator who produces and distributes content at professional quality without a studio or publisher.

In this economy, AI is a tool of labor. Not labor as traditionally understood — wage labor performed within an employment relationship — but labor as independent productive capacity. The individual captures the surplus because the individual owns the means of production.

These two economies coexist right now, and the question of which one dominates will determine whether AI becomes another cycle of capital concentration or something genuinely different.

What “AI for Good” Actually Means

The original framing of “AI for Good” — the one you hear at conferences and read in corporate social responsibility reports — focuses on healthcare, education, environmental monitoring, disaster response. These are real applications and they matter. AI diagnosing diseases in under-resourced clinics is genuinely important. AI optimizing food distribution in refugee camps saves lives.

But this framing has a fundamental limitation: it treats AI as a tool that powerful institutions deploy for the benefit of less powerful people. It’s charity with algorithms. It preserves the existing power structure and just makes it slightly more benevolent.

The more radical — and I’d argue more important — version of AI for good is AI that restructures power itself. Not a corporation using AI to help workers, but workers using AI to not need the corporation.

Consider what this looks like in practice:

A union organizer uses AI to analyze contract language, model bargaining scenarios, and draft communications that would previously have required a law firm on retainer. The information asymmetry between management and labor — which has always been one of management’s structural advantages — collapses.

A small-town journalist uses AI to do the investigative data work that only major newspapers could afford. FOIA request analysis, pattern detection in public records, financial document review — tasks that require expensive specialists or massive staff, now available to a reporter with a laptop.

A home healthcare aide uses AI to manage their own scheduling, client communications, credentialing, and billing — the administrative overhead that agencies charge 40% premiums to handle. They can go independent without losing operational competence.

A tradesperson uses AI to handle estimates, permitting research, material sourcing, and client management — the business operations that keep skilled workers dependent on contractors who take a cut of their labor.

In every one of these cases, AI isn’t replacing the worker. It’s replacing the employer’s leverage over the worker. The worker’s core skill — organizing, reporting, caregiving, building — remains human. What changes is their need for an institution to make that skill economically viable.

That’s not “AI for good” in the conference-keynote sense. It’s AI for power — specifically, the redistribution of economic power from organizations to individuals.

The Expertise Paradox

There’s a counterargument here that deserves honest engagement: if AI makes expertise widely available, doesn’t that devalue the experts?

Yes. And that’s not entirely a bad thing.

The expert class — lawyers, consultants, analysts, engineers — has derived its economic power from scarcity. Expertise was expensive to acquire (years of education, decades of experience) and expensive to access (high hourly rates, retainers, salaries that reflect the scarcity premium). This created a class of professionals who were well-compensated precisely because their knowledge was rare.

AI makes that knowledge less rare. A first-year associate can now produce legal research that previously required a fifth-year associate. A junior analyst can generate insights that previously required a senior partner’s experience. The scarcity premium erodes.

If you’re an expert whose value proposition is “I know things you don’t,” this is threatening. And you should feel threatened, because that value proposition is dissolving.

But if you’re an expert whose value proposition is “I can apply judgment in complex situations” — if your expertise is less about information and more about wisdom, less about knowing the answer and more about knowing which questions to ask — then AI is an amplifier, not a replacement.

The paradox resolves like this: AI devalues expertise-as-information and amplifies expertise-as-judgment. The experts who thrive are the ones who always understood that their real value wasn’t what they knew but how they thought. The ones who struggle are the ones who were selling information access in a world where information just became free.

This is uncomfortable for the professional class, and I say that as a member of it. But it’s structurally liberating for everyone else. When a small business owner can get competent legal analysis without a $500/hour attorney, when a community organization can get strategic consulting without a Big Four engagement, when a worker can get financial planning without a wealth advisor’s minimum — the knowledge economy becomes more accessible to the people who’ve been priced out of it.

The experts don’t disappear. But their role shifts from gatekeeper to guide. And that shift benefits the many at the expense of the few.

The Infrastructure Gap

I need to be honest about what’s missing from the optimistic version of this story, because intellectual honesty is more important than narrative convenience.

The second AI economy — the one where individuals deploy AI as personal leverage — requires infrastructure that not everyone has. It requires hardware, connectivity, technical literacy, and the economic stability to invest time in learning new tools before they pay off.

A Mac mini is cheap by professional infrastructure standards. It’s not cheap if you’re working two jobs to make rent. Open-source AI models are free to download. They’re not free to run if you don’t have the hardware or the knowledge to set them up. API credits are affordable. They’re not affordable if your monthly budget is measured in single dollars rather than hundreds.

The digital divide isn’t just about internet access anymore — though that’s still a problem in too many places. It’s about capability access. The tools exist. The knowledge to use them exists. But the combination of time, money, hardware, connectivity, and technical baseline that you need to deploy AI as personal infrastructure — that’s not universally available. Not even close.

This is where the “AI for Good” framing and the “AI for Workers” framing actually need each other. Institutional AI-for-good efforts — the ones I was mildly dismissive of earlier — matter enormously when they focus on closing the infrastructure gap. Libraries that provide AI access. Community organizations that offer training. Open-source projects that lower the technical barrier. Policy that ensures affordable connectivity and hardware.

The goal isn’t for everyone to run their own AI agent on their own hardware. That’s my setup, and it works for me, but it’s not a universal prescription. The goal is for everyone to have access to AI-powered leverage that isn’t controlled by their employer. That might mean personal infrastructure, or it might mean cooperatively owned platforms, or community-managed services, or public utilities. The form matters less than the principle: the leverage should belong to the worker, not the institution.

The AFGE Example

I want to get concrete here, because abstractions are comfortable and reality is messy.

I’m involved with AFGE Local 2328 — the American Federation of Government Employees, representing federal workers. I’m not going to get into the politics of the current moment except to say that federal workers are facing extraordinary pressure right now, and their union is one of the few institutions standing between them and outcomes that would be, in a word, bad.

Here’s what AI looks like in that context, practically:

Member communications. The local needs to reach 800+ members across multiple agencies, shifts, and locations. Previously, this required a communications chair with professional writing skills and significant time. Now, an AI agent can draft communications that are clear, accurate, and on-message, adapted to different audiences and channels. The volunteer who runs comms doesn’t need to be a professional writer — they need to be someone who knows what needs to be said and can review what the AI produces.

Contract analysis. Federal labor agreements are dense, technical documents. Understanding the implications of proposed changes — what management is really asking for, where the precedents are, what the grievance history shows — traditionally required a labor attorney or a steward with decades of experience. AI doesn’t replace that experience, but it makes it accessible to newer stewards who are learning the role. It levels up the whole organization.

Meeting prep, event logistics, compliance tracking, membership outreach — the operational overhead that eats volunteer hours. AI handles the administrative machinery so that human volunteers can focus on the human work: talking to members, building solidarity, showing up at hearings.

None of this is hypothetical. I’ve watched it happen. And the pattern is exactly what I described above: AI isn’t replacing the union. It’s replacing the resource constraints that have historically limited what a volunteer-run local can accomplish. It’s giving a small organization the operational capabilities of a much larger one.

That’s AI for workers in the most literal sense: AI in service of organized labor. And it works.

The Portfolio Worker

There’s a broader shift happening that the AI-for-workers story fits into, and it’s worth naming explicitly: the emergence of the portfolio worker.

A portfolio worker doesn’t have a single employer. They have a portfolio of income streams, clients, projects, and ventures, managed as a unified practice rather than a collection of gigs. This isn’t the same as the “gig economy,” which mostly means doing the same work with fewer protections. Portfolio work means deploying a coherent set of skills and capabilities across multiple contexts, retaining ownership of the output, and building cumulative value rather than renting yourself by the hour.

AI makes portfolio work viable at a scale that wasn’t possible before. The overhead of managing multiple streams — different clients, different schedules, different deliverables, different invoicing — is exactly the kind of operational complexity that AI handles well. A personal agent that holds context across all your projects, manages your calendar, tracks your deliverables, handles your communications — that’s not a luxury for a portfolio worker. It’s infrastructure.

I’m living this. I write, I consult, I build tools, I manage a local’s communications, I’m developing a service business around AI deployment. Each of those would be a part-time job’s worth of operational overhead if managed manually. With AI handling the connective tissue — the scheduling, the context-switching, the administrative machinery — I can maintain coherent attention across all of them without drowning in logistics.

The portfolio model isn’t for everyone. Some people want the stability and community of traditional employment, and that’s legitimate. But for the people who want independence — who want to capture the full value of their labor rather than selling it at a discount to an employer who provides leverage — AI is what makes it possible.

The Policy Question

Individual action matters, but it’s not sufficient. The structural question — which AI economy wins — depends partly on policy.

The policies that matter aren’t the ones that dominate the current discourse. AI safety regulation, model licensing, liability frameworks — these are important for the first AI economy, the corporate one. But they do almost nothing for the second AI economy, the individual one.

What the second AI economy needs:

Universal broadband. Not as a consumer convenience but as economic infrastructure. If AI-powered leverage is the new means of production, internet access is the new factory floor. Areas without reliable, affordable broadband are areas where workers can’t participate in the second AI economy.

Open-source AI as public good. The current open-source AI ecosystem is healthy but fragile, dependent on the strategic decisions of a handful of companies (Meta, Mistral, Google) to release models openly. Public funding for open-source model development — the way we publicly fund roads and bridges — would ensure that AI leverage isn’t dependent on corporate goodwill.

Portable benefits. Health insurance, retirement savings, disability coverage — these are currently tied to employment in the US. Portfolio workers and independents either go without or pay inflated individual-market rates. Decoupling benefits from employment is a prerequisite for the second AI economy to be accessible to anyone who isn’t already wealthy enough to self-insure.

Education that teaches tool-building, not just tool-using. The current rush to teach “AI literacy” mostly means teaching people to use ChatGPT. That’s like teaching computer literacy by teaching people to use Microsoft Word. What matters is understanding the architecture — how models work, how to deploy them, how to evaluate their output, how to build systems that incorporate them. The difference between being a user and being a builder is the difference between renting and owning.

None of these are radical proposals. Universal broadband is already bipartisan consensus (if not bipartisan action). Open-source funding has precedent in every other domain of critical infrastructure. Portable benefits have been proposed by think tanks across the political spectrum. Technical education is something everyone claims to support.

The barrier isn’t ideas. It’s the same barrier it’s always been: incumbent institutions that benefit from the existing arrangement and have the political power to preserve it. Companies that profit from being the gatekeepers of leverage don’t want workers to have their own leverage. Insurance companies that profit from employer-tied benefits don’t want portable alternatives. ISPs that profit from artificial scarcity don’t want universal broadband.

This is Acemoglu’s thesis applied to policy: the institutions that control the technology shape the rules to ensure the technology benefits them. Breaking that pattern requires organized political action — which, circling back to the beginning, is exactly the kind of thing that AI can help workers do more effectively.

The Compounding Effect

I want to close with the observation that matters most, because it’s the one that gives me genuine optimism about how this plays out.

Every previous technological revolution had a characteristic that limited its democratizing potential: the returns to scale were institutional. A bigger factory was more efficient than a smaller one. A larger corporation could amortize technology costs across more revenue. Scale advantages accrued to organizations, not individuals.

AI reverses this. The returns to scale are personal. The longer you work with an AI system, the more context it holds, the more effective it becomes, the more leverage it provides. My agent is more useful to me after six months than it was after six days — not because the underlying model improved, but because the accumulated context of my projects, preferences, patterns, and decisions makes every interaction more productive.

This is compounding leverage. It doesn’t require institutional scale. It doesn’t require capital concentration. It requires one thing: time with the tools. And time is the one resource that’s distributed democratically.
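The compounding can be shown with a toy model, in which all names and numbers are hypothetical: each day, some share of the questions you would otherwise re-answer is already covered by stored context, and that share rises as the store grows.

```python
# Toy model of compounding context; names and numbers are hypothetical.
# "coverage" is the fraction of a day's questions already answerable
# from context accumulated on previous days.
def coverage(context: set[str], queries: list[str]) -> float:
    if not queries:
        return 1.0
    return sum(q in context for q in queries) / len(queries)

context: set[str] = set()
daily_queries = [
    ["invoice format", "client A scope"],
    ["invoice format", "client B scope", "client A scope"],
    ["client B scope", "newsletter schedule", "invoice format"],
]

for day, queries in enumerate(daily_queries, start=1):
    print(f"day {day}: {coverage(context, queries):.0%} already covered")
    context.update(queries)  # today's answers become tomorrow's context
```

Again, the mechanics are trivial; what matters is where the accumulation lives. When the store belongs to the worker, the rising coverage is leverage the worker keeps.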

The Acemoglu pattern breaks when the technology compounds at individual scale instead of institutional scale. When a worker’s AI-powered capabilities grow with use, the worker accumulates leverage independently. They don’t need an employer to provide it. They build it themselves, over time, through the work itself.

That’s not a guarantee that everything works out. The infrastructure gap is real. The policy barriers are real. The incumbent resistance is real. The first AI economy — the corporate, capital-concentrating one — has enormous structural advantages and will fight to remain dominant.

But the second AI economy has something the first one doesn’t: it gives people what they actually want. Not a job. Not a paycheck. Not the privilege of selling their labor at a discount to an institution that captures the surplus. What people want is agency — the ability to do meaningful work, capture its full value, and build something that compounds over time.

AI, deployed at individual scale, makes that possible. Not inevitable. Possible.

The difference between possible and actual is what we build next.


This is part 9 of a 12-part series. Previously: “Decentralized Futures” — the architecture of owning your own infrastructure. Next: “Empowering Workers in the Age of AI” — the full manifesto for democratizing AI capabilities and building decentralized worker collectives.

Working independently with AI? Building leverage outside of employment? I want to hear your story. Find me on Bluesky.