I’ve been building toward this post for nine essays now, and I still almost didn’t write it.
Not because the ideas aren’t ready. They’ve been ready since 2024, when I first put together a document I called a “Massive Transformative Purpose” — a term borrowed from Salim Ismail’s exponential organizations framework, meaning a goal so audacious that the prospect of achieving it is terrifying. The MTP I wrote then was blunt: democratize the capabilities of AI, enabling workers to independently orchestrate their livelihoods and effectively eliminate the need for traditional corporate structures that do not serve their interests.
Two years later, I haven’t softened it. If anything, I’ve hardened it — because everything I’ve built, studied, and written about since then has confirmed that the core analysis is correct. The question was never whether AI would transform labor. The question is who captures the transformation.
The reason I almost didn’t write this is that manifestos are embarrassing. They sound grandiose. They invite the kind of criticism that says “nice vision, what have you actually built?” And that criticism is often deserved — the tech industry is littered with manifestos that were written to attract funding rather than to guide action.
So let me be clear about what this is. This is not a startup pitch. This is not a whitepaper for a token launch. This is the distillation of everything this series has explored — the Acemoglu Problem, decentralized identity, DAOs, trust infrastructure, the compounding leverage of personal AI — into a statement of what I believe needs to happen, why, and how it could work. It’s addressed to the people I’ve been writing for all along: builders at the edges, workers who’ve started to see through the arrangement, and anyone who suspects that the current system’s promise of meritocratic advancement is a rigged game.
If that sounds grandiose, fine. Some problems require audacity.
The Diagnosis
Let me state the problem plainly, because clarity matters more than nuance here.
The modern employment relationship is a structure in which workers trade time, skill, and autonomy for wages, while the surplus value of their labor accrues to shareholders. This is not a conspiracy theory. It’s the explicitly stated purpose of the corporation. Maximizing shareholder value — the doctrine that Milton Friedman articulated and that became economic orthodoxy — means, definitionally, minimizing what goes to everyone else. Workers, communities, the environment — these are costs to be optimized, not stakeholders to be served.
This wasn’t always the deal. In the post-war period, a combination of strong unions, regulatory pressure, and social norms created a version of capitalism where productivity gains were shared more broadly. From 1948 to 1973, median worker compensation grew roughly in lockstep with productivity. After 1973, the lines diverged. Productivity kept climbing. Wages flatlined. The gap is now a canyon.
Acemoglu’s work, which I explored at length in Post 3, explains the mechanism: when new technology automates tasks rather than augmenting the workers who perform them, the productivity gains flow to the owners of the technology rather than the operators. This is the pattern of every industrial revolution. Steam, electricity, computing — each one made work more productive and workers relatively less powerful.
AI is the latest iteration. But it has a characteristic that previous technologies didn’t: it’s available at individual scale.
A power loom required a factory. A mainframe required a corporation. Even early personal computing required institutional infrastructure — networks, servers, support — to be economically productive. AI doesn’t. A laptop, an API key, and time are sufficient to build capabilities that would have required a department a decade ago. I built a personal AI agent that manages my infrastructure, tracks projects, processes voice memos, and holds institutional memory — running on a Mac mini in my house. Not a data center. Not a cloud deployment. A consumer computer on a shelf.
This is the crack in the Acemoglu pattern. When the technology is accessible at individual scale, it can augment workers rather than just automating their tasks. Whether it will depends on choices we make now — which is what the rest of this manifesto is about.
The Five Pillars
When I wrote the original MTP document, I organized the strategic objectives into five categories. They hold up. Let me expand each one from a bullet point into an argument.
1. Innovation in AI Tools: Building for Humans, Not Enterprises
The current AI tooling landscape has a fundamental design problem: it’s built for enterprises, not individuals.
OpenAI’s pricing model rewards scale. Anthropic’s Claude works best inside organizational workflows. Google’s Gemini is welded to the Google ecosystem. These are extraordinary technologies, but their economic models are designed to serve the same institutional customers that every technology wave has served. The individual user gets the consumer tier — the free version, the rate-limited API, the product whose real product is you.
What workers need is different. They need AI tools that are:
Locally deployable. Not “cloud with a local cache” but genuinely local — running on hardware the user owns, processing data that never leaves their machine. Open-source models like Llama, Mistral, Gemma, and DeepSeek make this possible today. The performance gap between local and cloud models is closing fast. For many tasks — summarization, classification, code generation, data analysis — a 4B parameter model running on consumer hardware is sufficient.
Composable. Not monolithic platforms but modular tools that pipe into each other. The Unix philosophy — small programs that do one thing well, connected through standard interfaces — is exactly right for personal AI. I use a transcription tool (mlx-whisper) piped into a summarizer piped into a note-filing system piped into a task extractor. Each piece can be swapped independently. That’s not a product. It’s infrastructure.
Context-accumulating. The most important feature of a personal AI agent isn’t its model quality. It’s its memory. An agent that knows your projects, your preferences, your communication patterns, your decision history becomes exponentially more useful over time. This is the compounding leverage I described in Post 9 — and it’s something that enterprise tools, designed for interchangeable employees, are structurally incapable of providing.
Owned, not rented. When your AI capabilities live on someone else’s server, behind someone else’s API, subject to someone else’s terms of service, you don’t have tools — you have a dependency. OpenClaw, the platform I use, is open-source and runs on my hardware. If the company behind it disappears tomorrow, my agent still works. That’s not a nice-to-have. It’s the difference between empowerment and a more sophisticated form of the same old dependence.
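To make the composability and context-accumulation points concrete, here is a minimal Python sketch. The stages are stubs standing in for real tools like mlx-whisper or a local summarization model, and the function names and JSON memory file are illustrative assumptions, not any particular product:

```python
import json
from pathlib import Path

# A "memory" that accumulates context across runs: each pipeline
# result is appended to a JSON file the user owns outright.
class Memory:
    def __init__(self, path: str) -> None:
        self.path = Path(path)

    def recall(self) -> list[dict]:
        if self.path.exists():
            return json.loads(self.path.read_text())
        return []

    def remember(self, record: dict) -> None:
        history = self.recall()
        history.append(record)
        self.path.write_text(json.dumps(history, indent=2))

# Unix-style stages: each takes text and returns a result, and each
# can be swapped independently. Real versions would call a local
# transcriber or model; these stubs just illustrate the shape.
def transcribe(audio_ref: str) -> str:
    return f"transcript of {audio_ref}"

def summarize(text: str) -> str:
    return text[:60]  # stand-in for a local summarization model

def extract_tasks(text: str) -> list[str]:
    return [line for line in text.splitlines() if line.startswith("TODO")]

def pipeline(audio_ref: str, memory: Memory) -> dict:
    transcript = transcribe(audio_ref)
    record = {
        "source": audio_ref,
        "summary": summarize(transcript),
        "tasks": extract_tasks(transcript),
    }
    memory.remember(record)  # context accumulates with every run
    return record
```

Each stage can be replaced without touching the others, and the memory is plain JSON on disk that the owner can read, back up, or migrate — no vendor sits between you and your own history.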
These tools are already being built. The open-source AI movement — which I’ll explore more in Post 11 — is producing an extraordinary volume of capable, deployable, modifiable models and frameworks. What’s missing is the connective tissue: the tooling that makes it easy for a non-developer to assemble a personal AI stack. That’s the innovation gap that matters most, and it’s where the most impactful work is happening right now.
2. Education and Training: Knowledge as Infrastructure
Here’s an uncomfortable truth about the “learn to code” narrative that’s dominated discussions about workforce adaptation: it’s a displacement strategy. It shifts responsibility for structural economic change from institutions to individuals. “We automated your job, but don’t worry — here’s a Coursera subscription.”
Real education for the AI era looks different. It has three layers:
Literacy: Understanding what AI can and can’t do. Not at the technical level — at the practical level. What kinds of tasks benefit from AI assistance? Where does it fail? How do you evaluate whether an AI-generated output is trustworthy? This is the equivalent of digital literacy in the 1990s — a baseline competence that everyone needs and almost nobody has.
Application: Using AI tools to augment your existing skills. A carpenter who uses AI to estimate materials, generate client proposals, and manage scheduling isn’t learning to code. They’re applying AI to their domain expertise. This is where the real leverage lives — not in becoming a technologist, but in becoming better at what you already do by having a capable assistant.
Infrastructure: For those who want to go deeper — building, modifying, and maintaining personal AI systems. This is the equivalent of the system administrator in the early internet era: someone who understands the plumbing well enough to keep it running and customize it. Not everyone needs this skill, but every community needs people who have it.
The current educational establishment is catastrophically bad at all three layers. Universities are debating whether to allow AI in classrooms while their students are already using it to write papers, plan projects, and learn subjects the university doesn’t teach. Corporate training programs are “introduction to ChatGPT” seminars that teach people to use a product rather than understand a paradigm.
What works instead: peer-to-peer learning communities. I’ve seen this firsthand. When I demoed my AI agent to friends, two of them set up their own within a week. Not because I gave them a curriculum — because they saw a working example and had access to someone who could answer questions when they got stuck. That’s the education model that scales: practitioners teaching practitioners, anchored in lived experience rather than theoretical frameworks.
3. Community Building: Collectives, Not Platforms
The gig economy was supposed to be liberation. Instead, it became piecework with an app — workers stripped of benefits, stability, and bargaining power, managed by algorithms optimized for platform profit rather than worker welfare. Uber, DoorDash, Fiverr, Upwork — these are not communities. They’re marketplaces where individuals compete against each other for the privilege of accepting below-market rates, while the platform extracts a percentage for the service of connecting them to customers they could have found themselves.
AI makes a different model possible: worker collectives with shared infrastructure.
Imagine a collective of freelance consultants who share a pool of AI tools — research agents, proposal generators, market analyzers, scheduling systems. Each member’s individual work feeds the collective’s knowledge base. The collective negotiates with clients as a unit, providing the reliability and breadth of a firm without the overhead of management hierarchy. Revenue splits are governed by smart contracts (the kind of transparent, deterministic governance I discussed in Post 7). Reputation is portable — built on verifiable credentials rather than platform ratings that vanish when you leave.
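The revenue-split mechanics can be sketched without any blockchain at all: what matters is that the rule is deterministic and inspectable. This is illustrative Python, not contract code, and the 10% infrastructure cut and pro-rata-by-hours scheme are invented assumptions for the example:

```python
from decimal import Decimal, ROUND_DOWN

# A deterministic revenue-split rule of the kind a collective might
# encode in a smart contract: anyone can re-run it and verify every
# payout. The 10% shared-infrastructure cut is an invented example.
INFRA_SHARE = Decimal("0.10")  # funds the collective's shared AI tools

def split_revenue(total: Decimal, hours: dict[str, Decimal]) -> dict[str, Decimal]:
    """Split `total` pro rata by hours worked, after the infra cut."""
    pool = total * (1 - INFRA_SHARE)
    worked = sum(hours.values())
    cents = Decimal("0.01")
    return {
        member: (pool * h / worked).quantize(cents, rounding=ROUND_DOWN)
        for member, h in hours.items()
    }
```

Because the split is a pure function of the ledger, any member can re-run it and confirm their payout — the transparency comes from the rule being public, not from any particular technology.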
This isn’t hypothetical. The building blocks exist: DAOs for governance, decentralized identity for portable reputation, AI agents for operational leverage, open-source infrastructure for independence. What’s missing is assembly — putting the pieces together in ways that non-technical workers can use.
The key insight is that communities should be organized around shared capability, not shared platforms. A platform owns the relationship between worker and client. A collective owns its own infrastructure. When the tools are open-source and self-hosted, the collective can’t be enshittified — there’s no platform operator to degrade the experience in pursuit of shareholder returns.
4. Advocacy and Policy: Making the Rules Match the Reality
Technology doesn’t operate in a policy vacuum. The rules that govern labor, ownership, intellectual property, and taxation were designed for an industrial economy. They assume that work happens inside corporations, that intellectual property is produced by employees, and that income flows through wages. AI breaks all of these assumptions.
Three policy areas matter most:
Data rights. Workers who use AI tools generate valuable data — interaction patterns, domain expertise encoded in prompts, workflow optimizations. Under current law, if you use your employer’s AI tools, your employer owns that data. If you use a platform’s AI tools, the platform owns it (or at least has broad license to it). Workers need legal frameworks that recognize data generated through their labor as their property. The EU’s data portability provisions in GDPR are a starting point, but they’re designed for consumers, not workers.
Portable benefits. The American benefits system is an artifact of World War II wage controls — employers provided health insurance because they couldn’t raise wages. This historical accident has yoked benefits to employment, making independent work financially perilous. Decoupling benefits from employment — through universal healthcare, portable retirement accounts, and cooperative insurance pools — is a precondition for worker independence. You can’t democratize AI capabilities if workers can’t afford to leave their employers without losing healthcare.
Algorithmic transparency. When an AI system makes decisions that affect workers — hiring, performance evaluation, task allocation, compensation — workers have a right to understand how those decisions are made. This isn’t about opening the black box of neural networks. It’s about basic accountability: if an algorithm determines your pay, you should be able to audit the algorithm. The EU AI Act moves in this direction. The US is nowhere close.
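As a concrete illustration of what “audit the algorithm” could mean, here is a hypothetical pay rule written so that every factor is visible and the full breakdown is returned alongside the result. The factors and weights are invented for the example; the point is the shape, not the numbers:

```python
# An auditable pay rule: every factor and weight is declared up front,
# and the function returns the full breakdown with the result, so a
# worker can check exactly how the number was produced.
PAY_RULE = {
    "base_rate": 22.00,           # dollars per hour (invented)
    "peak_hours_bonus": 1.15,     # multiplier during peak demand
    "tenure_bonus_per_year": 0.50,
}

def hourly_pay(peak: bool, tenure_years: int) -> dict:
    rate = PAY_RULE["base_rate"] + tenure_years * PAY_RULE["tenure_bonus_per_year"]
    if peak:
        rate *= PAY_RULE["peak_hours_bonus"]
    return {
        "rate": round(rate, 2),
        "rule": PAY_RULE,  # the rule ships with the decision
        "inputs": {"peak": peak, "tenure_years": tenure_years},
    }
```

Contrast this with an opaque demand-pricing model: the worker sees only the output. Accountability starts with decisions that carry their own explanation.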
Advocacy here isn’t optional. It’s the difference between a future where AI empowers workers and one where it simply creates more efficient systems of exploitation. As I explored in Post 3, technology’s distributional effects are determined by institutional choices, not by the technology itself. The institutional choices are made through policy. Policy is shaped by advocacy. This is why I’m involved with organized labor — not because unions are perfect, but because collective advocacy is the mechanism through which workers influence the rules.
5. Sustainable Business Models: Revenue Without Extraction
The venture capital model of technology development is incompatible with worker empowerment. VC requires exponential returns, which requires either monopolistic market capture or the kind of platform dynamics that extract value from users. Every VC-funded platform follows the same trajectory: subsidize adoption, achieve lock-in, monetize the captive user base. It’s a colonization pattern with better branding.
Worker-empowering AI needs different economic models:
Cooperative ownership. AI infrastructure owned and governed by the workers who use it. Not a co-op in the feel-good nonprofit sense — a co-op in the Mondragon sense: a commercially competitive enterprise where workers are owners and governance is democratic. AI makes this more viable than ever because the core infrastructure (models, tooling, compute) is increasingly open-source. You don’t need VC to fund a billion-dollar model training run when capable models are freely available.
Service models built on open-source. Charge for expertise, customization, and support — not for access to the technology itself. This is the Red Hat model applied to AI: the code is free, the knowledge to deploy it effectively is valuable. I’m experimenting with this myself — deploying personalized AI agents for clients using open-source infrastructure, charging for the setup, configuration, and ongoing support rather than licensing proprietary technology.
Community-funded development. Open Collective, GitHub Sponsors, Patreon — these platforms enable direct funding of open-source development by the communities that benefit from it. It’s not a replacement for large-scale investment, but it’s sufficient for the kind of infrastructure that individual workers need: CLI tools, integration libraries, deployment scripts, documentation. The less glamorous the tool, the more likely it is to be funded by the people who actually use it.
The point isn’t to reject profit. It’s to reject the specific model of profit that requires extracting value from the people the technology is supposed to serve. Profitable and extractive are not synonyms. Sustainable business models exist — they’re just not the ones that get TechCrunch headlines.
The Objections
I can hear them already, so let me address the three most serious.
“This is naive. Capital always wins.” Maybe. The historical record is certainly on capital’s side. But the historical record also shows that the relationship between capital and labor is renegotiated after every major technological revolution — sometimes violently, sometimes through institutional reform. The post-war social contract didn’t emerge from capitalist benevolence. It emerged from organized labor having enough power to demand a share. AI shifts the balance of power in ways that are genuinely novel. For the first time, the means of production in the knowledge economy can be owned by individuals at consumer hardware prices. That doesn’t guarantee a good outcome, but it changes the game.
“Most workers can’t or won’t learn to use these tools.” This was said about personal computers. It was said about the internet. It was said about smartphones. Each time, the tools got easier to use, the learning curve flattened, and adoption became ubiquitous within a decade. AI is following the same trajectory. The interface layer between people and AI capabilities is improving rapidly. The person who couldn’t use a command line in 2024 can talk to an AI in natural language in 2026. The access problem is real but temporary. The structural problem — who controls the infrastructure — is permanent.
“You’re just describing freelancing with extra steps.” No. Freelancing, as currently practiced, is atomized labor with no collective bargaining power, no shared infrastructure, and no portable reputation. What I’m describing is organized independent work — collectives with shared AI capabilities, governance structures, and economic models that capture the value of collaboration. The difference between a freelancer on Upwork and a member of an AI-powered worker collective is the same as the difference between a day laborer standing outside Home Depot and a member of a trade union. Both are selling their labor. Only one has structural power.
The Uncomfortable Part
Here’s the thing about manifestos: they’re easy to agree with in the abstract and terrifying in the specific.
When I say “eliminate the need for traditional corporate structures that do not serve workers’ interests,” I mean it. I mean that the default path for a talented knowledge worker should not be “get hired by a company, trade your best years for a salary that’s a fraction of the value you create, and hope you save enough to retire before your skills become obsolete.” I mean that the technology exists — today, not in some theoretical future — for workers to collectively own the means of their economic production. I mean that the only thing standing between the current arrangement and a better one is organization, tooling, and will.
That’s uncomfortable because it implies action. Not retweeting. Not agreeing in principle. Action: building the tools, joining the collectives, learning the skills, funding the infrastructure, doing the advocacy. The gap between “this sounds right” and “I’m going to do something about it” is where most manifestos go to die.
I don’t have a solution for that gap. I can’t motivate anyone. What I can do is build the thing and show that it works. An AI agent running in my house, managing real work, compounding over time. A blog publishing substantial analysis weekly. A consulting practice deploying personal agents for clients. An open-source project that anyone can fork. These aren’t revolutionary acts. They’re boring, incremental infrastructure decisions that happen to point in a different direction than the prevailing current.
What Comes Next
This manifesto is post ten of twelve. Two remain: The New Renaissance, which surveys the open-source AI movement that makes all of this technically possible, and Finding the Others, which is about exactly what it sounds like — connecting the people who are already building at the edges.
But manifestos don’t culminate in blog posts. They culminate in work.
Here’s what I’m actually building:
- A personal AI agent framework that any technically inclined person can deploy and customize
- A consulting practice that helps individuals and small teams set up their own AI infrastructure
- Content — this series, this blog, these ideas — shared freely because the network effects of knowledge are more valuable than paywalls
- Connections — finding the builders, the organizers, the practitioners who are doing this work in their own contexts and creating the conditions for collaboration
The MTP that started this whole project still stands: democratize the capabilities of AI, enabling workers to independently orchestrate their livelihoods and effectively eliminate the need for traditional corporate structures that do not serve their interests.
It’s audacious. It’s supposed to be. The purpose of an MTP is to be so bold that the prospect of success is terrifying.
I’m not terrified. I’m building.
This is part 10 of a 12-part series. Previously: “AI for Workers” — how individuals are using AI to replace employer leverage. Next: “The New Renaissance” — the open-source AI movement as the printing press of our moment.
If you’re building worker-owned AI infrastructure, organizing independent collectives, or just figuring out how to use these tools to take control of your own work — I want to hear from you. Find me on Bluesky.