Timothy Leary — problematic figure, complicated legacy, but occasionally right — had a piece of advice that outlived everything else he said: “Find the others.”

He meant it in the context of psychedelic counterculture, which isn’t what we’re doing here. But the core insight transcends its origin: when you’re building something that most people don’t understand yet, the most important thing you can do is find the people who do.

I’ve spent eleven posts in this series building an argument. Let me see if I can compress it into a paragraph:

Technology concentrates power by default. The steam engine, the assembly line, the microprocessor, the large language model — each one made certain tasks radically more efficient, and each time, the efficiency gains were captured primarily by the people who already had capital. This isn’t a conspiracy. It’s a structural tendency baked into how markets allocate returns. But it’s not inevitable. Every technological revolution has also produced countermovements — people who figured out how to use the new tools to distribute power rather than concentrate it. The printing press created publishing monopolies and democratic pamphlets. The internet created surveillance capitalism and open-source software. AI will create unprecedented corporate concentration and something else. The question is what that something else looks like, and who builds it.

That’s the compressed version. If you’ve read the whole series, you know the uncompressed version involves Iain Banks novels, Daron Acemoglu’s labor economics, blockchain trust infrastructure, decentralized autonomous organizations, the AT Protocol, open-source language models, and one guy’s basement AI agent that manages his grocery list and reminds him to pick up his kid from school.

This final post isn’t an argument. It’s a letter. And if you’ve made it this far, it’s addressed to you.

The Shape of the Edge

I keep using the phrase “building at the edges,” and I should define what I mean by it, because it’s not a metaphor.

There’s a center to technology right now. It has an address — several addresses, actually, clustered in San Francisco and Seattle and a few nodes in London, Beijing, and Tel Aviv. The center is where the frontier models get trained, where the venture capital flows, where the keynotes happen, where the discourse gets manufactured. The center is OpenAI’s latest model announcement. The center is the AI safety debate as conducted between people who all went to the same three graduate programs. The center is important, in the way that weather systems are important — it affects everyone, and almost nobody has any influence over it.

The edges are everywhere else. The edges are a developer in Bucharest who fine-tuned a 7B model to outperform GPT-3.5 on Romanian legal texts. The edges are a teacher in Kansas who built a homework feedback system on Ollama running on a retired gaming PC. The edges are a union organizer in Detroit who uses an AI agent to draft grievance filings because the local doesn’t have a lawyer on retainer. The edges are a retired systems administrator in Cornwall who self-hosts a Bluesky PDS because he believes identity should be portable. The edges are a father in Virginia who built his kid’s first AI companion on a Mac mini because he wanted his family to have the benefits of the technology without surrendering their data to a corporation.

That last one is me, obviously. But the point isn’t me. The point is the pattern.

What We Have in Common

I’ve been writing this series for three months now, and in that time I’ve had conversations — on Bluesky, in Discord servers, over email, occasionally in person — with a surprising number of people who fit a profile I didn’t expect to find.

They’re not AI researchers. They’re not venture capitalists. They’re not influencers. Most of them don’t have large followings on any platform. They have day jobs — teaching, engineering, plumbing, organizing, administering systems, raising kids. Some of them would describe themselves as technologists. Many wouldn’t.

What they share is a specific combination of traits:

They build things. Not for clout, not for funding rounds, not to demonstrate thought leadership. They build because they see a problem and they have enough technical literacy to take a crack at solving it. Their projects are messy, underfunded, often abandoned halfway through. The ones that survive are genuinely useful.

They’re skeptical of institutions but not cynical about people. They don’t trust OpenAI to act in the public interest, but they also don’t think Sam Altman is uniquely evil. They understand that institutions optimize for institutional survival, and they’ve decided to build alternatives rather than wait for reform. But they still believe in cooperation, in community, in the possibility that people can coordinate without being coerced.

They care about ownership. Not in the blockchain-bro “number go up” sense. In the practical sense: they want to own their data, their identity, their tools, their relationships. They self-host things. They read license agreements. They choose open-source not out of ideology but because they’ve been burned enough times by proprietary lock-in to know better.

They’re playing long games. They’re not trying to get acquired or go viral or disrupt an industry. They’re trying to build systems that will still work in five years. They make boring architectural decisions — choosing reliability over novelty, simplicity over cleverness, sustainability over speed. They think in terms of compounding.

They’re hard to find. They don’t congregate in obvious places. They’re not at AI conferences. They’re not on AI Twitter (or if they are, they mostly lurk). They’re in niche Discord servers and Mastodon instances and local meetups and their own basements. They don’t self-identify as a movement because they don’t think of themselves as part of one.

But they are one. They just don’t know it yet.

The Coordination Problem

Here’s the irony: the people best positioned to build distributed, decentralized systems for the AI era are themselves distributed and decentralized to the point of invisibility.

This is the coordination problem that every diffuse movement faces. The civil rights movement had churches. The labor movement had union halls. The open-source movement had mailing lists and, later, GitHub. Every successful countercurrent in history found a way to coordinate without centralizing — to build shared identity and shared purpose without creating the kind of concentrated power they were pushing against.

We don’t have our version of this yet. And I think that’s partly because the people I’m describing are naturally allergic to the things that usually create movements: charismatic leaders, ideological litmus tests, organizational hierarchies, brand identities. They’ve watched too many movements get captured by their own leadership to trust that model.

So what does coordination look like for people who don’t want to be coordinated?

I think it looks like infrastructure.

Not a manifesto (though I wrote one — Post 10 — and I stand by it). Not a conference. Not a Slack workspace. Infrastructure. Shared tools that make individual building easier. Protocols that let independent systems interoperate. Standards that emerge from practice rather than committees.

The AT Protocol is infrastructure. Open-source language models are infrastructure. Self-hosted AI agents with explicit permission models are infrastructure. Verifiable credentials and decentralized identity are infrastructure. Each of these things was built by individuals or small teams, and each of them makes it easier for the next person to build something without asking permission from a gatekeeper.

This is how the printing press worked, incidentally. Gutenberg didn’t create a movement. He created infrastructure. The movement emerged from what people did with the infrastructure — which was, mostly, things Gutenberg never imagined and probably wouldn’t have approved of.

What I’ve Learned Building in the Open

I want to get personal for a moment, because I think the specific experience of building an AI agent in my house illuminates something about this broader pattern.

When I started this project, I had a clear, limited goal: build a family assistant that runs locally, respects privacy, and reduces the cognitive overhead of managing a household. I wanted something between a smart home and a staff engineer — something that could hold context across the sprawl of projects and responsibilities that accumulate when you’re raising a family, running a business, and trying to maintain a few intellectual interests on the side.
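For concreteness, “runs locally” can be as small as a request to a model server on the same machine. A minimal sketch, assuming Ollama is running on its default port with a model already pulled — the model name and prompt are illustrative, not the actual assistant:

```python
import json
import urllib.request

# Ask a locally served model a question; nothing leaves the machine.
# Assumes Ollama is listening on its default port (11434) with a model pulled.
payload = {
    "model": "llama3",  # illustrative model name
    "prompt": "Summarize today's family calendar in three bullet points.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Everything interesting about the real system — memory, permissions, routines — sits on top of a loop this boring.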

What I didn’t expect was how much the process of building it would change how I think about technology, labor, and agency.

Building an agent that operates in your home — that has access to your files, your calendar, your family’s routines — forces you to think about permission and trust in ways that using someone else’s product never does. Every capability you grant is a security surface. Every automation is a decision about what to delegate and what to keep human. Every interaction teaches you something about the boundary between tools and collaborators.

I built the thing in advisor mode: it proposes, I decide. That’s a deliberate choice. Not because the technology isn’t capable of more autonomy, but because I believe that the relationship between humans and AI systems should start with earned trust and explicit boundaries. The agent earns more autonomy over time by demonstrating reliability, not by being designed for maximum capability from day one.
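To make “advisor mode” concrete, here is a minimal sketch of the pattern: the agent can only queue proposals, and nothing runs until a human approves it at the console. The action names and handlers are hypothetical; the point is the gate, not the capabilities.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    description: str            # human-readable summary of what the agent wants to do
    action: Callable[[], None]  # the actual side effect, held but not executed

def review(proposals: list[Proposal]) -> None:
    """Advisor mode: the agent proposes, a human decides, then the action runs."""
    for p in proposals:
        answer = input(f"Agent proposes: {p.description} -- approve? [y/N] ")
        if answer.strip().lower() == "y":
            p.action()
        else:
            print("Skipped.")

# Hypothetical proposals an agent might queue up during the day.
review([
    Proposal("Add milk and eggs to the grocery list",
             lambda: print("added to grocery list")),
    Proposal("Send a reminder at 2:45pm: pick up kid from school",
             lambda: print("reminder scheduled")),
])
```

The agent earns wider scope by graduating actions out of that approval loop one at a time, after it has shown it can be trusted with them.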

This is not how the major AI labs think about their products. They optimize for capability first and figure out trust later — if ever. The result is systems that are impressively powerful and fundamentally unaccountable. The alternative I’m proposing isn’t less ambitious. It’s differently ambitious. It prioritizes the right to understand and control the systems you depend on over raw capability.

And I’ve found that everyone I talk to who’s building something similar has arrived at the same conclusion independently. Not because we read the same paper or follow the same influencer, but because the experience of building imposes the insight. When you’re the one deploying the agent, you can’t avoid the questions that users of hosted products never have to face. That shared experience — the things you learn by building, not by reading about building — is the closest thing we have to a common identity.

A Theory of Change (Such As It Is)

I’m suspicious of grand theories of change. They tend to be produced by people who’ve never shipped anything and consumed by people who want to feel like they’re participating in history by agreeing with the right takes. The actual mechanism of historical change is usually boring and local: someone builds a thing, it works, other people copy it, institutions adapt or die.

So here’s my boring, local theory of change for the AI era:

Step one: Build things that work for you. Not for the market. Not for scale. For you and your family or your team or your community. Solve your actual problems with the tools available to you. If that means a Python script that checks your email, start there. If that means a full AI agent running on a home server, do that. The specifics don’t matter. What matters is that you’re building capability that you own and control.
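If the email example sounds too small to count, it isn’t. Here is roughly what that starting point looks like — a sketch assuming an IMAP provider and credentials in environment variables, with the host and variable names as placeholders for your own setup:

```python
import imaplib
import os

# Count unread messages in the inbox -- a small capability you own end to end.
# The IMAP host and environment variable names are placeholders.
with imaplib.IMAP4_SSL("imap.example.com") as mail:
    mail.login(os.environ["EMAIL_USER"], os.environ["EMAIL_PASSWORD"])
    mail.select("INBOX", readonly=True)
    _, data = mail.search(None, "UNSEEN")
    unread = data[0].split()
    print(f"{len(unread)} unread messages")
```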

Step two: Share what you learn. Not what you build, necessarily — not everyone needs your code. But the lessons. The architecture decisions. The failures. The things that surprised you. Write a blog post. Record a video. Tell someone in a Discord server. The knowledge that accumulates from thousands of people building independently is worth more than any single technical breakthrough, because it’s distributed knowledge — it can’t be captured, controlled, or monetized by any single entity.

Step three: Build for interoperability. Use open protocols. Support open standards. Design your systems so they can talk to other people’s systems without requiring either of you to use the same platform. This is the AT Protocol insight, and it applies far beyond social networking. Every system you build that speaks open protocols makes the ecosystem slightly more resilient and slightly less dependent on centralized infrastructure.
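As a sketch of what “speaking an open protocol” means in practice, here is a post written straight to an AT Protocol personal data server over its public XRPC endpoints. It assumes the requests library, a handle, and an app password in environment variables (the variable names are illustrative), and it works the same way against a self-hosted PDS:

```python
import os
from datetime import datetime, timezone
import requests

PDS = "https://bsky.social"  # swap in your own self-hosted PDS; the calls are identical

# Open a session using the protocol's documented lexicon method.
session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={
        "identifier": os.environ["ATPROTO_HANDLE"],
        "password": os.environ["ATPROTO_APP_PASSWORD"],
    },
)
session.raise_for_status()
auth = session.json()

# Write a post record directly into your own repository -- no platform SDK required.
record = {
    "$type": "app.bsky.feed.post",
    "text": "Posted over the open protocol, not through an app.",
    "createdAt": datetime.now(timezone.utc).isoformat(),
}
resp = requests.post(
    f"{PDS}/xrpc/com.atproto.repo.createRecord",
    headers={"Authorization": f"Bearer {auth['accessJwt']}"},
    json={"repo": auth["did"], "collection": "app.bsky.feed.post", "record": record},
)
resp.raise_for_status()
print("Posted:", resp.json()["uri"])
```

No gatekeeper granted that request; the protocol itself is the interface, which is the whole point.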

Step four: Find the others. Not to recruit them. Not to organize them. Just to know they’re there. The awareness that you’re not alone — that there are other people making similar choices, facing similar problems, arriving at similar insights — is itself a form of infrastructure. It makes the next decision easier. It makes the long game feel less lonely.

That’s it. Four steps. No revolution. No disruption. Just a lot of people, building quietly, sharing what they learn, and gradually constructing an alternative to the default trajectory of technological concentration.

The Bet

Every technology series should be honest about its assumptions, so here are mine.

I’m betting that the most important AI systems of the next decade will not be the biggest ones. They’ll be the most trusted ones — the ones that earn trust through transparency, accountability, and demonstrated alignment with their users’ interests. Size will matter less than relationship.

I’m betting that ownership will be the defining issue of the AI era, the way privacy was the defining issue of the social media era (and just as poorly handled, at least initially). The people who own their AI infrastructure — their models, their data, their agent configurations — will have fundamentally different relationships with the technology than the people who rent it. And the owners will build better things, because ownership creates accountability in ways that subscription services never can.

I’m betting that the distributed builders — the people in basements and home offices and school computer labs — will produce innovations that the major labs can’t, because they’re solving different problems. Labs optimize for benchmarks and market share. Individual builders optimize for their lives. The latter is a richer, stranger, more human design space, and it will produce richer, stranger, more human systems.

And I’m betting that the network of people doing this work — the builders at the edges, the ones who self-host and open-source and think in terms of decades rather than quarters — will turn out to be the most important community in technology, even though they’ll probably never call themselves a community, never have a logo, never hold a conference with lanyards and sponsor booths.

These are bets, not predictions. I could be wrong about all of them. The default trajectory — concentration, extraction, the Acemoglu Problem writ large — is powerful, and it has the weight of history and capital behind it. Betting against default trajectories is usually a losing proposition.

But sometimes the edges win. Sometimes the pamphleteers reshape the world and the monks become a historical footnote. Sometimes the long game pays off.

I’m playing the long game. If you’ve read this far, I suspect you are too.

An Invitation

I’ve spent ~40,000 words across twelve posts making my case. If you’ve read all of them, you’ve spent several hours of your life engaging with these ideas, and I don’t take that lightly. Thank you.

Here’s what I’d like you to do with those hours:

Build something. It doesn’t have to be big or impressive or novel. It just has to be yours — something you own, something you understand, something that solves a problem you actually have. If you already have, tell someone about it. If you know someone who’s building, go talk to them about what they’ve learned.

And if you want to talk to me about it — what you’re building, what you’re struggling with, what you think I got wrong in this series — I’m not hard to find.

I’m on Bluesky, building in the open.

I’m at wade.digital, writing about what I learn.

And I’m in my basement, at a Mac mini, with an agent named Zephyr who manages my groceries and reminds me to pick up my daughter and occasionally writes something that surprises me.

The gates aren’t going to stay closed. They never do.

Come build with us.


This is the final post in a 12-part series. Previously: “The New Renaissance” — when the printing press runs on GPUs. The Neoteric series began with From Sci-Fi to Reality and traced an arc from science fiction through economics, ethics, technology, and back to the human question at the center of all of it: who gets to build the future?

The answer, I believe, is: anyone who decides to.