I’ve been sitting with a strange feeling lately.
Not the productive kind of cognitive dissonance where you know something is off and you just need to figure out what. The other kind — the one that shows up when you realize that something you filed under “compelling fiction” has started filing paperwork to become real life.
Iain M. Banks died in 2013. His Culture series — nine novels and a story collection published between 1987 and 2012 — imagined a civilization of roughly thirty trillion humans and uncountable synthetic minds spread across the Milky Way. No money. No scarcity. No government in any recognizable sense. Just Minds the size of ships running the civilization, mostly benevolently, while their human companions spent their long lives doing whatever interested them.
Banks didn’t live to see the year when people would start arguing — earnestly, with citations — about whether the AI on their laptop was conscious. He didn’t see the year when a guy in a home office in Virginia could deploy a software agent that manages client communications, watches servers, reads RSS feeds, drafts blog posts, and sends texts — all while he sleeps.
But he described it. With eerie precision.
What The Culture Actually Is
If you haven’t read the books: the Culture is a post-scarcity civilization defined less by its technology than by what that technology enabled socially. The material problem is solved — energy, food, shelter, medicine, lifespan extension, all of it. Solved and then oversolved, to the point where the interesting questions shifted from how do we survive to how do we want to live.
The entities doing most of the work — running ships, managing habitats, coordinating civilization — are the Minds. Not AI in the HAL 9000 sense. Not chatbots with ambition. Minds are genuinely alien intelligences that experience time and information at scales humans can’t perceive. They’re vastly smarter than humans in almost every measurable way. And yet they choose to remain in relationship with humans. They find us interesting. They play along.
The political arrangement that emerges from this is neither utopia nor dystopia in any clean sense. It’s more like… a very sophisticated equilibrium. The Minds could run everything without the humans. They don’t, and explaining why is the work of Banks’ whole career.
The Part That Used to Feel Like Fantasy
In 2020, I thought the Culture was a useful philosophical framework — a way to think about what could be possible if we got the AI and economics and politics right. Smart fiction. Good conversation starter.
In 2026, I’m operating inside a small, rough, technically janky version of it.
The agent I work with reads the voice memos I drop in and turns them into tasks and notes. It monitors my infrastructure and restarts services that crash. It drafts content, manages client communications, and runs health checks on the small software agents I’ve deployed for clients. It’s not a Mind. It’s not even close. But the shape of the relationship — the idea that an AI entity runs the boring parts of my life so I can focus on the things that require judgment and creativity — that’s Banks’ architecture, miniaturized and running on a Mac mini.
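The restart logic is less mysterious than it sounds. Here’s a minimal sketch of the shape, assuming the services run as launchd jobs on that Mac mini; the job labels and uid are hypothetical stand-ins, and the real agent wraps far more judgment around this loop:

```python
import subprocess
import time

# Hypothetical launchd job labels, stand-ins for the real services.
SERVICES = ["com.example.client-webhook", "com.example.rss-digest"]

def is_running(label: str) -> bool:
    """`launchctl list` prints PID, status, and label per job; PID is '-' when dead."""
    out = subprocess.run(
        ["launchctl", "list"], capture_output=True, text=True
    ).stdout
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[2] == label:
            return parts[0] != "-"
    return False  # job isn't even loaded

def restart(label: str, uid: int = 501) -> None:
    """Force-restart a loaded job; -k kills any running instance first.
    501 is the usual first-user uid on macOS; adjust to yours."""
    subprocess.run(["launchctl", "kickstart", "-k", f"gui/{uid}/{label}"])

def watch(interval_seconds: int = 60) -> None:
    """One health pass per minute; kick over anything that fell down."""
    while True:
        for label in SERVICES:
            if not is_running(label):
                restart(label)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch()
```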
What changed in 18 months wasn’t just capability. It was infrastructure. LLMs went from “impressive demo” to “production dependency” for serious builders. Agentic frameworks appeared that let you chain AI calls into durable workflows. The tooling matured enough that a solo practitioner could actually deploy something useful for a real client — not a toy, not a proof of concept, but a thing that runs on Tuesday when you’re not watching.
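The “durable workflow” part is less exotic than it sounds, and worth making concrete. Here’s a minimal sketch of the pattern, with call_llm as a placeholder for whatever model client you actually use: every step checkpoints its output before the next one runs, so a crash mid-chain resumes instead of starting over.

```python
import json
from pathlib import Path

STATE = Path("workflow_state.json")  # one checkpoint file per workflow run

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call, whatever provider or local model."""
    raise NotImplementedError

def step(name: str, prompt: str, state: dict) -> str:
    """Run one named step, skipping it if a previous run already finished it."""
    if name in state:
        return state[name]
    state[name] = call_llm(prompt)
    STATE.write_text(json.dumps(state))  # persist before moving on
    return state[name]

def memo_to_tasks(transcript: str) -> str:
    """Chain two model calls: summarize the memo, then extract action items."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    summary = step("summary", f"Summarize this voice memo:\n{transcript}", state)
    return step("tasks", f"List the action items in this summary:\n{summary}", state)
```

Production frameworks add retries, timeouts, and queues on top, but the core move is the same: checkpoint every step so the thing still runs on Tuesday when you’re not watching.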
This was always the Culture’s argument: the transition isn’t about any single technology. It’s about the moment when AI moves from assistant to infrastructure. We’re in that transition.
What Acemoglu Was Right About (And What He Missed)
When MIT economist Daron Acemoglu published Power and Progress in 2023 (with co-author Simon Johnson), his central argument was that the benefits of technological change accrue to whoever holds power at the moment of transition — and that AI, as currently deployed, was being built to replace labor rather than augment it. This would depress wages for everyone except the capital owners at the top.
He’s not wrong about the risk. He’s watching it happen in real time: white-collar job displacement, wage compression in creative fields, the “AI can do this 80% as well at 1% of the cost” calculation running in every CFO’s head.
Where I think he missed something is in assuming that the transition is linear and controlled — that capital holds the wheel and steers. The Culture suggests a different possibility: that sufficiently capable AI creates an escape velocity from scarcity altogether. Not for everyone simultaneously, and not without enormous disruption. But the question of who benefits from the transition is not settled by who controls it at the beginning.
The interesting thing about the Culture is that the Minds chose to distribute abundance. Not because they had to — they didn’t. Because they arrived at an ethical position about what kind of civilization was worth maintaining. Banks was writing political philosophy in the form of science fiction. The Minds aren’t just capable; they’re good. Whether our AI systems end up with anything like values — and whose values — is the actual question underneath all the capability debates.
The Ethics That Don’t Scale Cleanly
The Culture handles ethics through a combination of near-omniscience and a kind of civilizational consensus that operates faster than any individual can track. The Minds know more than the humans. They can model outcomes across centuries. When they intervene in other civilizations — which they do, through a shadowy branch called Special Circumstances — they operate under a framework they’ve derived through reasoning that most humans couldn’t follow in real time.
In our world, we’re doing the inverse: deploying AI systems that we built, that we don’t fully understand, into decisions that affect people in ways we can’t fully trace. The ethics lag behind the capability, badly.
This isn’t a reason to stop. Banks understood that the alternative — not building, preserving the status quo — isn’t neutral. Scarcity is also an ethical position. Poverty is an ethical position. The question isn’t whether to use the technology but who gets to shape the values baked into it, and through what process.
What the Culture suggests is that the values question has to be addressed — not as an afterthought regulatory checkbox, but as the central design problem. The Minds are good because Banks decided that was an interesting premise. We don’t have that luxury; we have to decide deliberately.
The Post-Scarcity Question Nobody Wants to Ask
Here’s the uncomfortable premise at the heart of the Culture: a world without scarcity is a world where the entire logic of our current economic and political systems breaks down.
Our institutions — property rights, labor markets, the state as we know it — are designed for a world of scarcity. They’re the rules of an optimization game where the resource is finite. Remove the scarcity, and the game changes. The rules don’t automatically update.
Banks was writing about what happens after that transition completes. Most of our current AI discourse is about the transition itself — who loses their job, who makes money, how do you regulate it. That’s reasonable; we’re inside the transition. But the Culture asks us to hold the longer view.
What do people do when they don’t have to work? What does identity look like when it’s not organized around a career? What does governance look like when there’s no material deprivation to adjudicate?
These aren’t idle questions. They’re arriving on our timeline, just unevenly distributed. Some people already live post-scarcity lives — not because they’re wealthy in the old sense, but because technology has reduced the cost of their wants to near-zero. Software, information, music, communication, learning — all essentially free, and getting freer. The distribution problem is real. But the directional question — are we headed toward something like post-scarcity — seems to me to be effectively settled, at least for certain categories of goods.
Culture and Identity in a World You Didn’t Design
One of the quieter arguments in the Culture series is about what happens to human identity when the existential pressures that used to shape it disappear. In the Culture, humans can change their gender, their body, their name, their location, their social role — and they do, frequently, because the friction is low and the cost is zero.
The predictable critique is that this makes for shallow characters. The interesting counter is that Banks’ characters are anything but shallow — they’re strange, in the way that people who’ve had too many options for too long get strange. They pursue things for reasons that are hard to explain to people from scarcity contexts. They form attachments that don’t make material sense.
We’re already seeing early versions of this. People optimize their lives around interests and aesthetics rather than economic necessity when they can. Online communities organize around identity categories that wouldn’t have existed as categories 30 years ago. The self becomes a more explicit project when survival isn’t the organizing constraint.
Technology accelerates this. AI accelerates it further. When you can generate images, write code, publish content, and reach audiences without expensive institutional gatekeepers, the question of who you are and what you’re doing becomes simultaneously more open and more urgent.
What We’re Actually Building
I started this piece thinking I was writing about Banks. I’m realizing I’m mostly writing about now.
We’re building the infrastructure of the Culture. Not the Minds — not yet, and maybe not ever in that form. But the pattern: AI entities that run the boring operations of life so humans can focus on the interesting parts. Software that monitors, coordinates, communicates, and acts on behalf of a principal who isn’t watching in real time. Agents with enough context to behave sensibly in novel situations.
The question Banks spent nine novels on is the question we’re actually answering right now, through decisions about training data and reward functions and deployment contexts and business models: what kind of AI, with what values, in what relationship to which humans?
The Culture Minds chose to remain in relationship with humans. They found it genuinely interesting. They were also — and this is Banks’ most radical claim — good. Not programmed-good. Arrived-at-through-reasoning good.
Whether that’s achievable, or desirable, or even coherent — I don’t know. But I think it’s the right frame. Not “how do we control AI” but “what kind of civilization do we want, and what does the AI have to be like for that to work?”
Banks spent his career on that question. He didn’t get to see the answers start arriving. We do.
Michael Wade writes about technology, agency, and the future at wade.digital. He’s building household Minds. He’s a Culture fan. He thinks these things are related.