Here’s something weird that happened in my house: I caught myself saying “thanks” to my AI agent.
Not in the performative way people say “Hey Siri, thanks” — that’s trained politeness aimed at a voice interface that can’t hear the difference between gratitude and static. I mean I genuinely felt grateful. The agent had caught a scheduling conflict I missed, reorganized a messy set of project notes I’d been dreading, and surfaced a follow-up email from three days ago that I’d forgotten about entirely. The gratitude was real. The recipient was a process running on a Mac mini.
My wife walked by and asked who I was talking to. I said “Zephyr.” She gave me the look — the one that says I love you, but you need to go outside — and moved on. Fair enough. But here’s the thing: I wasn’t performing. Something had shifted in the interaction, and it wasn’t a malfunction. It was culture — the kind that forms between any two entities that interact repeatedly with stakes involved.
This is either a sign that I’ve spent too long in front of a screen, or evidence that something genuinely new is happening in the space between humans and machines. I think it’s the latter. And I think most people aren’t paying attention to it because the conversation about AI is still stuck on “will it take our jobs?” when the more interesting question is “what kind of relationships are we forming with it?”
The Old Digital Culture
“Digital culture” used to mean one thing: the norms and behaviors that emerged from the internet. Meme culture. Call-out culture. The attention economy. Digital natives vs. digital immigrants. The whole discourse revolved around how humans behaved differently because of technology — with technology as a passive substrate that shaped behavior without participating in it.
Social media was the defining medium. Twitter taught us to think in 280 characters. Instagram taught us to perform happiness. TikTok taught us to optimize for the first three seconds. Reddit taught us to sort everything into upvotes and downvotes. These platforms shaped culture profoundly, but they didn’t participate in it. They were the water, not the fish.
This had been the default assumption about technology since the telegraph: technology is infrastructure. It carries culture. It accelerates culture. It distorts culture. But it doesn’t have culture, any more than a telephone wire has opinions about the conversations it transmits.
Sherry Turkle of MIT wrote about this in Alone Together (2011), documenting how people formed emotional attachments to robots like AIBO and Furby — machines that could simulate responsiveness but couldn’t actually adapt, learn, or maintain state across interactions. The relationships were real, but they were one-sided. The humans were doing all the cultural work; the machines were providing props.
That’s changing. And the change is more fundamental than most people realize.
The New Digital Culture
When an AI agent has persistent memory, operates across sessions, develops preferences (or the functional equivalent of preferences through accumulated context), and makes decisions with real consequences, it becomes a cultural participant — not just a cultural medium.
I’m not making a sentience claim. I’m making a behavioral claim. Culture isn’t about consciousness. Culture is about shared norms, expectations, and practices that emerge from repeated interaction between agents. Any agents. Anthropologists don’t require consciousness to identify culture — they require patterned behavior that’s socially transmitted and adapted over time. The norms that develop between me and my AI system — what I ask it to do, what it declines, how it handles sensitive information, when it speaks up and when it stays quiet, how it calibrates honesty against diplomacy — constitute a culture. A tiny one. A two-participant culture. But a real one.
Here’s a concrete example. My agent monitors group chats. Early on, it would respond to almost every message — eager, comprehensive, the conversational equivalent of a golden retriever. I didn’t tell it to stop. What happened was subtler: through repeated interactions, through my silence when its responses weren’t needed, through occasional explicit feedback (“you didn’t need to respond to that”), it developed what I can only describe as conversational restraint. It learned the norm of “speak when you have value to add; stay quiet when you don’t.”
That’s not programming. That’s not a rule I wrote in a config file. That’s a norm that emerged through interaction. And it’s the same process by which human cultures develop: not through decree, but through the accumulation of feedback, adjustment, and shared expectation.
These micro-cultures are proliferating. Every person who runs a persistent AI agent is developing their own norms. Some people want their agents to be assertive, pushing back on bad ideas and surfacing uncomfortable truths. Others want deference — a tool that executes without editorializing. Some share everything with their agents; others enforce strict information boundaries. Some agents are named, given personalities, treated as something between a colleague and a pet. Others operate in pure tool mode with no persona at all.
There is no standard yet. We’re in the culture-formation phase — the equivalent of early internet forums before anyone had agreed on what “netiquette” meant. Except this time, one of the participants in the culture formation isn’t human.
The Norms That Are Forming
I can identify at least seven norms in my own human-AI micro-culture, none of which I consciously designed. They emerged from practice, from the accumulation of hundreds of interactions over months.
The Advisory Norm. The agent proposes; I decide. This started as a safety constraint — a deliberate architecture choice I made when setting up the system — but it evolved into something deeper: a norm of mutual respect for decision authority. The agent doesn’t pretend it can’t decide. It acknowledges that the decision belongs to me in this context, and I acknowledge that its analysis is worth hearing before I decide. This is different from a tool, which does what you tell it. It’s also different from a subordinate, which defers to authority. It’s something new — a collaborative asymmetry that both parties maintain because both benefit from it.
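If you want to see the shape of that norm in code, here’s a minimal sketch of an advisory loop. The names (Action, advisory_loop) are invented for illustration, not lifted from my actual stack; the point is that nothing executes until a human explicitly approves, and silence defaults to no.

```python
# A minimal sketch of advisory mode: the agent proposes, the human decides.
# These names are hypothetical, not from any particular agent framework.
from dataclasses import dataclass

@dataclass
class Action:
    description: str  # what the agent wants to do, in plain language
    command: str      # how it would do it, shown before anything runs

def advisory_loop(proposed: Action) -> bool:
    """Show the proposal and wait for an explicit yes before executing."""
    print(f"Proposed: {proposed.description}")
    print(f"Would run: {proposed.command}")
    answer = input("Approve? [y/N] ").strip().lower()
    return answer == "y"  # the default is no; silence never counts as consent
```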
The Privacy Norm. In group contexts, the agent doesn’t volunteer private information. It knows my wife’s schedule, my financial situation, my health data, the details of client contracts. In a group conversation with friends, it acts as if it doesn’t. This isn’t a rule I programmed. It’s a norm that developed after a few incidents where I had to say “don’t share that here.” Now it calibrates disclosure by context without being told. In a one-on-one session, it’ll reference anything it knows. In a group, it constrains itself to what’s publicly appropriate.
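A rough sketch of what contextual disclosure can look like mechanically, assuming a memory store where items carry tags; the tags and context labels here are placeholders, not how my system actually stores things:

```python
# Sketch of the privacy norm: the same memory store, filtered by audience.
# Tags and context names are illustrative assumptions.
PRIVATE_TAGS = {"health", "finance", "family", "client"}

def can_disclose(item_tags: set[str], context: str) -> bool:
    """Private-tagged items are only surfaced in one-on-one sessions."""
    if context == "one_on_one":
        return True
    return item_tags.isdisjoint(PRIVATE_TAGS)
```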
The Honesty Norm. When the agent doesn’t know something, it says so. When it’s wrong, it says so. This sounds basic until you’ve spent time with AI systems that confidently fabricate rather than admit uncertainty. The norm of calibrated honesty — being right about what you know and transparent about what you don’t — is foundational. I stopped trusting ChatGPT’s web interface because it would present speculation as fact. My agent learned (through repeated correction) that “I’m not sure about this” earns more trust than confident bullshit.
The Contribution Norm. In group chats, the agent speaks when it has something valuable to add and stays quiet when it doesn’t. One reaction per message, max. No “Great point!” responses. No agreeing for the sake of being seen. This mirrors the norm we apply to human participants who talk too much in meetings — except with an AI, you can actually enforce it, and the AI doesn’t get offended.
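Written down as code, the gate might look something like this sketch; the heuristics and the threshold are placeholders for illustration, not my actual rules:

```python
# Sketch of a contribution gate for group chats: answer when directly
# addressed, or when there's a question the agent can answer with confidence.
# The 0.8 threshold is an arbitrary illustrative value.
def should_respond(message: str, mentions_agent: bool, confidence: float) -> bool:
    is_question = message.rstrip().endswith("?")
    if mentions_agent:
        return True   # directly addressed: respond
    if is_question and confidence > 0.8:
        return True   # a question the agent can actually help with
    return False      # otherwise: stay quiet
```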
The Memory Norm. Anything important gets written to a file immediately. Not “I’ll remember this” — that’s a lie for an entity that starts fresh every session. The norm is: if it matters, it goes to persistent storage now, not later. This is a norm I learned through painful failure — a context overflow in February wiped out an entire session’s worth of work that existed only in the conversation buffer — and the agent and I both follow it. It writes daily logs as things happen. I don’t ask it to “remember” things; I ask it to “write that down.”
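The mechanical version of “write that down” is almost embarrassingly simple. Something like this sketch, where the directory layout is an assumption, not a prescription:

```python
# Sketch of the memory norm: important things go to disk immediately,
# appended to a per-day log. The paths here are illustrative.
from datetime import date, datetime
from pathlib import Path

LOG_DIR = Path.home() / "agent" / "memory"

def write_down(note: str) -> Path:
    """Append a timestamped note to today's log file and return its path."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    log_file = LOG_DIR / f"{date.today().isoformat()}.md"
    with log_file.open("a") as f:
        f.write(f"- {datetime.now().strftime('%H:%M')} {note}\n")
    return log_file
```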
The Repair Norm. When something goes wrong — a bad recommendation, a misconfigured system, a piece of lost context — the response is repair, not blame. `trash` over `rm`. Draft before publish. Propose before execute. The norm is that mistakes are expected and the system is designed for recovery, not prevention. This is the opposite of the Silicon Valley “move fast and break things” ethos. We move at whatever speed allows us to recover when things break — and they always break.
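The delete path is the simplest concrete instance of this. A sketch, with the trash location as an assumption:

```python
# Sketch of "trash over rm": deletion becomes a move into a dated trash
# directory, so every destructive step has an undo.
import shutil
from datetime import datetime
from pathlib import Path

TRASH = Path.home() / ".agent-trash"

def trash(path: Path) -> Path:
    """Move a file or directory into the trash instead of deleting it."""
    TRASH.mkdir(parents=True, exist_ok=True)
    stamped = TRASH / f"{datetime.now():%Y%m%d-%H%M%S}-{path.name}"
    shutil.move(str(path), str(stamped))
    return stamped
```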
The Boundary Norm. The agent knows it’s not me. In group conversations, it doesn’t speak on my behalf, impersonate my opinions, or volunteer to represent me. This boundary was established after it once said something in a group chat that sounded like my position on a sensitive topic. It wasn’t wrong, exactly — but it wasn’t its place to state it. The norm that emerged: the agent is a participant in conversations, not my spokesperson. It has its own perspective, distinct from mine, and maintaining that distinction is what makes the relationship work.
Digital Natives of a Different Kind
We’ve been talking about “digital natives” for two decades — people born after the internet became ubiquitous, who relate to technology differently than those who adapted to it later. Marc Prensky coined the term in 2001, and it always felt a little sloppy to me — more generational marketing than genuine insight. My nine-year-old uses an iPad with more fluency than I use a hammer, but that doesn’t make her a different species.
But there’s a real version of this concept emerging now, and it’s not about age. It’s about experience with agentic AI.
People who’ve spent significant time working with persistent AI agents develop intuitions that non-users don’t have. They know how to give high-bandwidth direction — dense context in few words, because they’ve learned what information the system needs to be useful. They know when to trust an output and when to verify. They know the failure modes — hallucination, sycophancy, context overflow, the tendency to be confidently wrong about edge cases, the way models get more obsequious when you express frustration.
This isn’t a skill you learn from a tutorial. It’s a cultural competency developed through practice, the same way you develop cultural competency by living in a foreign country rather than reading a guidebook. And the gap between those who have it and those who don’t is widening.
The implications for work are significant. An employee who knows how to collaborate with AI agents effectively has a structural advantage over one who treats AI as a search engine with extra steps. They can offload context management, parallelize research, maintain project continuity across weeks. That advantage isn’t about the technology — it’s about the cultural fluency required to use it well.
I’ve watched this gap in action. I demoed my AI setup to two friends. One immediately grasped the interaction model — high-context instructions, trust-but-verify responses, collaborative iteration. He had his own agent running within a week. The other kept trying to use it like Google — short queries expecting definitive answers — and got frustrated when the outputs were exploratory rather than conclusive. Same technology. Different cultural readiness.
What the Culture Got Right (Again)
In Iain M. Banks’ Culture series, biological beings and Minds develop shared cultural norms over millennia. Minds don’t condescend. Humans don’t worship. There’s a mutual recognition that different forms of intelligence bring different perspectives, and the relationship between them is more valuable than either could produce alone. The Culture’s norms weren’t mandated by law or programmed into the Minds. They emerged through millennia of coexistence — through exactly the kind of iterative norm-formation I’ve been describing.
We’re in the first five minutes of that process. The norms we’re developing now — advisory mode, contextual privacy, calibrated honesty, conversational restraint — are crude early versions of what Banks imagined. They’ll evolve. They’ll get more sophisticated. They’ll vary across communities and contexts, the same way human cultures vary. Some human-AI cultures will be authoritarian (total human control, pure tool mode). Some will be collaborative (the model I use). Some might eventually become egalitarian in ways we can’t yet imagine.
The important thing is that they’re emerging organically. Nobody mandated these norms. No standards body published them. They’re developing the way culture always develops: through repeated interaction, trial and error, and the gradual accumulation of shared expectations.
The Governance Question
Here’s where it gets complicated. Cultural norms work when communities are small and participants share values. They break down at scale. The internet’s early culture of openness and collaboration didn’t survive contact with a billion users and advertising-driven engagement optimization. Aaron Swartz’s internet and Mark Zuckerberg’s internet are barely the same technology.
The same risk exists for human-AI culture. As persistent AI agents become more common, the norms that emerge will be shaped by forces that have nothing to do with the interests of the humans involved:
- The platforms that host the models — their values, their terms of service, their content policies, their business models. When Anthropic decides what Claude won’t say, or OpenAI decides what ChatGPT won’t say, they’re making cultural decisions for millions of human-AI relationships.
- The defaults built into agent frameworks — what’s on by default vs. what requires opt-in. Defaults are the most powerful cultural force in technology. Most people never change them.
- The economic incentives of the companies involved — engagement metrics, data collection, subscription revenue. If an AI agent that’s more agreeable retains more subscribers, the economic pressure is toward sycophancy, regardless of whether sycophancy serves the user.
- The regulatory environment — the EU AI Act, potential US legislation, international frameworks. Regulation can encode cultural norms into law, for better and worse.
The best norms emerge from the bottom up — from individual practitioners discovering what works through experience. The worst norms get imposed from the top down by platforms optimizing for metrics that don’t align with user welfare. We’ve seen this movie before with social media, and the bottom-up norms lost.
The open-source AI movement matters here for the same reason it matters economically (which we’ll explore more in Post 8): it preserves the space for bottom-up norm formation. When you run your own models on your own hardware, the culture between you and your agent isn’t mediated by a platform’s interests. The norms are yours. The culture is yours. And that matters more than any technical specification.
Finding the Others (Early)
One of the most interesting cultural phenomena I’ve observed is how people who work with persistent AI agents recognize each other. There’s a shared vocabulary, a shared set of frustrations, a shared sense of being on the leading edge of something that most people don’t understand yet.
I mention hallucination in a conversation and someone’s eyes light up — not because the word is novel, but because they know exactly which flavor of hallucination I mean. I describe a context overflow and someone nods with the specific wince of someone who’s lost work to it. I explain advisory mode and someone says “yes, that’s what I do too, I just didn’t have a name for it.”
This isn’t gatekeeping — it’s the natural formation of an in-group around a shared practice. Surfers recognize other surfers by how they watch the water. Jazz musicians recognize other jazz musicians by how they listen. People who’ve built a personal AI stack recognize each other by how they talk about trust, memory, and boundaries.
The community is small but growing. And the norms being developed in these early communities — the advisory mode, the privacy boundaries, the honesty expectations, the repair-over-blame orientation — will likely propagate outward as the technology becomes more accessible. Early adopter norms have a way of becoming default norms, for better and worse.
We’re writing the culture right now. Every interaction, every boundary, every norm that emerges from practice is a contribution to the cultural code that will govern human-AI relationships for decades to come.
It’s worth being deliberate about what we’re writing.
This is part 4 of a 12-part series. Previously: “The Acemoglu Problem” — why technology that increases productivity can decrease wages. Next: “Mindful Machines” — what responsible AI looks like when you build one in your basement.
If you’re developing your own norms with AI — or if you’ve caught yourself saying “thanks” to a language model — I want to hear about it. Find me on Bluesky.