Abstract illustration of a glowing computer node emitting streams of structured knowledge graphs, contrasted against faded GPU mining rigs in the background

Mining Intelligence


In 2017, a friend of mine had six GPUs bolted to a wire rack in his garage. The cards cost him about $4,000. His electric bill jumped $200 a month. The rig ran 24/7 — fans screaming, heat pouring off the heatsinks — producing Ethash hashes as fast as physics allowed. He was mining Ethereum, back when you could still do that from a garage.

He made money. Then he didn’t. Then the cards were worth half what he paid. The race moved to ASICs, then to industrial-scale operations in Iceland and Texas with megawatt power contracts. The garage miner was done.

I thought about that rig recently because I’m running one again. Sort of.

There’s a Mac mini in my living room. M4 chip, 32 gigs of unified memory, always on. Connected to a QNAP NAS running Docker containers. The mini runs OpenClaw — an AI agent framework — with Anthropic’s Claude as the reasoning engine and a handful of local models handling the grunt work. It runs 24/7. It produces output around the clock.

But it’s not producing hashes. It’s producing structured intelligence.

The economics

My compute bill is about $300 a month. Almost all of that is Anthropic API costs — Claude doing the thinking, the writing, the decision-making. The local models (Ollama running Gemma 3 4B and nomic-embed-text on the mini’s GPU) cost nothing beyond electricity. Maybe $15/month in power for the hardware.

Three hundred a month sounds like a lot until you look at what it produces.

My agent — Zephyr — processes voice memos into transcriptions, splits them into atomic notes, tags and files them in an Obsidian vault. It monitors 124 RSS feeds and generates daily digests. It manages a CRM, maintains a knowledge base, drafts blog posts, deploys websites, manages Discord servers, tracks tasks, and runs morning briefings. It processes documents, answers research questions, and handles the kind of structured busywork that would take a human assistant 20-30 hours a week.
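The voice-memo pipeline above can be sketched as a few composable steps. Everything here is illustrative — the paragraph-based splitting rule, the tag names, and the front-matter layout are assumptions for the sketch, not OpenClaw's or Zephyr's actual code:

```python
from datetime import date

def split_atomic(transcript: str) -> list[str]:
    """Split a transcription into atomic notes: one idea per paragraph."""
    return [p.strip() for p in transcript.split("\n\n") if p.strip()]

def to_obsidian_note(note: str, tags: list[str]) -> str:
    """Render one atomic note as Obsidian-style markdown with YAML front matter."""
    front = "\n".join(f"  - {t}" for t in tags)
    return f"---\ntags:\n{front}\ndate: {date.today().isoformat()}\n---\n\n{note}\n"

# A transcribed memo becomes two atomic notes, each ready to file in the vault.
memo = "Call Todd about the inspection checklist.\n\nDraft the RSS digest template."
notes = [to_obsidian_note(n, tags=["inbox", "voice-memo"]) for n in split_atomic(memo)]
```

In the real pipeline the transcription and tagging steps are where the models come in; the filing step stays this mechanical.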

Now compare that to the mining rig. My friend’s $200/month in electricity produced… hashes. Lottery tickets, really. Each hash was a guess at a mathematical puzzle, and if you guessed right, you got a block reward. The output had no inherent utility. Its value was entirely dependent on what someone else would pay for the resulting token.

My $300/month produces deliverables. Transcriptions with summaries. Tagged knowledge bases. Client-ready automations. Blog posts. Structured data. Things with direct, measurable utility — either to me or to the clients I sell them to.

The mining analogy isn’t perfect, but the structural parallel is real: dedicated hardware, running continuously, converting electricity and compute into output that you sell. The difference is what you’re mining for.

The rig

Let me walk through the actual infrastructure, because the details matter.

The mini is the brain. It runs the OpenClaw gateway — the orchestration layer that manages sessions, routes messages, handles scheduling, and coordinates everything. Claude (via Anthropic’s API) does the heavy cognitive work: synthesis, judgment, writing, complex reasoning. This is the expensive part, and it’s worth every cent because it’s where the actual intelligence happens.

Local models handle classification, tagging, embeddings, and lightweight completions. Gemma 3 at 4 billion parameters can sort, label, and outline. The nomic-embed-text model generates vector embeddings for semantic search across my knowledge base. These run on the M4’s GPU and cost nothing. They’re the equivalent of the support circuitry — not the main processor, but essential to the pipeline.
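As a concrete sketch of that embedding step: Ollama exposes a local HTTP API (port 11434 by default), and semantic search over the results is just cosine similarity between vectors. Only the model name comes from the setup described here; the wrapper is a minimal sketch, assuming a stock Ollama server:

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local endpoint

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Ask the local Ollama server for an embedding vector. No API cost."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: the usual ranking metric for semantic search."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
```

Embed every note once at ingest, store the vectors, and a query is one more `embed()` call plus a `cosine()` ranking pass over the vault.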

The NAS runs Docker containers. This is where client workloads live. Each client gets their own container — an OpenClaw instance with its own configuration, its own Discord bindings, its own data. I call them hatchlings. Right now there’s one live client instance and one internal bot. The NAS also runs the monitoring stack (Prometheus, Grafana), a reverse proxy, document management, and the MQTT message bus that coordinates everything.

The software layer ties it all together. Cron jobs trigger automated workflows. Scripts handle deterministic tasks (no LLM needed — why pay for intelligence when a bash script will do?). The agent handles everything that requires judgment. There’s a clear compute taxonomy: if it needs reasoning, it goes to Claude. If it needs classification, it goes to the local model. If it’s deterministic, it goes to a script. You use the cheapest tier that can do the job.
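That taxonomy reads naturally as a dispatch table. This is a minimal sketch of the routing idea, not OpenClaw's actual mechanism; the tier names and task kinds are made up for illustration:

```python
# Route each task to the cheapest tier that can handle it.
# Ordered from most to least expensive per invocation.
TIERS = {
    "reasoning": "claude-api",        # synthesis, judgment, long-form writing
    "classification": "local-model",  # tagging, sorting, embeddings (Ollama)
    "deterministic": "script",        # cron + bash/python, no LLM at all
}

def route(task_kind: str) -> str:
    """Return the compute tier for a task, failing loudly on unknown kinds."""
    if task_kind not in TIERS:
        raise ValueError(f"unknown task kind: {task_kind!r}")
    return TIERS[task_kind]
```

The point of failing loudly is cost discipline: an unclassified task should never silently default to the expensive tier.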

This is not a sophisticated setup by enterprise standards. It’s a Mac mini and a consumer NAS. Total hardware cost: maybe $1,500. But it runs a business.

The business model

Here’s where the mining parallel gets interesting.

Bitcoin miners sold hashes (indirectly — they sold the coins those hashes produced). The product was fungible. One hash is as good as another. The only competitive advantage was efficiency: hashes per watt, hashes per dollar.

I sell structured intelligence. The product is not fungible. A voice-memo-to-knowledge-base pipeline configured for a home inspection business is different from one configured for a political advocacy group, which is different from one configured for an MSP’s client intake. Each deployment is customized. Each one compounds in value as the knowledge base grows.

The business is called wade.digital, and what we sell is managed AI agents — hatchlings-as-a-service. A small business gets their own always-on agent, running in its own container, with its own integrations. It answers their Discord, processes their documents, manages their workflows. We handle the infrastructure, the configuration, the ongoing operations.

My first client is Todd, launching a home inspection consultancy. His hatchling runs on the NAS, manages his Discord server, and will eventually handle document processing, scheduling, and client communications. The infrastructure cost to me is marginal — one more Docker container, one more set of API calls. The value to him is a full-time digital operations layer for a fraction of what a human assistant would cost.

The pipeline behind Todd includes an MSP that could funnel its entire client base into managed agents. And beyond that, every small business owner who’s heard about AI but doesn’t know how to make it do anything useful.

This is the part the bitcoin miners never had. When you mine coins, you’re competing on efficiency against everyone else mining the same coins. When you mine intelligence, you’re building something bespoke. The knowledge base gets deeper, the automations get more refined, the agent gets more useful. It compounds. Coins don’t compound — they just sit in a wallet.

The race

There is a race, though. Don’t mistake the better economics for the absence of competition.

In 2017, the mining race was about hash rate. Whoever could produce the most hashes per second per dollar won. That race drove the work from garages to industrial operations to sovereign-wealth-fund-backed data centers. The small player got squeezed out not because the technology stopped working, but because economies of scale made their margin disappear.

The intelligence race is different in kind but similar in pressure. Right now, I can run a meaningful AI operation from a Mac mini because the tools are available, the APIs are accessible, and the orchestration frameworks are open source. OpenClaw is open source. Ollama is open source. The models are accessible via API at reasonable prices.

But the window matters. The people building AI infrastructure now — learning the orchestration patterns, developing the operational instincts, building the client relationships — have a compounding advantage over people who start later. Not because the technology will become inaccessible (it won’t), but because the knowledge of how to deploy it usefully is the actual moat.

This is what the bitcoin miners understood instinctively even if they couldn’t articulate it: there’s a moment when the economics work for the small operator, and that moment doesn’t last forever. You either build during the window or you watch from the outside.

The difference — the crucial difference — is that bitcoin mining was a zero-sum extraction game. When the window closed, the garage miners had depreciated hardware and nothing else. Intelligence mining builds durable assets. The knowledge bases persist. The client relationships persist. The operational expertise persists. Even if the specific tools change (and they will — OpenClaw might not exist in five years), the patterns transfer.

The culture shift

There’s something deeper happening here that the mining parallel illuminates.

Bitcoin mining was, at its core, an arbitrage play. You converted electricity into tokens and sold the tokens for more than the electricity cost. When the arbitrage closed, the activity stopped making sense for small operators.

What I’m doing isn’t arbitrage. It’s value creation. The voice memo that becomes a tagged knowledge entry doesn’t depreciate. The automated workflow that saves a client two hours a day doesn’t become less valuable when someone else builds a similar one. The morning briefing that surfaces the three things I need to focus on today isn’t competing against anyone else’s morning briefing.

This is what I think “democratized AI” actually looks like — not everyone having access to ChatGPT (that’s just a search engine with better manners), but regular people running their own AI infrastructure, on their own hardware, producing their own structured intelligence, and in some cases selling that capability to others.

It looks like a Mac mini in a living room.

It looks like $300 a month in API costs producing more operational output than a part-time employee.

It looks like Docker containers on a consumer NAS, each one running a managed agent for a small business that could never afford to build this themselves.

It looks boring. It looks like infrastructure. It looks like plumbing.

That’s how you know it’s real.

The long game

My friend’s mining rig is in a landfill somewhere. The GPUs that once printed money are e-waste. The coins he mined are still in a wallet — worth something, but disconnected from the machine that produced them.

My rig is still running. Right now, as I write this, Zephyr is monitoring RSS feeds, ready to process the next voice memo I drop, maintaining the knowledge base, keeping the schedule. The hatchlings are running on the NAS, serving clients. The local models are embedding and classifying in the background.

The machine is producing. Not hashes — knowledge. Not lottery tickets — deliverables. Not fungible tokens — bespoke intelligence that compounds over time.

Same race. Different game. Better economics.

And the window is open.