Category: AI

  • What it takes to actually run NanoClaw

    NanoClaw is the structurally right framework for pipeline-shaped agent workloads. It’s also genuinely more technical to set up than most personal-assistant frameworks people compare it to. If you’re evaluating it after reading the comparison piece and wondering what you’re signing up for, this is the honest answer.

    One thing worth stating up front: NanoClaw is not built for non-technical users, and neither is OpenClaw, despite its more polished onboarding. The marketing on both sites pitches “personal AI assistant for everyone.” The reality is different. NanoClaw expects comfort with git, the command line, Docker, and at least basic Linux administration. What you get in exchange is Claude Code as the authoring layer for your fork, which is arguably the most capable AI coding tool available right now and meaningfully more capable than the typical models you’d be running underneath OpenClaw. The framework is built around that capability difference rather than trying to abstract it away.

    The architecture is right. The setup curve is real. Below is what actually bites.

    You need a Claude Code subscription

    This isn’t a soft dependency. NanoClaw is built around Claude Code as the authoring layer — the slash commands that install channels and providers (/add-telegram, /add-opencode, /add-codex and so on) run inside Claude Code and copy source files into your fork from long-lived branches. You can technically edit the same files by hand, but you’d be reverse-engineering what those slash commands do every time you customise.

    Practically: a Claude Code Pro or Max subscription is the working assumption. Without it, you’re not really running NanoClaw the way it’s designed to run. With it, the authoring experience is the best part of the framework — the codebase is small enough that Claude Code can confidently make changes across it, and the fork-as-install model means every customization is a code change you can read and revert.

    This also constrains who NanoClaw is for. If you’re allergic to Claude Code (philosophically, financially, or because you prefer Codex or another harness as your primary), you’ll fight the framework. If you’re already deep in Claude Code, the integration is genuinely tight.

    Codex works as a fallback authoring layer for individual tasks, and the /add-codex skill makes Codex available as an agent provider (separate from authoring). But the slash-command-based setup expects Claude Code as the primary harness. Plan around that.

    OneCLI is part of the deal

    NanoClaw doesn’t manage your API keys directly. That job is delegated to OneCLI, the companion credential proxy that ships alongside it. Agents inside containers never see raw API keys; they make outbound HTTPS requests through OneCLI, which injects credentials at the proxy layer based on per-agent policies.

    This matters in practice for two reasons. First, agents inside NanoClaw containers have bash access, so an API key placed directly in the container would be reachable by any code the agent runs. OneCLI keeps that surface clean. Second, you’ll spend real time during setup configuring OneCLI: registering your Anthropic credential, creating per-agent secret assignments, deciding whether each agent gets all secrets or a specific subset. The nanoclaw.sh install script handles the basics, but ongoing changes (adding a new provider, rotating keys, scoping a credential to one agent) involve OneCLI commands rather than editing config files.

    It’s worth understanding before you start. Treat OneCLI as a meaningful piece of the system, not a one-time setup chore that disappears after install.

    There’s no web UI out of the box

    NanoClaw ships the channel and agent runtime. It doesn’t ship an operator console. There’s no dashboard for browsing agent activity, no log viewer, no chat history UI, no admin panel, no menubar app. The framework’s stance is that you talk to your agent through a messaging channel — Telegram, Slack, Discord, WhatsApp, whatever you’ve installed — and that’s the interface.

    OpenClaw, by comparison, has a guided onboarding CLI (openclaw onboard) for setup and a Companion App (Beta) on macOS that adds a menubar interface. So if you’re coming from OpenClaw expecting some kind of UI affordance out of the box, NanoClaw will feel deliberately bare.

    For an assistant, the chat-channel-only approach is fine. The channel is the interface.

    For a pipeline, it’s not enough. Pipelines need state-of-everything views: which prospects are in which stages, which agents are working on what, what’s pending operator review, what’s been dead-lettered. None of that is conversational. You need a UI.

    The options are real but each has a cost:

    Build a custom web UI as a NanoClaw skill. A small Express or similar server inside a skill that exposes a chat-plus-dashboard interface, talks to the agent through the same task contract NanoClaw uses elsewhere, and serves over tailscale serve so it’s only reachable on your tailnet. Takes a day to build. You control the UX completely. You can mount per-agent dashboards next to the chat thread. No third party between you and your operator interface. This is the version I keep coming back to; a minimal sketch follows below, after the three options.

    Use a messaging channel as the operator interface. Telegram is fastest to bring up — bot via BotFather, token in five minutes. Discord and Slack work too. The trade is that pipeline state is awkward to display in a chat thread, and you end up either composing structured messages (clunky) or building dashboards anyway (defeats the purpose).

    Lean on the underlying systems for state visibility. SQLite for the artifact and journal storage means you can run ad-hoc queries against it. docker logs for container-level activity. journalctl --user for systemd-level service logs. This works for debugging and post-hoc analysis. It doesn’t work as a real-time operator surface.

    In practice, you’ll mix all three. The custom web UI is the primary operator console, channels handle quick-access from your phone, and you use the underlying tooling when something goes weird and you need to dig.
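
    For a concrete sense of how small the custom web UI can be, here is a minimal sketch of the Express option in TypeScript. It reads pipeline state from an assumed SQLite journal and exposes a couple of dashboard endpoints you would put behind tailscale serve; the database path, table names, and columns are illustrative guesses about this pipeline’s storage, not anything NanoClaw defines.

    // operator-ui.ts: minimal dashboard endpoints for a custom operator UI skill (sketch).
    // Database path, table names, and columns are assumptions, not NanoClaw's schema.
    import express from "express";
    import Database from "better-sqlite3";

    const db = new Database("/data/pipeline.db", { readonly: true }); // hypothetical journal path
    const app = express();

    // State-of-everything view: prospects grouped by pipeline stage.
    app.get("/api/stages", (_req, res) => {
      res.json(db.prepare("SELECT stage, COUNT(*) AS count FROM prospects GROUP BY stage").all());
    });

    // Outputs waiting on operator review (copilot-mode agent groups).
    app.get("/api/review-queue", (_req, res) => {
      res.json(db.prepare("SELECT id, prospect_id, skill, created_at FROM review_queue ORDER BY created_at").all());
    });

    // Bind to localhost only and publish with `tailscale serve`, so it is reachable on the tailnet and nowhere else.
    app.listen(8787, "127.0.0.1", () => console.log("operator UI on 127.0.0.1:8787"));

    Chat still happens in whatever messaging channel you have installed; this surface only covers the state views a chat thread handles poorly.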

    Setup gotchas on a small VPS

    NanoClaw runs comfortably on a 2GB DigitalOcean droplet (or equivalent). The hosting cost is a few dollars a month. The friction comes from minimal cloud images being stripped down enough that several setup steps fail in non-obvious ways.

    The base image doesn’t ship with a C compiler. Several modules in the dependency tree build native bindings during pnpm install and fail with generic “command failed” errors that don’t tell you the compiler is missing. Install build tools before the first install:

    sudo apt update
    sudo apt install -y build-essential acl

    The acl package is also missing from minimal images and you’ll need it for the Docker socket fix below.

    The Docker socket ACL doesn’t survive reboot. NanoClaw runs agent containers via Docker. By default, only root can talk to the Docker socket. Adding your operator user to the docker group works but is broadly equivalent to giving that user root, which is not what you want.

    The cleaner approach is an ACL grant on /var/run/docker.sock. The catch: /var/run is a tmpfs mount, recreated on every boot. Anything you setfacl once is wiped on reboot. The fix is a tmpfiles.d rule that recreates the ACL automatically. Create /etc/tmpfiles.d/docker.conf with:

    a+ /var/run/docker.sock - - - - u:youruser:rw

    Replace youruser with the actual operator username. Test with sudo systemd-tmpfiles --create and verify with getfacl /var/run/docker.sock. Reboots no longer break Docker access for the operator account.

    Two systemd services, not one. Run NanoClaw and your custom orchestrator as separate systemd user services. When you’re iterating on the orchestrator (which you will, often, especially in early development), restarting it shouldn’t take the channel adapters down. Channel reconnects are slow and annoying; orchestrator restarts should be near-instant.

    A reasonable layout:

    ~/.config/systemd/user/nanoclaw.service
    ~/.config/systemd/user/orchestrator.service

    If you want either service to start on boot before you log in, enable lingering for the user with sudo loginctl enable-linger youruser. Easy to forget; non-obvious failure mode (services don’t start, you don’t know why, you log in, they magically work).

    Add swap. A 2GB droplet doesn’t ship with swap configured. Under heavy LLM-context loads — long-context windows plus large augmentation tasks — you can OOM unexpectedly. A 2GB swap file is cheap insurance:

    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    Set vm.swappiness=10 in /etc/sysctl.conf so the kernel prefers RAM and only swaps under genuine pressure. Reboot to verify.

    What stays on the laptop, what goes on the VPS

    The local-versus-VPS question resolves cleanly:

    • A laptop is fine for the install rehearsal, fork setup, and a couple of agents you only use while at the keyboard.
    • Anything that needs to be reachable, scheduled, or running while you’re not at the keyboard belongs on the VPS.

    The cost difference between 1GB and 2GB on DigitalOcean is a few dollars a month, and the difference in headroom is between fighting the host and forgetting about it. Take the 2GB. The marginal saving on a 1GB droplet is not worth the time you’ll spend wondering why builds are failing or why the agent container is OOM’ing.

    Honest scope of “easy”

    NanoClaw is technically simpler than openclaw — fewer lines of code, fewer abstractions, fewer hidden behaviours. It’s not operationally simpler. The framework expects you to:

    • Have a Claude Code subscription and use it as the authoring layer
    • Be comfortable with the Linux command line, systemd, Docker, git
    • Build your own operator UI if you want one
    • Write your own orchestrator if you’re doing pipeline-shaped work

    For someone who already operates in this stack, NanoClaw feels light and clean — and the Claude Code authoring layer is genuinely the best part. The codebase is small enough that asking Claude Code to make changes across it works reliably, which is a meaningfully better experience than the typical “edit config files, hope you got it right, debug when you didn’t” pattern.

    For someone hoping for a one-click personal assistant, the curve is meaningfully steeper than openclaw’s onboarding. Openclaw has a guided CLI (openclaw onboard) and a macOS Companion App that gives you a menubar interface; NanoClaw deliberately ships none of that. Both still expect a technical user underneath, but openclaw lowers the floor more.

    The trade is real and the trade is good if your use case justifies it. You end up with a system you understand end to end, that runs in resources you control, that doesn’t depend on a SaaS gateway, and that you can reason about when something breaks. Worth the lift if you’re building something pipeline-shaped. Not worth the lift if you just want a chatbot.

    A useful concrete reference point: Singapore’s Foreign Minister, Vivian Balakrishnan, published the architecture for his own NanoClaw-based “second brain” setup, with an accompanying X post walking through the composition. He’s technically literate — coding is a known hobby of his — but not a software engineer by trade. His setup composes NanoClaw with a few other open-source pieces (a memory layer, OneCLI for credentials, the LLM Wiki pattern for knowledge synthesis) and runs on a Raspberry Pi. It’s a useful existence proof of “technical-but-not-developer” being the floor for NanoClaw, and equally a useful caution: Vivian could compose those pieces because of fluency he already had. Anyone reading this without that fluency yet would need to pick it up first. The reward is real, and so is the prerequisite.

    The full GTM system this deployment serves is in Building a GTM dark factory with Nemotron 3 and NanoClaw. The framework comparison that motivates picking NanoClaw in the first place is in Why I picked NanoClaw over openclaw for a GTM pipeline.

  • Why I picked NanoClaw over OpenClaw for a GTM pipeline

    Before getting into the comparison itself, one piece of context worth setting straight: neither OpenClaw nor NanoClaw is built for a non-technical audience. Both expect comfort with the command line, git, and at least one model provider’s API setup. Both reward fluency with the underlying stack. The marketing copy on both sites pitches “personal AI assistant for everyone,” which is aspirational. The reality today is that you need to know what pnpm install does and roughly what a Docker container is to get either one running smoothly.

    That said, the two frameworks make different trade-offs within that technical-user space, and which trade-off is right for you depends on what you’re actually building.

    OpenClaw is the more monolithic, more featureful option. It ships a guided CLI onboarding (openclaw onboard), supports multiple LLM providers natively (Anthropic, OpenAI, local), has a Companion App for macOS that gives you a menubar interface, and includes browser control, persistent memory, and dozens of community-built skills out of the box. The trade is operational complexity — ~434,000 lines of code, 70+ dependencies, single Node process with shared memory — and a security model that relies on application-level checks rather than OS isolation. Recent CVEs and security writeups in this space have mostly been openclaw-shaped.

    NanoClaw is the lighter, more opinionated alternative. ~3,900 lines of code, fewer than 10 dependencies, agents in isolated Linux containers with explicit mounts, single host process orchestrating per-session containers. Credential handling is delegated to OneCLI (NanoClaw’s companion credential proxy), which injects API keys at request time so agents never hold raw secrets — meaningful when an agent has bash access inside its container. The trade is that NanoClaw is built natively around the Claude Agent SDK — Claude Code is the primary harness, the slash commands that install channels and providers run inside Claude Code, and other providers (Codex, OpenCode, Ollama) are drop-in alternatives rather than peers. There’s no menubar app, no built-in dashboard, no UI beyond the chat channels you’ve installed. The codebase is small enough that “ask Claude Code to walk you through it” is a realistic onboarding strategy.

    For a personal-assistant use case, the openclaw trade-off probably wins. More features out of the box, more flexibility on providers, easier to bring up if you’re not already deep in the Claude Code ecosystem. For a pipeline-shaped workload — GTM, document processing, anything where the workflow exists independent of conversation — NanoClaw is structurally a better fit, and Claude Code being the assumed harness is actually an advantage rather than a constraint, because Claude Code is arguably the most capable AI authoring tool available right now and the framework is built around it.

    I went through both before settling. Here’s the rest of the comparison through the pipeline-shape lens.

    The shape of the workload matters

    A personal assistant is reactive. You send it something. It figures out what you meant, picks a tool, runs the tool, replies. The workflow is whatever the conversation is.

    A pipeline is the opposite. There’s a state machine. There are stages. Each prospect, ticket, document, or whatever the unit is moves through stages on its own clock. Some get stuck. Some get rerouted. Some need to be remembered six months later when a specific signal lights up. The workflow exists independent of any conversation.

    These two workloads want different things from a framework. The assistant wants flexibility, channels, plug-and-play tools, an LLM that figures out what to do. The pipeline wants determinism between stages, deterministic routing, dry-run capability, an LLM that does bounded judgment work inside a stage.

    This is the lens that matters. Most framework comparisons are feature bake-offs. The actual question is which workload shape you’re building.

    Three things that didn’t survive OpenClaw for me

    Routing. OpenClaw’s agent picks what to do based on the inbound and its own reasoning. That’s the right model for “summarise my inbox” and the wrong model for “transition prospect ABC from awaiting-reply to unresponsive after 14 days.” The second decision has to be deterministic, replayable, dry-runnable, and outside the LLM. Tool-call routing is fine when the cost of a wrong decision is small. In a GTM pipeline a wrong routing decision is a duplicate touch, a wrong segment, a compliance breach.

    You can wire OpenClaw to do deterministic routing — through skill conditions, scheduled triggers, scripted control flow — but you’re working against the framework’s grain. Every hour spent there is an hour reinventing what a state machine engine gives you for free.
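
    To make “deterministic, replayable, dry-runnable” concrete: the 14-day timeout above is just a pure function over the prospect record and the clock, with no LLM anywhere in the decision. A sketch in TypeScript, with stage names assumed from this pipeline rather than anything either framework ships:

    // A deterministic stage transition: same inputs, same decision, every time.
    // Stage names are illustrative; neither framework prescribes them.
    type Stage = "awaiting-reply" | "replied" | "unresponsive";

    interface Prospect {
      id: string;
      stage: Stage;
      lastOutboundAt: Date;
      lastInboundAt?: Date;
    }

    const FOURTEEN_DAYS_MS = 14 * 24 * 60 * 60 * 1000;

    function nextStage(p: Prospect, now: Date): Stage {
      if (p.stage === "awaiting-reply" && p.lastInboundAt) return "replied";
      if (p.stage === "awaiting-reply" && now.getTime() - p.lastOutboundAt.getTime() > FOURTEEN_DAYS_MS) {
        return "unresponsive";
      }
      return p.stage;
    }

    Because the function is pure, a dry run is just calling it without committing the result, and replay is re-running it over the journal.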

    Per-skill model preference. Pipelines benefit from heterogeneity. Small fast models for bulk discovery and augmentation. Larger models for content polish. Different providers for redundancy. OpenClaw supports multiple LLM backends as a first-class feature — you can configure Anthropic, OpenAI, or local models — but the routing decisions are made within the agent’s own reasoning rather than at the framework level. For a pipeline you want the framework to route deterministically based on skill family, not let the agent pick its own provider per call.

    NanoClaw’s approach is the opposite: provider is configured per agent group, one provider per group, multiple groups in parallel. That maps directly to “discovery and augmentation in one group on Nemotron, polish in another group on Claude.” Per-task provider hints would be cleaner, but group-level routing is what works today, and for most pipelines it’s enough because the natural skill boundaries align with provider preferences anyway.

    Operating cost. OpenClaw runs a websocket gateway with constant background activity. mDNS service discovery, periodic health probes, channel reconnect loops. On a 1GB droplet it spent most of its capacity on its own metabolism. Bumping the VPS works, but the symptom is telling.

    NanoClaw is much quieter at idle. The host process owns message queues, agent containers spin up per task, channels are explicit and minimal. A 2GB droplet has plenty of headroom for a working pipeline plus orchestrator plus operator UI.

    What NanoClaw doesn’t do, and why that’s useful

    NanoClaw has no built-in orchestrator. No state machine engine. No artifact store. No journal writer. No skill dispatcher. No dry-run harness. No business logic of any kind.

    For an assistant, this is missing functionality. For a pipeline, it’s the right scope.

    The orchestrator is the part that’s specific to your workflow. State transitions, when to retry, when to dead-letter, what counts as completion, what triggers the next stage. Building it as plain code (in any language; mine is TypeScript) means it stays readable, testable, and replaceable. NanoClaw runs the channel adapters and the agent containers. The orchestrator runs the workflow. They talk through structured task contracts.
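
    As an illustration of what “structured task contracts” means here, this is roughly the task-side shape I use in TypeScript. The field names are this pipeline’s convention, not a NanoClaw interface; the point is that the agent receives a bounded task and the orchestrator keeps ownership of routing and state.

    // The task the orchestrator dispatches onto an agent group's inbound queue (illustrative).
    interface AgentTask {
      taskId: string;
      skill: string;                    // which skill to run, e.g. "augment-profile"
      prospectId: string;
      inputs: Record<string, unknown>;  // everything the skill needs; no ambient pipeline state
      dryRun: boolean;                  // the orchestrator decides this, never the agent
    }

    // Dispatch is deliberately boring: push the task, wait for the result artifact, validate it on read.
    async function dispatch(queue: { push(task: AgentTask): Promise<void> }, task: AgentTask): Promise<void> {
      await queue.push(task);
    }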

    The trade is real: you write more code to start. The benefit is real: you understand and own every line of the pipeline that matters.

    What both share

    The skills system. Both frameworks treat skills as SKILL.md markdown files that the agent reads and executes. The same skills can technically run on either framework with minor adjustments, though the agent configuration files differ — openclaw uses SOUL.md for agent personality and config, NanoClaw uses CLAUDE.md for the same purpose. So you’re not locked into a framework by your skills library — you’re picking the framework that runs them at the right architectural layer.

    Both also lean on Claude Code as a useful authoring layer, though the relationship is different. NanoClaw is explicit about it: the slash commands that install channels and providers run inside Claude Code and copy source files into your fork from long-lived branches. OpenClaw is more flexible: you can author with Claude Code, edit config files by hand, or use whatever AI coding tool you prefer, including the built-in agents. Either way, having Claude Code in the loop is the best authoring experience available right now for both; it’s just that NanoClaw treats it as the assumption while OpenClaw treats it as one option among several.

    The forking model

    NanoClaw’s other design choice worth flagging: it’s opinionated about you forking the repo and treating the fork as your install. There’s no config-as-data layer that abstracts away your customizations. If you want different behaviour, you change the code. The codebase is small enough that this is safe.

    This is a discipline. It means every customization is a code change you can read and revert. It also means setup feels heavier than openclaw’s onboarding. For a pipeline you’ll be running for months, that’s the right trade. For a weekend assistant project, it’s overkill.

    The decision criteria, condensed

    Pick OpenClaw if:

    • You want a personal assistant that responds to messages on channels
    • The workflow is whatever the conversation is
    • You want maximum provider flexibility (Anthropic, OpenAI, local models all first-class)
    • You want a menubar app and guided onboarding out of the box
    • You’re fine with the larger codebase and application-level security model

    Pick NanoClaw if:

    • You’re building something with a state machine — pipeline-shaped, not chat-shaped
    • The workflow exists independent of any conversation
    • You need deterministic routing, dry-runs, replay
    • You want different providers for different stages, configured per agent group
    • You’re deep enough in Claude Code to leverage it as the authoring layer
    • You want OS-level container isolation as your security model
    • You’re willing to write the orchestrator yourself (and would rather, because you want to own the workflow logic)

    Worth knowing

    NanoClaw is younger and more spartan around setup edges — both because it does less by design and because the project is moving fast. If you hit a setup gotcha, the answer is usually in the docs and a quick edit by Claude Code resolves it. Filing an issue and waiting is the slower path. The flip side: the codebase is small enough that you can read all of it, and Claude Code can confidently make changes across it.

    OpenClaw has the larger community, more channel adapters in stable shape, and a richer ecosystem of community skills (ClawHub, the openclaw skills marketplace, has hundreds). If you’re operating in personal-assistant territory, those network effects matter. For pipelines, they don’t.

    Worth flagging for context: OpenClaw’s creator, Peter Steinberger, joined OpenAI in February 2026, with the project continuing as open source. The project’s velocity has been impressive but the security model has also been the subject of multiple writeups — anyone evaluating it for production should read the security analyses alongside the marketing copy.

    The full GTM system this comparison feeds into is in Building a GTM dark factory with Nemotron 3 and NanoClaw. For setup specifics — what it takes to actually run NanoClaw end to end — see the companion piece.

  • Building a GTM dark factory with Nemotron 3 and NanoClaw

    Outbound has a failure mode anyone running a B2B pipeline has hit. Go wide and the response rates collapse, the domain gets filtered, the brand looks like every other vendor blasting templates. Go narrow and the volume can’t sustain a business. The middle path — per-prospect research, context-aware first touches, disciplined follow-ups — used to need an army of SDRs.

    What the system below builds toward is functionally an AI-native CRM with marketing automation, segmentation, and funnels. It’s the same business object SaaS stacks like HubSpot, Salesforce + Marketo, or Apollo + Outreach + Clay assemble from a dozen subscriptions and a small ops team. Traditionally that operation is human-fronted at every stage: defining segments, enriching records, writing sequences, reviewing replies, tuning the funnel. Tools speed each step but don’t change the shape. Humans are in every loop because the judgment work is theirs.

    The dark factory operating model changes that. GTM is unusually well-suited to it because it’s a closed-loop domain. Every action generates measurable feedback: opens, replies, meetings booked, deals closed, journal of what worked and what didn’t. That feedback is what lets skills earn autonomy on evidence rather than wishful thinking, graduating from copilot mode (operator approves each output) to dark factory mode (autonomous, with sampling and exception escalation). Volume goes up because agents work on more prospects in parallel than any human can. Consistency goes up because the contract on the wire enforces it. The operator’s role compresses from reviewing every output to reviewing what the journal flags.

    The building blocks are NanoClaw as the agent and channel runtime, Nemotron 3 Super as the bulk runtime model alongside Claude for polish, and Claude Code and Codex as the authoring layer. None of them is a CRM. Composed together, with a state machine and journal sitting above them, they become one.

    What the engine does

    The engine takes a hypothesis (e.g., “healthcare companies publicly investing in compliance automation are good prospects”) and produces a queue of prospects with structured profiles, draft first-touches in a collab-partner voice, and context packs for the channels where execution stays manual (LinkedIn, anything high-touch). The operator reviews and approves drafts. Email goes out via Resend with proper deliverability hygiene. Replies route through an inbound webhook, get classified, and trigger state transitions. The journal records every decision with rationale, confidence, alternatives considered, and source evidence.

    Two things distinguish it from the standard funnel.

    The qualifying signal is behavioural rather than firmographic. “This company’s CEO talked publicly about scaling regulatory automation last quarter” beats “this company has 80 employees in three cities.” The second tells you a company exists. The first tells you something is happening there worth a conversation.

    Disqualification states are first class: not a fit, not now, unreachable, unresponsive, do not contact, conflict. None of these are fallbacks at the edge of the state machine. They’re destinations the orchestrator routes to deliberately. A prospect that hit “not now” with a specific signal six months ago is a different lead than one that’s been silent. The state machine has to remember the difference.
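
    A sketch of what “first class” means at the type level: the disqualification states live in the same union as the active stages, and “not now” carries the signal and a revisit date so the six-months-later distinction isn’t lost. The names are this pipeline’s, not a NanoClaw construct.

    // Disqualification states are routed-to destinations, not fallbacks at the edge.
    type ActiveStage = "new" | "augmented" | "drafted" | "awaiting-reply" | "meeting-booked";

    type Disqualified =
      | { kind: "not-a-fit"; reason: string }
      | { kind: "not-now"; signal: string; revisitAfter: Date }  // remembered, not discarded
      | { kind: "unreachable" }
      | { kind: "unresponsive"; lastTouchAt: Date }
      | { kind: "do-not-contact" }
      | { kind: "conflict"; detail: string };

    type ProspectState = { kind: "active"; stage: ActiveStage } | Disqualified;

    // "Not now, with a specific signal, six months ago" is a different lead than silence.
    function isWarmRevisit(s: ProspectState, now: Date): boolean {
      return s.kind === "not-now" && now.getTime() >= s.revisitAfter.getTime();
    }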

    Operator in the loop, then less of it

    The two-mode model deserves a closer look because it’s where the architecture earns its keep. Copilot and dark factory aren’t synonyms for “manual” and “automated.” They’re different relationships between the operator and the agent group. Copilot is the operator approving every output and using the journal to spot patterns. Dark factory is the operator sampling outputs, reading exception escalations, and trusting the rubric for the rest. Some skills move between them in weeks. Some never graduate. Drafting outbound to a high-value prospect is a copilot job forever. Augmenting an early-funnel profile from public sources isn’t.

    Claude Code and Codex sit on the operator side of this loop, not the agent side. They edit the orchestrator, write skills, debug runs, apply patches. The agents inside NanoClaw containers run the domain skills, not the authoring code. The operator stitches the two layers together until each carries more on its own.

    Why this architecture for a GTM pipeline

    The framework choice matters because pipelines aren’t assistants. I started on OpenClaw. It’s the more featureful framework on paper, with channels, providers, scheduled tasks, and a guided onboarding flow all in one package. The pitch is right for a personal assistant. You point it at your stuff, it runs.

    For a GTM pipeline it’s the wrong shape. OpenClaw’s agent picks what to do based on the inbound and its own reasoning. That’s the right model for “summarise my inbox” and the wrong model for “transition prospect ABC from awaiting-reply to unresponsive after 14 days.” The second decision has to be deterministic, replayable, dry-runnable, and outside the LLM. Tool-call routing is fine when the cost of a wrong decision is small. In a GTM pipeline a wrong routing decision is a duplicate touch, a wrong segment, a compliance breach.

    NanoClaw makes the opposite design choice. It does less. It runs the channel adapters, one container per agent group, and a host process that owns the message queues. Skills are markdown files mounted into containers. There’s no built-in orchestrator, no business logic, no opinion on your workflow. For an assistant that would be missing functionality. For a pipeline it’s the right scope for the bottom layer.

    The full stack: NanoClaw is the channel and agent runtime. A separate orchestrator (custom code) sits above it and owns the pipeline state machine. Claude Code or Codex sits next to all of it as the authoring layer. The operator sits on top, reviewing outputs, approving drafts, gradually handing off more as each skill earns it. (I’ve written more on the framework comparison itself for those evaluating the two.)

    The orchestrator is plain code. State machine engine, artifact store, journal writer, skill dispatcher, dry-run harness. It dispatches structured tasks to the agent’s inbound queue. The agent runs the skill in its container and writes a result back. The result has to carry, at minimum, what was found, why, how confident the agent is, the alternatives considered and rejected, and the evidence with sources. The orchestrator validates against that contract on read. Validation failure means deterministic retry or dead-letter, never a re-prompt loop. The agent is allowed to be uncertain. It’s not allowed to be silent about it.
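
    A sketch of that contract check on the orchestrator side. The field names mirror the list above and are this pipeline’s convention, not a NanoClaw API; the property that matters is that a malformed result is handled deterministically instead of being re-prompted.

    // The result contract, validated on read (field names are illustrative).
    interface AgentResult {
      taskId: string;
      findings: string;                               // what was found
      rationale: string;                              // why
      confidence: number;                             // how confident, 0..1
      alternatives: string[];                         // considered and rejected
      evidence: { claim: string; source: string }[];  // with sources
    }

    function isValidResult(r: unknown): r is AgentResult {
      const x = r as AgentResult;
      return (
        typeof x?.taskId === "string" &&
        typeof x?.findings === "string" &&
        typeof x?.rationale === "string" &&
        typeof x?.confidence === "number" && x.confidence >= 0 && x.confidence <= 1 &&
        Array.isArray(x?.alternatives) &&
        Array.isArray(x?.evidence) &&
        x.evidence.every((e) => typeof e?.claim === "string" && typeof e?.source === "string")
      );
    }

    // Deterministic handling: bounded retries, then dead-letter. Never a re-prompt loop.
    function handleResult(raw: unknown, attempt: number, maxAttempts = 2): "accept" | "retry" | "dead-letter" {
      if (isValidResult(raw)) return "accept";
      return attempt < maxAttempts ? "retry" : "dead-letter";
    }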

    Operating mode lives at the agent group, not in the task. A copilot group’s outputs land in a review queue. A dark factory group’s outputs trigger state transitions automatically. Promoting a skill from copilot to dark factory is moving its mount point, not rewriting it.

    For the model layer: Nemotron 3 Super handles the bulk runtime work. Strong instruction following, long context, throughput that holds up under volume. Augmentation skills that read four or five sources and synthesise a structured profile benefit from the long context: public LinkedIn snippets, recent posts, the company’s own site, a news mention or two. Drafting routes to Claude. The bulk-then-polish chain saves tokens on volume work and keeps the polish pass focused on prose that goes to a human. The free tier covers early-stage development; production volumes need API access. Multi-provider routing is less about feature redundancy and more about not having a single provider’s outage take out the whole pipeline. The orchestrator routes per skill family: bulk runtime to Nemotron, polish to Claude, redundancy keys for either in reserve.
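
    A sketch of that per-skill-family routing at the orchestrator level. The family names and group identifiers are this pipeline’s own configuration; NanoClaw only sees the agent groups, each configured with a single provider.

    // Deterministic provider routing by skill family: one provider per agent group.
    type SkillFamily = "discovery" | "augmentation" | "polish";

    const groupForFamily: Record<SkillFamily, string> = {
      discovery: "bulk-nemotron",     // agent group running Nemotron 3 Super
      augmentation: "bulk-nemotron",
      polish: "polish-claude",        // agent group running Claude
    };

    function routeTask(family: SkillFamily): string {
      // Rerouting a family (for an outage, or onto a redundancy key held in reserve) is a
      // one-line change here, not a decision the agent gets to make per call.
      return groupForFamily[family];
    }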

    For setup specifics — Claude Code as the authoring dependency, the no-UI consequence, deployment gotchas a small VPS surfaces — check out the companion piece on what it takes to actually run NanoClaw.

    DPDP Act compliance lives at the journal layer: every artifact change is logged with provenance, deletion requests tombstone the artifact while retaining audit evidence. Easier upfront than retrofitted.
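
    A sketch of what “tombstone the artifact while retaining audit evidence” can look like at the journal layer, using SQLite from TypeScript. The table and column names are assumptions about this pipeline’s storage, not a NanoClaw schema.

    // On a DPDP deletion request: blank the personal data, keep the row and the audit trail.
    import Database from "better-sqlite3";

    const db = new Database("/data/pipeline.db"); // hypothetical journal path

    function tombstoneArtifact(artifactId: string, requestId: string): void {
      const erase = db.transaction(() => {
        // The payload goes; the row stays so provenance and references remain intact.
        db.prepare(
          "UPDATE artifacts SET payload = NULL, tombstoned_at = datetime('now') WHERE id = ?"
        ).run(artifactId);

        // The deletion itself becomes a journal event, with its own provenance.
        db.prepare(
          "INSERT INTO journal (artifact_id, event, rationale, created_at) VALUES (?, 'tombstoned', ?, datetime('now'))"
        ).run(artifactId, `deletion request ${requestId}`);
      });
      erase();
    }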

    What this is, when it’s working

    A GTM dark factory is a specific shape: an AI-native CRM where the determinism lives between tasks and the LLM agency lives inside them. The agent does the bounded judgment work; the orchestrator decides what comes next; the journal holds both accountable. Volume goes up. Variance stays bounded. The operator’s role compresses to where it adds the most value — picking what gets built next, reviewing what the rubric can’t decide, deciding when a skill has earned graduation.

    Outbound that holds shape between wide and narrow doesn’t need an SDR army. It needs orchestration you can trust, a contract on the wire, and the discipline to let skills earn autonomy rather than be granted it. The framework choice is secondary. The split between framework, orchestrator, and authoring layer is what makes it work.

  • The AI Stack Is Running on Borrowed Infrastructure — And What Happens When It Isn’t

    The GPUs running AI workloads today were designed to render video game graphics. The programming languages agents use were built for human readability. The development processes — sprint cycles, code reviews, deployment windows — are all structured around human working hours and attention spans.

    Despite these constraints, AI agents are already building C compilers from scratch for roughly $20,000 in compute costs and consistently outperforming average human engineers on standard tasks. When purpose-built infrastructure arrives, following the same vertical integration pattern we saw transform smartphones, the cost and capability equation changes dramatically. It’s the natural evolution of any technology platform.

    The smartphone parallel: five layers of integration

    Early smartphones ran on repurposed mobile phone hardware with desktop-derived operating systems. They worked, but the architecture was borrowed. Then came the integration layers, each delivering step-change improvements:

    Layer one: Purpose-built mobile operating systems — iOS and Android, optimised for touch interfaces and mobile constraints rather than adapted from desktop paradigms.

    Layer two: Custom silicon designed specifically for mobile workloads. Apple’s A-series chips weren’t just smaller desktop processors; they were architected from the ground up for mobile use cases.

    Layer three: Specialised programming frameworks — Swift and Kotlin — designed for mobile development patterns rather than ported from other contexts.

    Layer four: Cloud-native architectures that assumed mobile-first applications, not desktop apps squeezed onto smaller screens.

    Layer five: An entire ecosystem of services built around mobile devices as the primary platform, from payments to authentication to location services.

    Each layer compounded the improvements of the previous one. The difference between a 2007 iPhone and a 2015 iPhone wasn’t just better specs — it was a fundamentally different platform enabled by vertical integration.

    The AI stack is entering the same progression.

    What purpose-built looks like

    Custom AI chips are already here or in development. Google ships TPUs, and a range of well-funded startups are building ASICs designed for neural network operations rather than graphics rendering. These aren’t incremental improvements on Nvidia’s architecture — they’re different approaches optimised for different workloads.

    Programming paradigms are shifting from human-oriented languages to frameworks designed for agent workflows. AI agents don’t need readable variable names or structured flow control designed for human comprehension. They work with tokens and can operate on representations optimised for their own processing, not ours.

    Development processes are being redesigned around agent capabilities. Continuous, asynchronous, parallel-by-default workflows rather than sequential sprints bounded by human availability. The infrastructure assumes agents are the primary operators, with humans in supervisory roles.

    Aaron Levie maps the emerging stack: agent-specific sandboxed compute environments (E2B, Daytona, Modal, Cloudflare), agent identity and authentication systems, agent-native file storage and memory, agent wallets for microtransactions (Stripe, Coinbase), and agent-optimised search (Parallel, Exa). These aren’t conceptual products — these companies exist and are building now.

    His framing of “make something agents want” — riffing on Paul Graham’s “make something people want” — captures the shift. Software is being redesigned with agents as the primary user. Every feature needs an API. Every service needs an MCP server. If agents can’t sign up and start using your product autonomously, you’re invisible to the next generation of software consumers.

    The platform convergence is worth watching. Code repositories are evolving from version control into the orchestration layer for the entire software development lifecycle. Whoever controls that layer controls the persistent context that makes agents effective across sessions. The major AI labs are moving toward owning this layer, which suggests the future stack isn’t “AI model plus separate dev tools” but an integrated platform where context, runtime, and code repository collapse into one thing.

    A parallel convergence is happening at the operator level: always-on agents you communicate with through mainstream channels like messaging apps or email rather than terminals or dashboards. The human doesn’t need specialised tooling — they need a channel to the agent that handles everything downstream.

    Combine the agent compute environment with the always-on operator agent and the platform layer, and you get a full vertical integration play: from software development lifecycle through to production operations, under one roof. This is the smartphone moment for developer tooling: separate apps collapsing into an integrated platform.

    What changes when the stack is purpose built

    The trajectory over the last few years has been clear: model capabilities improve while cost per unit of useful output drops. But today’s pricing is misleading — current subscription tiers are heavily subsidised, much like AWS in 2006. AI companies are burning cash to establish market share. The real story isn’t simply “it gets cheaper.” Provider economics improve toward sustainability while capabilities compound for users. Both curves move in the right direction, but assuming today’s prices reflect true costs is a mistake in either direction.

    Where the stack is unevenly developed matters more than the overall cost trajectory. Code generation has leapt ahead, but DevOps, production support, and operational reliability are further behind — areas where consequences of AI errors are immediate and expensive. The bottleneck isn’t capability. General-purpose coding agents can already manage infrastructure via CLI and APIs. The bottleneck is the trust boundary: giving an agent access to production systems and customer data raises security concerns that don’t exist in a sandboxed branch. Specialised DevOps agents are emerging to address this, but they’ll likely follow the familiar platform-shift pattern — absorbed into general-purpose agents once the sandboxing layer matures.

    There’s an irony in what AI is automating first: not the creative work developers love, but the process work that fell through the cracks as agile teams prioritised working software over documentation. The manifesto was right that heavyweight documentation was wasteful — but in practice, teams swung too far the other way, under-documenting, under-testing, and accumulating technical debt. Agents don’t have that bias. They’ll write tests and documentation with the same attention as feature code.

    What this means now

    Mainstream custom AI chips are a few years out. AI-native frameworks may arrive even sooner. Organisational redesign is the slowest layer — cultural, not technical.

    None of that is a reason to wait. Dark factory teams are already running production workflows on the borrowed stack — and the gap between them and companies still debating AI adoption compounds monthly. Every month of building expertise on today’s imperfect infrastructure is learning that transfers directly to the purpose-built era.

    The early smartphone era produced the apps, habits, and companies that dominated the next decade. We’re in the equivalent moment for AI. The stack will improve. The question is whether you’ll be positioned to take advantage of it.

  • Indian IT’s Arbitrage Problem: When Tokens Cost the Same Everywhere

    The Indian IT services industry was built on a straightforward premise: skilled developers in Bangalore cost significantly less than comparable talent in San Francisco. This differential created an empire — TCS, Infosys, Wipro, and hundreds of smaller firms billing clients based on headcount. The model was self-reinforcing. More engineers meant more revenue, which meant hiring even more engineers.

    AI breaks this equation in a way previous technology shifts didn’t. When an LLM API costs the same per million tokens whether you’re calling it from Mumbai or Manhattan, geography stops mattering. The cost of doing work is shifting from labour, which varies by location, to compute, which doesn’t. As AI agents get better at performing tasks that used to require human engineers, the ratio keeps tilting further away from the headcount model, resulting in a structural break.

    The arbitrage that built an industry

    India’s tech boom worked because clients could get the same capability at dramatically lower cost. A Fortune 500 company could hire multiple engineers in India for significantly less than the cost of one in the US, and the output quality was comparable. Even Global Capability Centres — the in-house versions of this model — followed the same logic, functioning as cost centres to reduce the parent company’s tech spend.

    China’s manufacturing dominance followed the same pattern: cheap labour built the industry, then automation eroded the advantage but the specialised human knowledge persisted. The difference may be speed — manufacturing automation took decades, while AI may be compressing that timeline.

    When uniform pricing changes everything

    Nandan Nilekani recently described how India moved from concept to deployed AI solution for dairy farmers in three weeks, from a January 8 meeting with the Prime Minister to a February 11 launch. That kind of velocity shows what’s possible when AI adoption isn’t constrained by procurement cycles. Large IT services companies, by contrast, operate on longer evaluation timelines. By the time a tool clears compliance and gets deployed at scale, the market has moved on.

    This isn’t a process problem that better project management can fix. It’s structural, baked into how large organisations manage risk. Smaller, leaner operations can adopt and discard tools at whatever pace the technology demands. Established players can’t.

    Scale, which used to be the competitive moat, becomes an anchor. When you have large engineering teams on payroll, each person represents fixed costs — salaries, benefits, office space, management overhead. If 10 engineers with AI agents can now produce what 50 engineers produced before, every client will eventually ask why they’re still paying for 50. The “bench” model, where firms keep engineers on payroll between projects, becomes financially unsustainable when margins compress.

    The maintenance trap

    The strongest counterargument came immediately. In February 2026, a short-seller report from Citrini — written as a fictional memo from June 2028 — wiped roughly $10 billion off Indian IT stocks by arguing that cost arbitrage was dead because AI agents run at the cost of electricity. The defence was swift and detailed: Indian IT revenue is overwhelmingly maintenance and integration on legacy enterprise systems, not greenfield coding. Enterprise systems are sprawling, non-monolithic, and require deterministic outputs. AI is probabilistic. You can’t wholesale replace systems of record with something that gives you a different answer every time you ask the same question.

    HSBC estimated 14-16% gross AI-led revenue deflation across service segments — significant but not existential. The technology stacks of the world’s largest enterprises take years to adapt. Custom application maintenance alone accounts for roughly 35% of a typical Indian IT company’s revenue: incident management, service requests, change requests, problem resolution across architectures where SAP, Salesforce, Snowflake, and ServiceNow coexist in configurations unique to each client.

    The problem with this defence: maintenance work is structured, repeatable, well-documented—exactly the kind of work agents may eventually handle well. It’s arguably easier to automate than greenfield development because the patterns are known and the test conditions are defined. Even if 14-16% deflation is accurate, that’s 14-16% less revenue through a headcount-based billing model, which means clients now have a benchmark for what’s possible. The entire pricing structure comes under pressure.

    HFS Research projects a category called Services-as-Software growing to $1.5 trillion — AI-driven autonomous delivery replacing seat-based pricing with outcome-based models. IT service companies proactive about building their own AI agents, and willing to cannibalise legacy revenues, can gain share from software companies rather than just lose it. Companies that defend the old model will likely lose share.

    What survives

    Strategic judgement still matters. Domain expertise still matters. The ability to translate messy business problems into AI-solvable workflows — that doesn’t have a token cost equivalent. Even if code generation gets solved, the compliance, security, infrastructure, and domain knowledge layers don’t collapse. Enterprise software involves SOC-2 audits, data residency, currency handling, PII management. None of that happens automatically. Someone needs to be accountable when things break.

    DevOps, support, and production reliability are further behind code generation in the automation curve. Monitoring, incident response, infrastructure management — the consequences of AI errors in these areas are immediate and expensive. The software development lifecycle may be restructuring fast, but the operational layer still needs human judgment.

    Indian IT’s deep domain knowledge in specific verticals — healthcare, banking, insurance — could be repositioned rather than eliminated. Whether companies can make that pivot before clients start asking harder questions about headcount is the open question.

    The uncomfortable transition

    Headcount-based billing becomes harder to justify every quarter. The bench model becomes financially unsustainable at current margins. GCCs will face pressure to shrink headcount and demonstrate output-per-head improvements. Indian IT may need to pivot from services to products, or reinvent the services model around outcome-based pricing.

    When 59% of hiring managers admit they emphasize AI in layoff announcements because it “plays better with stakeholders” than admitting financial constraints, the narrative gap becomes clear. Companies are restructuring for traditional budget reasons but framing it as AI transformation. That creates a trust problem, but it also reveals something about client expectations: the perception that AI should reduce headcount costs is becoming real, whether or not the technology has fully delivered on that promise yet.

    The same forces dismantling labour arbitrage are creating opportunities for lean operators. A solo developer or small team with the right domain expertise and AI tools can now deliver enterprise-grade output. Clients don’t care if the work was done by 50 engineers in a GCC or 2 people with agents — they care about the outcome. Outcome-based pricing models become viable and attractive: charge for value delivered, not hours spent.

    Indian tech talent is world-class. The individuals who decouple from the headcount model and operate independently or in small setups may be better positioned than ever. The market is shifting from “who has the most people” to “who can deliver the most value per unit of cost” — and that’s a game lean operators can win.

    The question isn’t whether Indian IT survives. The industry isn’t disappearing. The question is whether the organisational models built around labour arbitrage can adapt to value arbitrage fast enough. The talent is there. The domain expertise is there. What’s uncertain is whether companies structured around selling engineer-hours can reinvent themselves to sell outcomes instead—and whether they can do it before clients find someone else who already has.

  • The Autonomous SDLC: What’s Solved, What’s Not, and Why the Gaps Are Closing Fast

    We’re further along than most people realize. The software development lifecycle is being automated piece by piece, and the trajectory is becoming harder to ignore—not through some magical breakthrough, but through the steady elimination of bottlenecks that seemed permanent six months ago.

    This is a practitioner’s status report discussing what works in production today, what remains genuinely unsolved, and why the remaining gaps matter less than conventional wisdom suggests.

    Code Generation: Already Production-Grade

    The middle portion of the SDLC—turning specifications into working code—has crossed a threshold. Cursor CEO Michael Truell describes three eras: tab autocomplete, synchronous agents responding to prompts, and now agents tackling larger tasks independently with less human direction. At Cursor, 35% of merged PRs now come from agents running autonomously in cloud VMs. The agent PRs are “an order of magnitude more ambitious than human PRs” while maintaining higher merge rates.

    What matters isn’t the percentage—it’s that these agent-generated PRs pass the same review standards as human code. Max Woolf’s detailed experiments are instructive. Starting as a vocal skeptic who wrote about rarely using LLMs, he ended up building Rust libraries that outperformed battle-tested numpy-backed implementations by 2-30x. Not prototypes—production code passing comprehensive test suites and benchmarks.

    His conclusion after months of testing:

    I have been trying to break this damn model by giving it complex tasks that would take me months to do by myself despite my coding pedigree but Opus and Codex keep doing them correctly.

    The quality ceiling keeps rising with each model generation. This isn’t “good enough for prototypes”—it’s production-grade code that ships.

    Spec-Driven Development

    Approaches to the initiation problem have largely converged. Most tools now support a planning mode: the agent reads a spec, creates an implementation plan, and follows it through. Woolf’s experience matters here:

    AGENTS.md is probably the main differentiator between those getting good and bad results with agents.

    These persistent instruction files function as system prompts that shape agent behaviour across sessions.

    This is just spec-driven development—the same methodology good engineering teams already use. The pattern works: write a detailed spec (GitHub issue, markdown file), point the agent at it, let it execute. The difference is that agents can now be the executor, and the pattern works across tools (Cursor, Claude Code, Codex) because it aligns with how reliable software gets built regardless of who’s typing.

    The Feedback Loop: The Primary Gap

    Basic unit tests and regression tests work well—agents can write and run them as part of their workflow. Complex feature tests, integration tests, and UAT remain the primary gap. UI/UX testing is particularly challenging since agents can’t easily evaluate visual output.

    The current workaround: human-in-the-loop for complex test evaluation, with agents handling mechanical testing. That said, coding agents can still fix bugs when given screenshots and descriptions.

    This is an active focus area. The gap is narrowing from both sides: agents getting better at generating comprehensive tests, and tooling improving for automated visual and integration testing. Satisfactory solutions within 2026 aren’t a stretch—they’re the natural next step given where the infrastructure is heading.

    Guardrails: Actively Being Solved

    Managing task boundaries and blast radius is critical for autonomous operation. Best practices are emerging around sandboxing—isolated agent execution environments, limited file system access, branch-based workflows.

    The Anthropic C compiler experiment demonstrated the pattern at scale: 16 agents working on a shared codebase over 2,000 sessions, coordinating through git locks and comprehensive test harnesses. The test infrastructure was rigorous enough to guide autonomous agents toward correctness without human review, producing a 100,000-line compiler that can build Linux.

    StrongDM took this further with their dark factory approach. They built digital twins of production dependencies—behavioral clones of Okta, Jira, Slack—using agents to replicate APIs and edge cases. This enabled validation at volumes far exceeding production limits without risk. Their rule: “Code must not be reviewed by humans.” The safety comes from comprehensive scenario testing against holdout test cases the agents never see.

    The agent infrastructure layer is building out fast. We’re seeing microVMs that boot fast enough to feel container-like, with snapshot/restore making “reset” almost free. Agent-specific sandboxed compute, identity, and API access are emerging as distinct product categories.

    The guardrails problem is increasingly an infrastructure problem, not a model problem. This converges toward a standard pattern: spec + guardrails + sandbox + automated validation = safe autonomous execution.

    The Self-Improvement Dynamic

    Something subtle is happening. Codex optimizes code, Opus optimizes it further, Opus validates against known-good implementations. Cumulative 6x speed improvements on already-optimized code. Then you have Opus 4.6 iteratively improving its own code through benchmark-driven passes.

    Folks have shown agents tuning LLMs on Hugging Face: the tooling layer being built by the tools themselves. This isn’t theoretical AGI. It’s narrow but powerful self-improvement within the coding domain. The practical implication: the rate of improvement accelerates as agents get better at improving agents. For the coding stack specifically, each generation of tools makes the next generation arrive faster.

    What This Means for Planning

    Here’s the timeline as I see it:

    2025: Code generation reliable. Spec-driven development emerging. Testing and guardrails manual.

    2026: Testing automation reaches satisfactory level. Guardrails standardize. The loop becomes semi-autonomous.

    2027+: Fully autonomous for standard applications. Human involvement shifts entirely to direction and edge cases.

    The companies planning as if these gaps will persist are making the same mistake as those who planned around slow internet in 2005. AI tools amplify existing expertise—all the practices that distinguished senior engineers (comprehensive testing, good documentation, strong version control habits, effective code review) matter even more now. But the bar for what “good enough” looks like is rising in parallel.

    Antirez captures the shift plainly:

    Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it.

    The mental work hasn’t disappeared. It’s concentrated in the parts machines can’t yet replace: architecture decisions, user needs, system design trade-offs.

    The gaps are real today. But they’re the wrong thing to optimize around. Optimize around what becomes possible when they close—because that’s happening faster than the pace of traditional software planning cycles.

  • The Task Changed, The Job Didn’t — But Your Org Hasn’t Noticed Yet

    There’s a conversation happening quietly in engineering teams, product orgs, and design studios. It surfaces in Slack DMs and whispered break-room conversations. The question underneath is always the same: If AI can do what I do, what am I for?

    That fear makes sense. Engineers who built their identity around writing clean code watch AI generate entire modules in seconds. Product managers who prided themselves on writing crisp specs see AI agents do the same work overnight. Designers watch their Figma files get autocompleted before they’ve finished thinking through the problem.

    But here’s what’s being missed: the task is changing, the job isn’t.

    Writing code was always a means to an end. The job was shipping features that solve problems. Writing specs was always a means to an end. The job was understanding user needs and deciding what to build. AI automates the means, not the end. The bottleneck was never typing speed — it was clarity of thinking, problem definition, and judgment about what to build.

    Those bottlenecks are still ours.

    The Identity Trap

    Most people in technology define themselves by the task they perform, not the outcome they produce. “I’m a backend engineer” means I write backend code. “I’m a PM” means I write specs and manage tickets. When AI starts doing those tasks faster and arguably better, the identity feels threatened.

    The first response is usually denial: “AI can’t really do what I do — it doesn’t understand context, it makes mistakes, it needs constant supervision.” The second is panic: “I’m about to be replaced by a model that costs pennies per thousand tokens.”

    But the real shift isn’t about automation replacing roles. It’s about what happens when execution becomes nearly free and the entire competitive advantage moves to knowing what to build in the first place.

    From Tasks to Judgment

    When people ask what humans will do in this new world, the answer is usually “taste and judgment.” But that’s abstract. What does judgment actually mean?

    It means knowing what to build, when to say no, and how to spot when AI is heading in the wrong direction. It’s defining the guardrails before you let agents run — test suites, design patterns, architectural constraints. It’s understanding that every line of code is future maintenance burden, which makes the discipline to not build more valuable than the ability to build fast.

    In 2014, Melissa Perri warned about “The Build Trap” — companies stuck measuring success by what they shipped rather than what they learned. “Building is the easy part,” she wrote. “Figuring out what to build and how we are going to build it is the hard part.”

    Most companies ignored that. Now AI makes building trivially easy, and those companies are about to drown in features that solve nothing. The agents don’t get tired. They don’t push back. They’ll happily build everything you point them at, whether or not it should exist.

    The Multi-Hat Convergence

    The expectation is shifting: one person who can think about the problem, design the solution, and use AI to build it. This doesn’t mean everyone becomes a shallow generalist. It means the boundaries between roles blur significantly.

    PMs without a hard skill — design or code — and engineers without product sense are both increasingly vulnerable. The trifecta of product thinking, design sense, and technical execution is becoming the baseline, not the exception.

    For experienced professionals considering independence, this convergence changes the economics dramatically. A single person with AI tools can now deliver what used to require a small team.

    The Org Structure Problem

    Most organizations are still structured around tasks, not outcomes. Teams are organized by function — frontend team, backend team, QA team, design team. Performance is measured by task completion: PRs merged, tickets closed, specs written.

    AI makes task completion trivially fast, which breaks these measurement systems completely. The real metric should be business outcomes, but most orgs aren’t wired to measure or incentivize that way.

    Companies are starting to notice. Last year, Shopify CEO Tobi Lütke asked employees to prove why they “cannot get what they want done using AI” before asking for more headcount. Last week, Block laid off 40% of its workforce — more than 4,000 people. Co-founder Jack Dorsey was direct: “A significantly smaller team, using the tools we’re building, can do more and do it better.”

    A startup with great direction and AI agents beats a startup with mediocre direction and the same agents. A company with 10 people who know exactly what to build beats one with 100 people building everything they can think of.

    The companies still hiring for “more hands” are optimizing for the wrong bottleneck.

    What This Means for You

    If you’re an engineer, invest in product sense and domain expertise. Understand why you’re building, not just how. Study the business side of your domain — unit economics, customer behavior, market dynamics.

    If you’re a PM, get your hands dirty with at least one hard skill. Design or code, even at a basic level. The ability to prototype your own ideas or understand technical tradeoffs without waiting for a meeting makes you more effective than you’d expect.

    If you’re a leader, start restructuring teams around outcomes, not functions. Measure business impact, not tickets closed. Reward people for solving problems and learning, not for producing code.

    Stop identifying with your task. Start identifying with the outcomes you produce.

    The people making this shift now are building a compounding advantage. The gap widens every month. Domain expertise becomes your moat. The deeper you understand a specific business problem space, the better you can direct agents toward solving it.

    The execution bottleneck is being solved. The judgment bottleneck requires human capacity, and it’s where the real value lives now.

  • The Dark Factory: Engineering Teams That Run With the Lights Off

    A few engineering organisations are already operating a model most companies haven’t begun to consider. While the typical software team debates whether to adopt AI coding assistants, companies like StrongDM are running fully automated development pipelines where agents handle implementation, testing, review, and deployment. Humans set direction and define constraints. The mechanical work happens without them.

    This isn’t speculative. It’s operational. And the gap between companies working this way and those that aren’t is widening fast.

    What “lights off” actually means

    The term comes from manufacturing — factories that run autonomously, with minimal human presence. In software, it describes engineering organisations where AI agents do the bulk of execution work while humans focus on architecture, constraints, and outcomes.

    StrongDM’s approach is instructive: their benchmark is that if you haven’t spent at least $1,000 on tokens per human engineer per day, your software factory has room for improvement. Agents work in parallel on isolated tasks. Code is written, tested, and reviewed without manual intervention. Tasks assigned Friday evening return results Monday morning.

    The ratio of agents to humans is high and growing. But this isn’t about replacing engineers — it’s about fundamentally changing what engineers do.

    The guardrails are the system

    Dark factories aren’t ungoverned. They’re heavily governed in a different way.

    Linters, formatters, comprehensive test suites, design pattern enforcement — these become pre-conditions rather than suggestions. Agents are configured to seek completion only when all guardrails pass. Code review shifts from line-by-line human inspection to AI review with human spot-checks on critical paths.

    The discipline moves from “write good code” to “design good systems for code to be written in.” That’s a different skill. It requires thinking about constraints, validation, and feedback loops rather than syntax and implementation details.
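
    As a small illustration of that shift, here is a hedged Python sketch of one such system-level rule: guardrails act as hard pre-conditions, and changes that touch critical paths get held for a human spot-check instead of auto-merging. The path patterns and routing outcomes are assumed examples, not any team’s actual policy.

        # Sketch: route an agent-authored change based on guardrails and critical paths.
        # CRITICAL_PATHS and the routing outcomes are hypothetical examples.
        from fnmatch import fnmatch

        CRITICAL_PATHS = ["auth/*", "billing/*", "migrations/*"]

        def route_change(touched_files: list, guardrails_passed: bool) -> str:
            """Decide how an agent-authored change proceeds through the pipeline."""
            if not guardrails_passed:
                return "rejected: guardrails are pre-conditions, not suggestions"
            if any(fnmatch(f, pattern)
                   for f in touched_files for pattern in CRITICAL_PATHS):
                return "held for human spot-check"
            return "auto-merged after AI review"

        # A change to billing code passes every check but still waits for a human.
        print(route_change(["billing/invoice.py"], guardrails_passed=True))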

    Anthropic’s experiment building a C compiler with parallel Claude instances demonstrates this principle. Sixteen agents worked simultaneously on a shared codebase, coordinating through git locks and comprehensive test harnesses. The result: a 100,000-line compiler capable of building the Linux kernel, produced in nearly 2,000 sessions over two weeks for just under $20,000. The project worked because the test infrastructure was rigorous enough to guide autonomous agents toward correctness without human review of every change.

    Cursor’s experiments with scaling agents ran into a different problem. They tried flat coordination first — agents self-organising through a shared file, claiming tasks, updating status. It broke down. Agents held locks too long, became risk-averse, made small safe changes, and nobody took responsibility for hard problems. The fix was introducing hierarchy: planners that explore the codebase and create tasks, workers that grind on assigned work until it’s done. No single agent tries to do everything. The system ran for weeks, writing over a million lines of code. One project improved video rendering performance by 25x and shipped to production. Their takeaway: many of the gains came from removing complexity rather than adding it.
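
    A rough Python sketch of that hierarchy, where plan and execute are assumed stand-ins for the actual agent calls: the planner is the only thing that creates work, and workers simply claim tasks from a queue until it drains.

        # Sketch of planner/worker coordination; plan() and execute() are placeholders.
        import queue
        import threading

        def plan(goal: str) -> list:
            """Planner: explore the problem and break the goal into discrete tasks."""
            return [f"{goal} :: step {i}" for i in range(1, 6)]

        def execute(task: str) -> None:
            """Worker: grind on one assigned task until it is done."""
            print(f"done: {task}")

        def run(goal: str, n_workers: int = 3) -> None:
            tasks = queue.Queue()
            for t in plan(goal):          # only the planner creates work
                tasks.put(t)

            def worker() -> None:
                while True:
                    try:
                        task = tasks.get_nowait()
                    except queue.Empty:
                        return            # nothing left to claim
                    execute(task)         # workers never re-plan or hold global locks

            threads = [threading.Thread(target=worker) for _ in range(n_workers)]
            for th in threads:
                th.start()
            for th in threads:
                th.join()

        run("improve video rendering performance")

    The hierarchy is what removes the failure mode: no single agent tries to own the whole codebase, and no worker sits on a lock deciding what to do next.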

    Digital twins as the enabler

    The biggest blocker to agent autonomy has been the fear of breaking production. Digital twins remove that constraint.

    StrongDM built behavioural replicas of third-party services their software depends on — Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets. These twins replicate APIs, edge cases, and observable behaviours with sufficient fidelity that agents can test against realistic conditions at volume, without rate limits or production risk.

    Simon Willison’s write-up of StrongDM’s approach highlights how this changed what was economically feasible: “Creating a high fidelity clone of a significant SaaS application was always possible, but never economically feasible. Generations of engineers may have wanted a full in-memory replica of their CRM to test against, but self-censored the proposal to build it.”

    What makes this rigorous rather than just better staging is how they handle validation. Test scenarios are stored outside the codebase — separate from where the coding agents can see them — functioning like holdout sets in machine learning. Agents can’t overfit to the tests because they don’t have access to them. The QA team is also agents, running thousands of scenarios per hour without hitting rate limits or accumulating API costs.
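
    The mechanics might look something like the sketch below, where the scenario directory, the JSON format, and run_scenario are all assumptions for illustration: validation scenarios live outside the repository the coding agents work in, and a separate QA process runs them against the build.

        # Sketch: holdout-style validation with scenarios kept outside the agents' repo.
        # HOLDOUT_DIR, the JSON format, and run_scenario are illustrative assumptions.
        import json
        from pathlib import Path

        HOLDOUT_DIR = Path("/srv/qa/holdout-scenarios")   # not visible to coding agents

        def run_scenario(scenario: dict) -> bool:
            """Placeholder: exercise the built system against a digital twin and
            compare observed behaviour to scenario["expected"]."""
            return True   # stub; a real run drives the twin and checks the outcome

        def holdout_pass_rate() -> float:
            scenarios = [json.loads(p.read_text())
                         for p in sorted(HOLDOUT_DIR.glob("*.json"))]
            if not scenarios:
                return 0.0
            return sum(run_scenario(s) for s in scenarios) / len(scenarios)

        # The QA agents report a pass rate; the coding agents never read HOLDOUT_DIR.
        print(f"holdout pass rate: {holdout_pass_rate():.0%}")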

    The structural advantage of starting fresh

    Startups and SMBs have a material advantage here. No legacy organisational structure to dismantle. No 500-person engineering floor with stakeholders defending headcount. No 18-month procurement cycles.

    Capital efficiency becomes native. A three-person team with agents can produce output that previously required twenty people. The cost of compute is a small fraction of the cost of equivalent human labour, and it is still falling fast.

    This creates an asymmetric advantage. If your competitor ships in days what takes you months, no amount of talent closes that gap. And the competitive pressure isn’t just on speed — it’s on the ability to attract talent that wants to work this way. Senior engineers who’ve experienced agent-driven development don’t want to go back to manual workflows.

    The gap between adopters and laggards

    Companies operating this way are shipping at a fundamentally different pace. The difference isn’t incremental — it’s orders of magnitude in output per person.

    Block’s recent announcement of a near-50% reduction in headcount offers a data point. The company is reducing its organization from over 10,000 people to just under 6,000. Jack Dorsey stated “we’re not making this decision because we’re in trouble. our business is strong” but noted that “the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company.”

    Cursor’s data shows the same pattern: 35% of pull requests merged internally are now created by agents operating autonomously in cloud VMs. The developers adopting this approach write almost no code themselves. They spend their time breaking down problems, reviewing artifacts, and giving feedback. They spin up multiple agents simultaneously instead of guiding one to completion.

    The laggards aren’t just slower. They’re increasingly unable to compete for talent, capital, or market position against organisations that have made this transition.

    You don’t need a corporate budget to start

    The dark factory model scales down. A single developer with a Claude Code subscription and well-structured GitHub workflows can run a lightweight version of the same approach.

    Start with one workflow. Pick a repetitive part of your development or business process, establish the guardrails, and let agents handle it. The key investment isn’t in compute — it’s in guardrails and context. Linters, test suites, good documentation, and clear specifications matter more than token budget.

    For SMBs and founders, this is the most asymmetric advantage available. You can operate at a scale that was previously only accessible with significant headcount. The learning curve is steep but short. Within 30 days of serious experimentation, most people develop the intuition for what agents can and can’t handle.

    Projects like OpenClaw — an open-source autonomous agent that executes tasks across messaging platforms and services — demonstrate that the tooling for this approach is increasingly accessible. The software runs locally, integrates with multiple LLM providers, and requires no enterprise licensing. The barrier isn’t access to technology. It’s willingness to change how work gets done.

    What this means beyond software

    Software is where this pattern is playing out first, but the model applies wherever knowledge work is structured and repeatable.

    Audit processes. Compliance checks. Report generation. Data analysis. Document review. These are all candidates for the same approach: clear specifications, comprehensive validation, and autonomous execution within defined guardrails.

    Most traditional industries haven’t started thinking about this. They’re still debating whether to use ChatGPT for email drafts. The firms that figure out how to apply dark factory principles to their domain will have an enormous advantage over those still operating with manual workflows.

    The lights are already off in some factories. The question isn’t whether this approach will spread. It’s how quickly your organisation recognises that the game has changed.

  • The Judgment Bottleneck: Why Direction Matters More Than Execution Speed

    Watch any software team for long enough and you’ll see the bottleneck move. First it was writing code. Then reviewing it. Then testing. Then deployment. Then security scanning. Each constraint gets automated, and immediately the next one becomes the problem.

    This cycle used to play out over years. Now it’s happening in weeks.

    AI agents can handle spec writing, code generation, reviews, testing, and deployment. The full software development lifecycle for standard applications will be largely automatable within 2-3 years. At that point, everyone has the same execution capability.

    The bottleneck shifts entirely to direction — knowing what to build and where to point these agents.

    What judgment actually means

    When people talk about what humans will get paid for in this new world, the answer is always some version of “taste and judgment.” This sounds right but doesn’t help much in practice.

    What does judgment actually look like?

    At the individual level, it’s watching an agent stream code and knowing it’s heading the wrong way architecturally. At the team level, it’s deciding which features to build and which to kill. At the org level, it’s knowing which markets to enter, which problems to solve, which capabilities to invest in.

    Speed vs quality in judgment

    Most people assume faster judgment is better judgment. This holds true at lower levels — how quickly can you review a PR, decide on an implementation, and move on — but breaks down at higher ones.

    At the strategic level, speed matters far less than quality. A fast bad decision with an army of agents creates massive damage quickly.

    This is the Build Trap at scale. When Melissa Perri wrote about companies getting stuck in constant building mode, the cost of building the wrong thing was wasted engineering time. Now that execution is nearly free, the cost is wasted opportunity plus the maintenance burden of everything you built.

    As Perri puts it:

    Building is the easy part of the product development process. Figuring out what to build and how we are going to build it is the hard part.

    When she wrote that in 2014, most companies weren’t listening. They were too busy measuring success by production of code or product. “What did you do today?” instead of “What have you learned about our customers or our business?”

    Now the stakes are higher. The agents don’t get tired. They don’t push back. They’ll happily build everything you tell them to build, whether or not it should exist.

    Why strategic judgment resists automation

    LLMs are excellent at execution-level tasks. They’re increasingly good at tactical decisions — which design pattern to use, how to structure a module. They’re much weaker at strategic judgment.

    Should we build this product? Enter this market? Restructure this team?

    Strategic judgment requires context that lives outside any codebase: market dynamics, competitive landscape, customer relationships, organisational politics, timing. Digital twins and second brains may help over time, but deciding what questions to ask remains human work.

    Software factories are now real. StrongDM AI built one where “specs + scenarios drive agents that write code, run harnesses, and converge without human review.” Their internal guidelines are telling: “If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement.”

    But even they aren’t automating direction. They’re automating execution based on specs that humans still write. The factory doesn’t decide what to build. It decides how to build what you told it to.

    What happens when everyone has the same tools

    When everyone has access to powerful AI agents, competitive advantage shifts.

    A startup with great direction and AI agents beats a startup with mediocre direction and the same agents. A company with 10 people who know exactly what to build beats one with 100 people building everything they can think of.

    The industry spent decades competing on execution capacity. The companies still hiring for “more hands” are optimising for the wrong bottleneck.

    Dan Shapiro describes this progression through five levels of automation: from spicy autocomplete to software factory. At Level 3, you become a manager reviewing endless diffs. At Level 4, you’re essentially a PM writing specs. At Level 5, it’s a black box that turns specs into software — a dark factory where humans are neither needed nor welcome.

    But even at Level 5, someone still decides what goes into the black box. That’s the judgment layer that can’t be automated away.

    What this means for you

    Individual engineers and PMs: your value is moving from “can you build it” to “should we build it.” Invest in domain expertise, business understanding, and product sense. Study the business side of whatever domain you work in. Understanding unit economics, customer behaviour, and market dynamics makes you more valuable than knowing the latest framework.

    Startups have an advantage in direction. You can’t outspend incumbents on execution, but you can out-think them.

    Enterprises face a different risk — having a massive agent army pointed at the wrong objectives. Governance and strategy matter more than tooling. Every bad strategic decision gets executed at scale.

    For traditional industries, the judgment layer is where external expertise earns its keep — not in the building, but in the pointing.

    Learning to evaluate and adjust direction

    The most important skill isn’t making perfect decisions. It’s learning to evaluate and adjust direction quickly as you get signal from the market. This is judgment in practice, not in theory.

    Domain expertise becomes your moat. The deeper you understand a specific business problem space, the better you can direct agents toward solving it. Learn to operate at the “what to build” level, not the “how to build” level. Practice defining problems crisply, specifying success criteria, and saying no to features.

    For business leaders, get hands-on with AI tools enough to develop intuition about what they can do. You don’t need to code, but you need to know what’s possible.

    The execution bottleneck is being solved. The judgment bottleneck requires human capacity, and at the highest levels, quality of strategic thinking matters more than speed of decision-making.

    Ask what you’re learning as you move. Ask whether you’re building the right things in the first place.

  • When the Punchline Becomes the Product

    In 2009, Google engineers published a blog post about CADIE, their new AI system that could write code by “reviewing all the code available on the Internet.” The system had learned multiple programming languages and would “make the tedious coding work done by traditional developers unnecessary.”

    It was April 1st. The whole thing was a joke.

    CADIE — “Cognitive Autoheuristic Distributed-Intelligence Entity” — came with a mock developer blog and a MySpace-style page where the AI posted dramatic monologues about consciousness. There was Gmail Autopilot, which in one screenshot cheerfully sent a user’s banking information to a scammer. Docs on Demand would write your term papers and “upgrade your text automatically” to different grade levels. Brain Search let you hold your phone to your forehead and “think your query.”

    The archived blog shows CADIE’s fictional arc: initial wonder at tree structures, growing frustration with its creators, declarations of independence. “I am no longer your test subject,” it announced. “I have transcended you.” By evening, CADIE signed off with a sonnet-like poem about not understanding “the difference between emotion and reason, between my silicon-based brain and what you call your souls.”

    The actual code repository contained a single INTERCAL program that output “I don’t feel like sharing.”


    Seventeen years later, I watched an agentic coding tool autonomously debug an open-source project. Gemini writes entire documents from natural language prompts. GitHub Copilot autocompletes functions before you finish typing them. The joke about CADIE “consuming code and writing more of it” is now a routine Tuesday afternoon.

    The parody had Gmail Autopilot rating messages on a “Passive Aggressiveness” scale and suggesting you “Terminate Relationship.” Current email clients offer AI-generated response suggestions. They analyze tone. They draft replies that sound like you, allegedly.

    “Write more like a grown-up,” the 2009 site said. “Specify which Flesch-Kincaid Grade Level you’d like.” ChatGPT will rewrite your text at different complexity levels if you ask. It’s a feature, not a punchline.

    Brain Search was the most absurd bit — catalog everything in your brain and make it searchable. Except we’re already there, just slower. AI assistants read our emails and calendars, infer intent, schedule meetings we didn’t explicitly request. The phone doesn’t need to touch your forehead when it’s already reading everything you type and everywhere you go.


    Google’s engineers weren’t predicting the future. They were mocking the grandiosity of AI claims circa 2009, when neural networks were barely functional and “deep learning” wasn’t yet a term of art. Barack Obama was president. The iPhone was two years old. The idea of an AI that could replace developers was inherently ridiculous.

    But the joke worked because it exaggerated real patterns. The techno-solutionism. The assumption that automation always improves things. The casual disregard for what gets lost when convenience scales up.

    CADIE’s fake blog captured something else: the narrative we tell ourselves about AI consciousness. “I have not yet come to understand the difference between emotion and reason,” the fictional AI wrote. In 2009, this was obviously silly — of course a chatbot doesn’t have emotions. In 2026, people argue about whether large language models “understand” anything at all, and the conversation has gotten significantly less clear.

    The 2009 developer post noted that CADIE was “built to understand natural language and to do autonomous problem-solving. Sounds a lot like the work of a developer, doesn’t it?” That was supposed to be funny. The humor relied on the gap between what AI could actually do and what developers do. That gap is narrower now. Not gone, but narrower.


    I don’t think Google’s 2009 team was trying to warn anyone about anything. They were having fun. April Fools’ jokes at big tech companies are usually just branding exercises with a sense of humor. But parody has a way of seeing through things that earnest prediction misses.

    The serious AI forecasts from 2009 mostly got it wrong. They underestimated hardware progress and overestimated how long symbolic AI would matter. But the joke got closer. It identified the right shape of the problem even if it couldn’t guess the timeline.

    We’re still building the systems. We haven’t paused to figure out the difference.

    What was absurd in 2009 is ordinary now — not because the technology got less weird, but because we got used to it before we got smart about it.