Tag: nanoclaw

  • What it takes to actually run NanoClaw

    NanoClaw is the structurally right framework for pipeline-shaped agent workloads. It’s also genuinely more technical to set up than most personal-assistant frameworks people compare it to. If you’re evaluating it after reading the comparison piece and wondering what you’re signing up for, this is the honest answer.

One thing worth stating up front: NanoClaw is not built for non-technical users, and neither is OpenClaw, despite its more polished onboarding. The marketing on both sites pitches “personal AI assistant for everyone.” The reality is different. NanoClaw expects comfort with git, the command line, Docker, and at least basic Linux administration. What you get in exchange is Claude Code as the authoring layer for your fork — arguably the most capable AI coding tool available right now, and meaningfully more capable than the typical models you’d be running underneath OpenClaw. The framework is built around that capability difference rather than trying to abstract it away.

    The architecture is right. The setup curve is real. Below is what actually bites.

    You need a Claude Code subscription

This isn’t a soft dependency. NanoClaw is built around Claude Code as the authoring layer — the slash commands that install channels and providers (/add-telegram, /add-opencode, /add-codex and so on) run inside Claude Code and copy source files into your fork from long-lived branches. You can technically edit the same files by hand, but you’d be reverse-engineering what those slash commands do every time you customize.

    Practically: a Claude Code Pro or Max subscription is the working assumption. Without it, you’re not really running NanoClaw the way it’s designed to run. With it, the authoring experience is the best part of the framework — the codebase is small enough that Claude Code can confidently make changes across it, and the fork-as-install model means every customization is a code change you can read and revert.

    This also constrains who NanoClaw is for. If you’re allergic to Claude Code (philosophically, financially, or because you prefer Codex or another harness as your primary), you’ll fight the framework. If you’re already deep in Claude Code, the integration is genuinely tight.

    Codex works as a fallback authoring layer for individual tasks, and the /add-codex skill makes Codex available as an agent provider (separate from authoring). But the slash-command-based setup expects Claude Code as the primary harness. Plan around that.

    OneCLI is part of the deal

    NanoClaw doesn’t manage your API keys directly. That job is delegated to OneCLI, the companion credential proxy that ships alongside it. Agents inside containers never see raw API keys; they make outbound HTTPS requests through OneCLI, which injects credentials at the proxy layer based on per-agent policies.

    This matters in practice for two reasons. First, agents inside NanoClaw containers have bash access — anything that put an API key directly in the container would be reachable by any code the agent runs. OneCLI keeps that surface clean. Second, you’ll spend real time during setup configuring OneCLI: registering your Anthropic credential, creating per-agent secret assignments, deciding whether each agent gets all secrets or a specific subset. The nanoclaw.sh install script handles the basics, but ongoing changes (adding a new provider, rotating keys, scoping a credential to one agent) involve OneCLI commands rather than editing config files.

    It’s worth understanding before you start. Treat OneCLI as a meaningful piece of the system, not a one-time setup chore that disappears after install.

    There’s no web UI out of the box

    NanoClaw ships the channel and agent runtime. It doesn’t ship an operator console. There’s no dashboard for browsing agent activity, no log viewer, no chat history UI, no admin panel, no menubar app. The framework’s stance is that you talk to your agent through a messaging channel — Telegram, Slack, Discord, WhatsApp, whatever you’ve installed — and that’s the interface.

OpenClaw, by comparison, has a guided openclaw onboard CLI for setup and a Companion App (Beta) on macOS that adds a menubar interface. So if you’re coming from OpenClaw expecting some kind of UI affordance out of the box, NanoClaw will feel deliberately bare.

    For an assistant, the chat-channel-only approach is fine. The channel is the interface.

    For a pipeline, it’s not enough. Pipelines need state-of-everything views: which prospects are in which stages, which agents are working on what, what’s pending operator review, what’s been dead-lettered. None of that is conversational. You need a UI.

    The options are real but each has a cost:

    Build a custom web UI as a NanoClaw skill. A small Express or similar server inside a skill that exposes a chat-plus-dashboard interface, talks to the agent through the same task contract NanoClaw uses elsewhere, and serves over tailscale serve so it’s only reachable on your tailnet. Takes a day to build. You control the UX completely. You can mount per-agent dashboards next to the chat thread. No third party between you and your operator interface. This is the version I keep coming back to.

    Use a messaging channel as the operator interface. Telegram is fastest to bring up — bot via BotFather, token in five minutes. Discord and Slack work too. The trade is that pipeline state is awkward to display in a chat thread, and you end up either composing structured messages (clunky) or building dashboards anyway (defeats the purpose).

    Lean on the underlying systems for state visibility. SQLite for the artifact and journal storage means you can run ad-hoc queries against it. docker logs for container-level activity. journalctl --user for systemd-level service logs. This works for debugging and post-hoc analysis. It doesn’t work as a real-time operator surface.

    In practice, you’ll mix all three. The custom web UI is the primary operator console, channels handle quick-access from your phone, and you use the underlying tooling when something goes weird and you need to dig.
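The skill-hosted console from the first option can be sketched with Node's built-in http module; no Express needed for a first cut. Everything here is illustrative (the route, port, and stage-count shape are assumptions, not part of NanoClaw): a pure renderer plus a loopback-only server that tailscale serve fronts on the tailnet.

```typescript
import { createServer } from "node:http";

// Illustrative shape of pipeline state; the real orchestrator defines its own.
type StageCounts = Record<string, number>;

// Pure renderer, kept separate from the server so it is trivially testable.
export function renderDashboard(stages: StageCounts): string {
  const rows = Object.entries(stages)
    .map(([stage, n]) => `<tr><td>${stage}</td><td>${n}</td></tr>`)
    .join("");
  return `<table><tr><th>stage</th><th>count</th></tr>${rows}</table>`;
}

// Bind to loopback only; `tailscale serve` fronts this on the tailnet.
export function startConsole(getState: () => StageCounts, port = 8787) {
  return createServer((req, res) => {
    if (req.url === "/") {
      res.writeHead(200, { "content-type": "text/html" });
      res.end(renderDashboard(getState()));
    } else {
      res.writeHead(404);
      res.end();
    }
  }).listen(port, "127.0.0.1");
}
```

Splitting the renderer from the server keeps the dashboard testable and leaves room to swap in a real template later without touching the serving code.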

    Setup gotchas on a small VPS

    NanoClaw runs comfortably on a 2GB DigitalOcean droplet (or equivalent). The hosting cost is a few dollars a month. The friction comes from minimal cloud images being stripped down enough that several setup steps fail in non-obvious ways.

    The base image doesn’t ship with a C compiler. Several modules in the dependency tree build native bindings during pnpm install and fail with generic “command failed” errors that don’t tell you the compiler is missing. Install build tools before the first install:

    sudo apt update
    sudo apt install -y build-essential acl

    The acl package is also missing from minimal images and you’ll need it for the Docker socket fix below.

    The Docker socket ACL doesn’t survive reboot. NanoClaw runs agent containers via Docker. By default, only root can talk to the Docker socket. Adding your operator user to the docker group works but is broadly equivalent to giving that user root, which is not what you want.

    The cleaner approach is an ACL grant on /var/run/docker.sock. The catch: /var/run is a tmpfs mount, recreated on every boot. Anything you setfacl once is wiped on reboot. The fix is a tmpfiles.d rule that recreates the ACL automatically. Create /etc/tmpfiles.d/docker.conf with:

    a+ /var/run/docker.sock - - - - u:youruser:rw

    Replace youruser with the actual operator username. Test with sudo systemd-tmpfiles --create and verify with getfacl /var/run/docker.sock. Reboots no longer break Docker access for the operator account.

    Two systemd services, not one. Run NanoClaw and your custom orchestrator as separate systemd user services. When you’re iterating on the orchestrator (which you will, often, especially in early development), restarting it shouldn’t take the channel adapters down. Channel reconnects are slow and annoying; orchestrator restarts should be near-instant.

    A reasonable layout:

    ~/.config/systemd/user/nanoclaw.service
    ~/.config/systemd/user/orchestrator.service

    If you want either service to start on boot before you log in, enable lingering for the user with sudo loginctl enable-linger youruser. Easy to forget; non-obvious failure mode (services don’t start, you don’t know why, you log in, they magically work).
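A sketch of what the NanoClaw unit might contain. The ExecStart path is a placeholder for however your fork launches, not a real NanoClaw entry point; the orchestrator unit is the same shape with its own ExecStart:

```ini
# ~/.config/systemd/user/nanoclaw.service (paths illustrative)
[Unit]
Description=NanoClaw channel and agent runtime
After=network-online.target

[Service]
WorkingDirectory=%h/nanoclaw
ExecStart=%h/nanoclaw/start.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable each with systemctl --user enable --now nanoclaw.service (and the orchestrator equivalent); with lingering on, both come up at boot without a login.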

Add swap. A 2GB droplet doesn’t ship with swap configured. Under heavy LLM-context loads — long context windows plus large augmentation tasks — the host can OOM unexpectedly. A 2GB swap file is cheap insurance:

    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    Set vm.swappiness=10 in /etc/sysctl.conf so the kernel prefers RAM and only swaps under genuine pressure. Reboot to verify.
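Concretely, the persistent setting is one line appended to /etc/sysctl.conf:

```ini
# /etc/sysctl.conf: prefer RAM, swap only under genuine pressure
vm.swappiness = 10
```

Running sudo sysctl -p applies it without waiting for the reboot; cat /proc/sys/vm/swappiness confirms the live value.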

    What stays on the laptop, what goes on the VPS

    The local-versus-VPS question resolves cleanly:

    • A laptop is fine for the install rehearsal, fork setup, and a couple of agents you only use while at the keyboard.
    • Anything that needs to be reachable, scheduled, or running while you’re not at the keyboard belongs on the VPS.

    The cost difference between 1GB and 2GB on DigitalOcean is a few dollars a month, and the difference in headroom is between fighting the host and forgetting about it. Take the 2GB. The marginal saving on a 1GB droplet is not worth the time you’ll spend wondering why builds are failing or why the agent container is OOM’ing.

    Honest scope of “easy”

NanoClaw is technically simpler than OpenClaw — fewer lines of code, fewer abstractions, fewer hidden behaviours. It’s not operationally simpler. The framework expects you to:

    • Have a Claude Code subscription and use it as the authoring layer
    • Be comfortable with the Linux command line, systemd, Docker, git
    • Build your own operator UI if you want one
    • Write your own orchestrator if you’re doing pipeline-shaped work

    For someone who already operates in this stack, NanoClaw feels light and clean — and the Claude Code authoring layer is genuinely the best part. The codebase is small enough that asking Claude Code to make changes across it works reliably, which is a meaningfully better experience than the typical “edit config files, hope you got it right, debug when you didn’t” pattern.

For someone hoping for a one-click personal assistant, the curve is meaningfully steeper than OpenClaw’s onboarding. OpenClaw has a guided CLI (openclaw onboard) and a macOS Companion App that gives you a menubar interface; NanoClaw deliberately ships none of that. Both still expect a technical user underneath, but OpenClaw lowers the floor more.

    The trade is real and the trade is good if your use case justifies it. You end up with a system you understand end to end, that runs in resources you control, that doesn’t depend on a SaaS gateway, and that you can reason about when something breaks. Worth the lift if you’re building something pipeline-shaped. Not worth the lift if you just want a chatbot.

    A useful concrete reference point: Singapore’s Foreign Minister, Vivian Balakrishnan, published the architecture for his own NanoClaw-based “second brain” setup, with an accompanying X post walking through the composition. He’s technically literate — coding is a known hobby of his — but not a software engineer by trade. His setup composes NanoClaw with a few other open-source pieces (a memory layer, OneCLI for credentials, the LLM Wiki pattern for knowledge synthesis) and runs on a Raspberry Pi. It’s a useful existence proof of “technical-but-not-developer” being the floor for NanoClaw, and equally a useful caution: Vivian could compose those pieces because of fluency he already had. Anyone reading this without that fluency yet would need to pick it up first. The reward is real, and so is the prerequisite.

The full GTM system this deployment serves is in Building a GTM dark factory with Nemotron 3 and NanoClaw. The framework comparison that motivates picking NanoClaw in the first place is in Why I picked NanoClaw over OpenClaw for a GTM pipeline.

  • Why I picked NanoClaw over OpenClaw for a GTM pipeline

    Before getting into the comparison itself, one piece of context worth setting straight: neither OpenClaw nor NanoClaw is built for a non-technical audience. Both expect comfort with the command line, git, and at least one model provider’s API setup. Both reward fluency with the underlying stack. The marketing copy on both sites pitches “personal AI assistant for everyone,” which is aspirational. The reality today is that you need to know what pnpm install does and roughly what a Docker container is to get either one running smoothly.

    That said, the two frameworks make different trade-offs within that technical-user space, and which trade-off is right for you depends on what you’re actually building.

OpenClaw is the more monolithic, more featureful option. It ships a guided CLI onboarding (openclaw onboard), supports multiple LLM providers natively (Anthropic, OpenAI, local), has a Companion App for macOS that gives you a menubar interface, and includes browser control, persistent memory, and dozens of community-built skills out of the box. The trade is operational complexity — ~434,000 lines of code, 70+ dependencies, single Node process with shared memory — and a security model that relies on application-level checks rather than OS isolation. Recent CVEs and security writeups in this space have mostly been OpenClaw-shaped.

    NanoClaw is the lighter, more opinionated alternative. ~3,900 lines of code, fewer than 10 dependencies, agents in isolated Linux containers with explicit mounts, single host process orchestrating per-session containers. Credential handling is delegated to OneCLI (NanoClaw’s companion credential proxy), which injects API keys at request time so agents never hold raw secrets — meaningful when an agent has bash access inside its container. The trade is that NanoClaw is built natively around the Claude Agent SDK — Claude Code is the primary harness, the slash commands that install channels and providers run inside Claude Code, and other providers (Codex, OpenCode, Ollama) are drop-in alternatives rather than peers. There’s no menubar app, no built-in dashboard, no UI beyond the chat channels you’ve installed. The codebase is small enough that “ask Claude Code to walk you through it” is a realistic onboarding strategy.

For a personal-assistant use case, the OpenClaw trade-off probably wins. More features out of the box, more flexibility on providers, easier to bring up if you’re not already deep in the Claude Code ecosystem. For a pipeline-shaped workload — GTM, document processing, anything where the workflow exists independent of conversation — NanoClaw is structurally a better fit, and Claude Code being the assumed harness is an advantage rather than a constraint, because Claude Code is arguably the most capable AI authoring tool available right now and the framework is built around it.

    I went through both before settling. Here’s the rest of the comparison through the pipeline-shape lens.

    The shape of the workload matters

    A personal assistant is reactive. You send it something. It figures out what you meant, picks a tool, runs the tool, replies. The workflow is whatever the conversation is.

    A pipeline is the opposite. There’s a state machine. There are stages. Each prospect, ticket, document, or whatever the unit is moves through stages on its own clock. Some get stuck. Some get rerouted. Some need to be remembered six months later when a specific signal lights up. The workflow exists independent of any conversation.

These two workloads want different things from a framework. The assistant wants flexibility, channels, plug-and-play tools, an LLM that figures out what to do. The pipeline wants deterministic routing between stages, dry-run capability, and an LLM that does bounded judgment work inside a stage.

    This is the lens that matters. Most framework comparisons are feature bake-offs. The actual question is which workload shape you’re building.

    Three things that didn’t survive OpenClaw for me

Routing. OpenClaw’s agent picks what to do based on the inbound and its own reasoning. That’s the right model for “summarise my inbox” and the wrong model for “transition prospect ABC from awaiting-reply to unresponsive after 14 days.” The second decision has to be deterministic, replayable, dry-runnable, and outside the LLM. Tool-call routing is fine when the cost of a wrong decision is small. In a GTM pipeline a wrong routing decision is a duplicate touch, a wrong segment, a compliance breach.

    You can wire OpenClaw to do deterministic routing — through skill conditions, scheduled triggers, scripted control flow — but you’re working against the framework’s grain. Every hour spent there is an hour reinventing what a state machine engine gives you for free.

    Per-skill model preference. Pipelines benefit from heterogeneity. Small fast models for bulk discovery and augmentation. Larger models for content polish. Different providers for redundancy. OpenClaw supports multiple LLM backends as a first-class feature — you can configure Anthropic, OpenAI, or local models — but the routing decisions are made within the agent’s own reasoning rather than at the framework level. For a pipeline you want the framework to route deterministically based on skill family, not let the agent pick its own provider per call.

    NanoClaw’s approach is the opposite: provider is configured per agent group, one provider per group, multiple groups in parallel. That maps directly to “discovery and augmentation in one group on Nemotron, polish in another group on Claude.” Per-task provider hints would be cleaner, but group-level routing is what works today, and for most pipelines it’s enough because the natural skill boundaries align with provider preferences anyway.
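As a sketch, group-level provider routing reduces to a static map the orchestrator consults at dispatch time. Group names and provider labels below are illustrative, not NanoClaw configuration:

```typescript
// Illustrative: one provider per agent group, chosen by skill family.
type Provider = "nemotron" | "claude";

const groupProvider: Record<string, Provider> = {
  discovery: "nemotron",    // bulk research
  augmentation: "nemotron", // profile synthesis at volume
  polish: "claude",         // prose a human will actually read
};

// The orchestrator resolves the group (and so the provider) before dispatch;
// the agent never picks its own backend per call.
export function providerFor(group: string): Provider {
  const p = groupProvider[group];
  if (!p) throw new Error(`no provider configured for group ${group}`);
  return p;
}
```

Keeping this a plain map is the point: the routing decision is inspectable and diff-able, and a wrong mapping is a one-line code review rather than a buried agent choice.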

    Operating cost. OpenClaw runs a websocket gateway with constant background activity. mDNS service discovery, periodic health probes, channel reconnect loops. On a 1GB droplet it spent most of its capacity on its own metabolism. Bumping the VPS works, but the symptom is telling.

    NanoClaw is much quieter at idle. The host process owns message queues, agent containers spin up per task, channels are explicit and minimal. A 2GB droplet has plenty of headroom for a working pipeline plus orchestrator plus operator UI.

    What NanoClaw doesn’t do, and why that’s useful

    NanoClaw has no built-in orchestrator. No state machine engine. No artifact store. No journal writer. No skill dispatcher. No dry-run harness. No business logic of any kind.

    For an assistant, this is missing functionality. For a pipeline, it’s the right scope.

    The orchestrator is the part that’s specific to your workflow. State transitions, when to retry, when to dead-letter, what counts as completion, what triggers the next stage. Building it as plain code (in any language; mine is TypeScript) means it stays readable, testable, and replaceable. NanoClaw runs the channel adapters and the agent containers. The orchestrator runs the workflow. They talk through structured task contracts.

    The trade is real: you write more code to start. The benefit is real: you understand and own every line of the pipeline that matters.
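The dispatch-side half of that trade is small. Here is a sketch of the deterministic retry and dead-letter split, with an illustrative attempt budget; nothing in it is NanoClaw API, just plain orchestrator code:

```typescript
// Illustrative sketch of deterministic retry vs dead-letter handling.
type TaskResult = { ok: boolean; detail?: string };

interface Task {
  id: string;
  attempts: number;
}

const MAX_ATTEMPTS = 3; // illustrative retry budget

export function routeResult(
  task: Task,
  result: TaskResult,
  queues: { retry: Task[]; deadLetter: Task[]; done: string[] },
): void {
  if (result.ok) {
    queues.done.push(task.id);
  } else if (task.attempts + 1 < MAX_ATTEMPTS) {
    // Deterministic retry: same task, bumped counter, never a re-prompt loop.
    queues.retry.push({ ...task, attempts: task.attempts + 1 });
  } else {
    // Out of budget: park it for the operator rather than looping.
    queues.deadLetter.push(task);
  }
}
```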

    What both share

The skills system. Both frameworks treat skills as SKILL.md markdown files that the agent reads and executes. The same skills can technically run on either framework with minor adjustments, though the agent configuration files differ — OpenClaw uses SOUL.md for agent personality and config, NanoClaw uses CLAUDE.md for the same purpose. So you’re not locked into a framework by your skills library — you’re picking the framework that runs them at the right architectural layer.

Both also lean on Claude Code as a useful authoring layer, though the relationship differs. NanoClaw is explicit about it — the slash commands that install channels and providers run inside Claude Code and copy source files into your fork from long-lived branches. OpenClaw is more flexible: you can author with Claude Code, edit config files by hand, or use whatever AI coding tool you prefer, including the built-in agents. Either way, having Claude Code in the loop is the best authoring experience available right now — NanoClaw just treats it as the assumption while OpenClaw treats it as one option among several.

    The forking model

    NanoClaw’s other design choice worth flagging: it’s opinionated about you forking the repo and treating the fork as your install. There’s no config-as-data layer that abstracts away your customizations. If you want different behaviour, you change the code. The codebase is small enough that this is safe.

This is a discipline. It means every customization is a code change you can read and revert. It also means setup feels heavier than OpenClaw’s onboarding. For a pipeline you’ll be running for months, that’s the right trade. For a weekend assistant project, it’s overkill.

    The decision criteria, condensed

    Pick OpenClaw if:

    • You want a personal assistant that responds to messages on channels
    • The workflow is whatever the conversation is
    • You want maximum provider flexibility (Anthropic, OpenAI, local models all first-class)
    • You want a menubar app and guided onboarding out of the box
    • You’re fine with the larger codebase and application-level security model

    Pick NanoClaw if:

    • You’re building something with a state machine — pipeline-shaped, not chat-shaped
    • The workflow exists independent of any conversation
    • You need deterministic routing, dry-runs, replay
    • You want different providers for different stages, configured per agent group
    • You’re deep enough in Claude Code to leverage it as the authoring layer
    • You want OS-level container isolation as your security model
    • You’re willing to write the orchestrator yourself (and would rather, because you want to own the workflow logic)

    Worth knowing

    NanoClaw is younger and more spartan around setup edges — both because it does less by design and because the project is moving fast. If you hit a setup gotcha, the answer is usually in the docs and a quick edit by Claude Code resolves it. Filing an issue and waiting is the slower path. The flip side: the codebase is small enough that you can read all of it, and Claude Code can confidently make changes across it.

OpenClaw has the larger community, more channel adapters in stable shape, and a richer ecosystem of community skills (ClawHub, the OpenClaw skills marketplace, has hundreds). If you’re operating in personal-assistant territory, those network effects matter. For pipelines, they don’t.

    Worth flagging for context: OpenClaw’s creator, Peter Steinberger, joined OpenAI in February 2026, with the project continuing as open source. The project’s velocity has been impressive but the security model has also been the subject of multiple writeups — anyone evaluating it for production should read the security analyses alongside the marketing copy.

    The full GTM system this comparison feeds into is in Building a GTM dark factory with Nemotron 3 and NanoClaw. For setup specifics — what it takes to actually run NanoClaw end to end — see the companion piece.

  • Building a GTM dark factory with Nemotron 3 and NanoClaw


    Outbound has a failure mode anyone running a B2B pipeline has hit. Go wide and the response rates collapse, the domain gets filtered, the brand looks like every other vendor blasting templates. Go narrow and the volume can’t sustain a business. The middle path — per-prospect research, context-aware first touches, disciplined follow-ups — used to need an army of SDRs.

    What the system below builds toward is functionally an AI-native CRM with marketing automation, segmentation, and funnels. It’s the same business object SaaS stacks like HubSpot, Salesforce + Marketo, or Apollo + Outreach + Clay assemble from a dozen subscriptions and a small ops team. Traditionally that operation is human-fronted at every stage: defining segments, enriching records, writing sequences, reviewing replies, tuning the funnel. Tools speed each step but don’t change the shape. Humans are in every loop because the judgment work is theirs.

    The dark factory operating model changes that. GTM is unusually well-suited to it because it’s a closed-loop domain. Every action generates measurable feedback: opens, replies, meetings booked, deals closed, journal of what worked and what didn’t. That feedback is what lets skills earn autonomy on evidence rather than wishful thinking, graduating from copilot mode (operator approves each output) to dark factory mode (autonomous, with sampling and exception escalation). Volume goes up because agents work on more prospects in parallel than any human can. Consistency goes up because the contract on the wire enforces it. The operator’s role compresses from reviewing every output to reviewing what the journal flags.

    The building blocks are NanoClaw as the agent and channel runtime, Nemotron 3 Super as the bulk runtime model alongside Claude for polish, and Claude Code and Codex as the authoring layer. None of them is a CRM. Composed together, with a state machine and journal sitting above them, they become one.

    What the engine does

The engine takes a hypothesis (e.g. “healthcare companies publicly investing in compliance automation are good prospects”) and produces a queue of prospects with structured profiles, draft first-touches in a collab-partner voice, and context packs for the channels where execution stays manual (LinkedIn, anything high-touch). The operator reviews and approves drafts. Email goes out via Resend with proper deliverability hygiene. Replies route through an inbound webhook, get classified, and trigger state transitions. The journal records every decision with rationale, confidence, alternatives considered, and source evidence.

    Two things distinguish it from the standard funnel.

    The qualifying signal is behavioural rather than firmographic. “This company’s CEO talked publicly about scaling regulatory automation last quarter” beats “this company has 80 employees in three cities.” The second tells you a company exists. The first tells you something is happening there worth a conversation.

    Disqualification states are first class: not a fit, not now, unreachable, unresponsive, do not contact, conflict. None of these are fallbacks at the edge of the state machine. They’re destinations the orchestrator routes to deliberately. A prospect that hit “not now” with a specific signal six months ago is a different lead than one that’s been silent. The state machine has to remember the difference.
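A sketch of what first-class disqualification can look like in plain code. The state names follow the list above; the transition table itself is illustrative:

```typescript
// Disqualification states are destinations, not fallbacks.
type State =
  | "new" | "qualified" | "awaiting-reply"
  | "not-a-fit" | "not-now" | "unreachable"
  | "unresponsive" | "do-not-contact" | "conflict";

// Illustrative transition table; the orchestrator rejects anything not listed.
const transitions: Record<State, State[]> = {
  "new": ["qualified", "not-a-fit", "unreachable", "do-not-contact"],
  "qualified": ["awaiting-reply", "not-now", "conflict", "do-not-contact"],
  "awaiting-reply": ["qualified", "unresponsive", "not-now", "do-not-contact"],
  // "not-now" keeps a re-entry path: a remembered signal can revive the lead.
  "not-now": ["qualified"],
  "unresponsive": ["qualified"],
  "not-a-fit": [],
  "unreachable": [],
  "do-not-contact": [],
  "conflict": [],
};

export function canTransition(from: State, to: State): boolean {
  return transitions[from].includes(to);
}
```

The asymmetry is deliberate: not-now and unresponsive keep a path back to qualified, while do-not-contact is terminal by construction, which is exactly the "different lead" distinction the state machine has to remember.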

    Operator in the loop, then less of it

    The two-mode model deserves a closer look because it’s where the architecture earns its keep. Copilot and dark factory aren’t synonyms for “manual” and “automated.” They’re different relationships between the operator and the agent group. Copilot is the operator approving every output and using the journal to spot patterns. Dark factory is the operator sampling outputs, reading exception escalations, and trusting the rubric for the rest. Some skills move between them in weeks. Some never graduate. Drafting outbound to a high-value prospect is a copilot job forever. Augmenting an early-funnel profile from public sources isn’t.

    Claude Code and Codex sit on the operator side of this loop, not the agent side. They edit the orchestrator, write skills, debug runs, apply patches. The agents inside NanoClaw containers run the domain skills, not the authoring code. The operator stitches the two layers together until each carries more on its own.

    Why this architecture for a GTM pipeline

    The framework choice matters because pipelines aren’t assistants. I started on OpenClaw. It’s the more featureful framework on paper, with channels, providers, scheduled tasks, and a guided onboarding flow all in one package. The pitch is right for a personal assistant. You point it at your stuff, it runs.

    For a GTM pipeline it’s the wrong shape. OpenClaw’s agent picks what to do based on the inbound and its own reasoning. That’s the right model for “summarise my inbox” and the wrong model for “transition prospect ABC from awaiting-reply to unresponsive after 14 days.” The second decision has to be deterministic, replayable, dry-runnable, and outside the LLM. Tool-call routing is fine when the cost of a wrong decision is small. In a GTM pipeline a wrong routing decision is a duplicate touch, a wrong segment, a compliance breach.

    NanoClaw makes the opposite design choice. It does less. It runs the channel adapters, one container per agent group, and a host process that owns the message queues. Skills are markdown files mounted into containers. There’s no built-in orchestrator, no business logic, no opinion on your workflow. For an assistant that would be missing functionality. For a pipeline it’s the right scope for the bottom layer.

    The full stack: NanoClaw is the channel and agent runtime. A separate orchestrator (custom code) sits above it and owns the pipeline state machine. Claude Code or Codex sits next to all of it as the authoring layer. The operator sits on top, reviewing outputs, approving drafts, gradually handing off more as each skill earns it. (I’ve written more on the framework comparison itself for those evaluating the two.)

    The orchestrator is plain code. State machine engine, artifact store, journal writer, skill dispatcher, dry-run harness. It dispatches structured tasks to the agent’s inbound queue. The agent runs the skill in its container and writes a result back. The result has to carry, at minimum, what was found, why, how confident the agent is, the alternatives considered and rejected, and the evidence with sources. The orchestrator validates against that contract on read. Validation failure means deterministic retry or dead-letter, never a re-prompt loop. The agent is allowed to be uncertain. It’s not allowed to be silent about it.
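That minimum contract can be written down as a type plus a guard the orchestrator runs on read. Field names here are illustrative, not NanoClaw's actual wire format:

```typescript
// Illustrative minimum contract for an agent result.
interface AgentResult {
  finding: string;        // what was found
  rationale: string;      // why
  confidence: number;     // 0..1: the agent may be uncertain, not silent
  alternatives: string[]; // considered and rejected
  evidence: { claim: string; source: string }[];
}

// Validation on read: failure routes to retry or dead-letter, never a re-prompt.
export function validateResult(raw: unknown): AgentResult | null {
  const r = raw as Partial<AgentResult> | null;
  if (!r || typeof r.finding !== "string" || typeof r.rationale !== "string")
    return null;
  if (typeof r.confidence !== "number" || r.confidence < 0 || r.confidence > 1)
    return null;
  if (!Array.isArray(r.alternatives)) return null;
  if (
    !Array.isArray(r.evidence) ||
    !r.evidence.every(
      (e) => typeof e?.claim === "string" && typeof e?.source === "string",
    )
  )
    return null;
  return r as AgentResult;
}
```

Returning null instead of throwing or re-prompting is what lets the caller route deterministically: a failed validation is data, and the retry/dead-letter decision stays in plain code.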

    Operating mode lives at the agent group, not in the task. A copilot group’s outputs land in a review queue. A dark factory group’s outputs trigger state transitions automatically. Promoting a skill from copilot to dark factory is moving its mount point, not rewriting it.

    For the model layer: Nemotron 3 Super handles the bulk runtime work. Strong instruction following, long context, throughput that holds up under volume. Augmentation skills that read four or five sources and synthesise a structured profile benefit from the long context: public LinkedIn snippets, recent posts, the company’s own site, a news mention or two. Drafting routes to Claude. The bulk-then-polish chain saves tokens on volume work and keeps the polish pass focused on prose that goes to a human. The free tier covers early-stage development; production volumes need API access. Multi-provider routing is less about feature redundancy and more about not having a single provider’s outage take out the whole pipeline. The orchestrator routes per skill family: bulk runtime to Nemotron, polish to Claude, redundancy keys for either in reserve.

For setup specifics — Claude Code as the authoring dependency, the no-UI consequence, deployment gotchas a small VPS surfaces — check out the companion piece on what it takes to actually run NanoClaw.

    DPDP Act compliance lives at the journal layer: every artifact change is logged with provenance, deletion requests tombstone the artifact while retaining audit evidence. Easier upfront than retrofitted.

    What this is, when it’s working

    A GTM dark factory is a specific shape: an AI-native CRM where the determinism lives between tasks and the LLM agency lives inside them. The agent does the bounded judgment work; the orchestrator decides what comes next; the journal holds both accountable. Volume goes up. Variance stays bounded. The operator’s role compresses to where it adds the most value — picking what gets built next, reviewing what the rubric can’t decide, deciding when a skill has earned graduation.

    Outbound that holds shape between wide and narrow doesn’t need an SDR army. It needs orchestration you can trust, a contract on the wire, and the discipline to let skills earn autonomy rather than be granted it. The framework choice is secondary. The split between framework, orchestrator, and authoring layer is what makes it work.