NanoClaw is the structurally right framework for pipeline-shaped agent workloads. It’s also genuinely more technical to set up than most personal-assistant frameworks people compare it to. If you’re evaluating it after reading the comparison piece and wondering what you’re signing up for, this is the honest answer.
One thing worth stating up front: NanoClaw is not built for non-technical users, and neither is openclaw despite its more polished onboarding. The marketing on both sites pitches "personal AI assistant for everyone." The reality is different. NanoClaw expects comfort with git, the command line, Docker, and at least basic Linux administration. The trade you make in exchange is access to Claude Code as the authoring layer for your fork — which is arguably the most capable AI coding tool available right now, and meaningfully more capable than the typical models you'd be running underneath openclaw. The framework is built around that capability difference rather than trying to abstract it away.
The architecture is right. The setup curve is real. Below is what actually bites.
You need a Claude Code subscription
This isn’t a soft dependency. NanoClaw is built around Claude Code as the authoring layer — the slash commands that install channels and providers (/add-telegram, /add-opencode, /add-codex and so on) run inside Claude Code and copy source files into your fork from long-lived branches. You can technically edit the same files by hand, but you’d be reverse-engineering what those slash commands do every time you customise.
Practically: a Claude Code Pro or Max subscription is the working assumption. Without it, you're not really running NanoClaw the way it's designed to run. With it, the authoring experience is the best part of the framework — the codebase is small enough that Claude Code can confidently make changes across it, and the fork-as-install model means every customisation is a code change you can read and revert.
This also constrains who NanoClaw is for. If you’re allergic to Claude Code (philosophically, financially, or because you prefer Codex or another harness as your primary), you’ll fight the framework. If you’re already deep in Claude Code, the integration is genuinely tight.
Codex works as a fallback authoring layer for individual tasks, and the /add-codex skill makes Codex available as an agent provider (separate from authoring). But the slash-command-based setup expects Claude Code as the primary harness. Plan around that.
OneCLI is part of the deal
NanoClaw doesn’t manage your API keys directly. That job is delegated to OneCLI, the companion credential proxy that ships alongside it. Agents inside containers never see raw API keys; they make outbound HTTPS requests through OneCLI, which injects credentials at the proxy layer based on per-agent policies.
This matters in practice for two reasons. First, agents inside NanoClaw containers have bash access — anything that put an API key directly in the container would be reachable by any code the agent runs. OneCLI keeps that surface clean. Second, you’ll spend real time during setup configuring OneCLI: registering your Anthropic credential, creating per-agent secret assignments, deciding whether each agent gets all secrets or a specific subset. The nanoclaw.sh install script handles the basics, but ongoing changes (adding a new provider, rotating keys, scoping a credential to one agent) involve OneCLI commands rather than editing config files.
It’s worth understanding before you start. Treat OneCLI as a meaningful piece of the system, not a one-time setup chore that disappears after install.
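The core idea is worth seeing in miniature. The sketch below illustrates the per-agent credential-scoping pattern OneCLI applies at the proxy layer — the proxy, not the agent, holds the secrets, and a policy table decides which agent may use which credential. Every name and structure here is illustrative, not OneCLI's actual API.

```python
# Illustrative sketch of per-agent credential scoping, NOT OneCLI's API.
# The point: secrets live in the proxy process; containers never see them.

POLICIES = {
    # agent id -> the named secrets its outbound requests may use
    "research-agent": {"anthropic"},
    "outreach-agent": {"anthropic", "sendgrid"},
}

SECRETS = {
    # held only by the proxy; never mounted into an agent container
    "anthropic": "sk-ant-example",
    "sendgrid": "SG.example",
}

def inject_credential(agent_id: str, secret_name: str) -> str:
    """Return the credential the proxy would attach to the outbound
    request, or refuse if the agent is not scoped to that secret."""
    allowed = POLICIES.get(agent_id, set())
    if secret_name not in allowed:
        raise PermissionError(f"{agent_id} is not scoped to {secret_name}")
    return SECRETS[secret_name]
```

The consequence for you as operator: "add a provider" means "register a secret and update a policy," not "paste a key into a container's environment."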
There’s no web UI out of the box
NanoClaw ships the channel and agent runtime. It doesn’t ship an operator console. There’s no dashboard for browsing agent activity, no log viewer, no chat history UI, no admin panel, no menubar app. The framework’s stance is that you talk to your agent through a messaging channel — Telegram, Slack, Discord, WhatsApp, whatever you’ve installed — and that’s the interface.
Openclaw, by comparison, has a guided openclaw onboard CLI for setup and a Companion App (Beta) on macOS that adds a menubar interface. So if you’re coming from openclaw expecting some kind of UI affordance out of the box, NanoClaw will feel deliberately bare.
For an assistant, the chat-channel-only approach is fine. The channel is the interface.
For a pipeline, it’s not enough. Pipelines need state-of-everything views: which prospects are in which stages, which agents are working on what, what’s pending operator review, what’s been dead-lettered. None of that is conversational. You need a UI.
The options are real but each has a cost:
Build a custom web UI as a NanoClaw skill. A small Express or similar server inside a skill that exposes a chat-plus-dashboard interface, talks to the agent through the same task contract NanoClaw uses elsewhere, and serves over tailscale serve so it’s only reachable on your tailnet. Takes a day to build. You control the UX completely. You can mount per-agent dashboards next to the chat thread. No third party between you and your operator interface. This is the version I keep coming back to.
Use a messaging channel as the operator interface. Telegram is fastest to bring up — bot via BotFather, token in five minutes. Discord and Slack work too. The trade is that pipeline state is awkward to display in a chat thread, and you end up either composing structured messages (clunky) or building dashboards anyway (defeats the purpose).
Lean on the underlying systems for state visibility. SQLite for the artifact and journal storage means you can run ad-hoc queries against it. docker logs for container-level activity. journalctl --user for systemd-level service logs. This works for debugging and post-hoc analysis. It doesn’t work as a real-time operator surface.
In practice, you’ll mix all three. The custom web UI is the primary operator console, channels handle quick-access from your phone, and you use the underlying tooling when something goes weird and you need to dig.
Setup gotchas on a small VPS
NanoClaw runs comfortably on a 2GB DigitalOcean droplet (or equivalent). The hosting cost is a few dollars a month. The friction comes from minimal cloud images being stripped down enough that several setup steps fail in non-obvious ways.
The base image doesn’t ship with a C compiler. Several modules in the dependency tree build native bindings during pnpm install and fail with generic “command failed” errors that don’t tell you the compiler is missing. Install build tools before the first install:
sudo apt update
sudo apt install -y build-essential acl
The acl package is also missing from minimal images and you’ll need it for the Docker socket fix below.
The Docker socket ACL doesn’t survive reboot. NanoClaw runs agent containers via Docker. By default, only root can talk to the Docker socket. Adding your operator user to the docker group works but is broadly equivalent to giving that user root, which is not what you want.
The cleaner approach is an ACL grant on /var/run/docker.sock. The catch: /var/run is a tmpfs mount, recreated on every boot. Anything you setfacl once is wiped on reboot. The fix is a tmpfiles.d rule that recreates the ACL automatically. Create /etc/tmpfiles.d/docker.conf with:
a+ /var/run/docker.sock - - - - u:youruser:rw
Replace youruser with the actual operator username. Test with sudo systemd-tmpfiles --create and verify with getfacl /var/run/docker.sock. Reboots no longer break Docker access for the operator account.
Two systemd services, not one. Run NanoClaw and your custom orchestrator as separate systemd user services. When you’re iterating on the orchestrator (which you will, often, especially in early development), restarting it shouldn’t take the channel adapters down. Channel reconnects are slow and annoying; orchestrator restarts should be near-instant.
A reasonable layout:
~/.config/systemd/user/nanoclaw.service
~/.config/systemd/user/orchestrator.service
If you want either service to start on boot before you log in, enable lingering for the user with sudo loginctl enable-linger youruser. Easy to forget; non-obvious failure mode (services don’t start, you don’t know why, you log in, they magically work).
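For the orchestrator half, a minimal user unit might look like the following — the description, working directory, and ExecStart command are placeholders for whatever your orchestrator actually is, not anything NanoClaw ships:

```ini
# ~/.config/systemd/user/orchestrator.service -- sketch; WorkingDirectory
# and ExecStart are placeholders for your own orchestrator process.
[Unit]
Description=Pipeline orchestrator
After=network-online.target

[Service]
WorkingDirectory=%h/orchestrator
ExecStart=/usr/bin/node %h/orchestrator/dist/main.js
Restart=on-failure
RestartSec=2

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now orchestrator.service. Because it is a separate unit, restarting it leaves nanoclaw.service — and its slow channel reconnects — untouched.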
Add swap. A 2GB droplet doesn’t ship with swap configured. Under heavy LLM-context loads — long-context windows plus large augmentation tasks — you can OOM unexpectedly. A 2GB swap file is cheap insurance:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Set vm.swappiness=10 in /etc/sysctl.conf so the kernel prefers RAM and only swaps under genuine pressure. Reboot to verify.
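Equivalently, as a sysctl drop-in (the filename is a convention, not a requirement; apply without rebooting via sudo sysctl --system):

```ini
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 10
```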
What stays on the laptop, what goes on the VPS
The local-versus-VPS question resolves cleanly:
- A laptop is fine for the install rehearsal, fork setup, and a couple of agents you only use while at the keyboard.
- Anything that needs to be reachable, scheduled, or running while you’re not at the keyboard belongs on the VPS.
The cost difference between 1GB and 2GB on DigitalOcean is a few dollars a month, and the difference in headroom is between fighting the host and forgetting about it. Take the 2GB. The marginal saving on a 1GB droplet is not worth the time you’ll spend wondering why builds are failing or why the agent container is OOM’ing.
Honest scope of “easy”
NanoClaw is technically simpler than openclaw — fewer lines of code, fewer abstractions, fewer hidden behaviours. It’s not operationally simpler. The framework expects you to:
- Have a Claude Code subscription and use it as the authoring layer
- Be comfortable with the Linux command line, systemd, Docker, git
- Build your own operator UI if you want one
- Write your own orchestrator if you’re doing pipeline-shaped work
For someone who already operates in this stack, NanoClaw feels light and clean — and the Claude Code authoring layer is genuinely the best part. The codebase is small enough that asking Claude Code to make changes across it works reliably, which is a meaningfully better experience than the typical “edit config files, hope you got it right, debug when you didn’t” pattern.
For someone hoping for a one-click personal assistant, the curve is meaningfully steeper than openclaw’s onboarding. Openclaw has a guided CLI (openclaw onboard) and a macOS Companion App that gives you a menubar interface; NanoClaw deliberately ships none of that. Both still expect a technical user underneath, but openclaw lowers the floor more.
The trade is real and the trade is good if your use case justifies it. You end up with a system you understand end to end, that runs in resources you control, that doesn’t depend on a SaaS gateway, and that you can reason about when something breaks. Worth the lift if you’re building something pipeline-shaped. Not worth the lift if you just want a chatbot.
A useful concrete reference point: Singapore’s Foreign Minister, Vivian Balakrishnan, published the architecture for his own NanoClaw-based “second brain” setup, with an accompanying X post walking through the composition. He’s technically literate — coding is a known hobby of his — but not a software engineer by trade. His setup composes NanoClaw with a few other open-source pieces (a memory layer, OneCLI for credentials, the LLM Wiki pattern for knowledge synthesis) and runs on a Raspberry Pi. It’s a useful existence proof of “technical-but-not-developer” being the floor for NanoClaw, and equally a useful caution: Vivian could compose those pieces because of fluency he already had. Anyone reading this without that fluency yet would need to pick it up first. The reward is real, and so is the prerequisite.
The full GTM system this deployment serves is in Building a GTM dark factory with Nemotron 3 and NanoClaw. The framework comparison that motivates picking NanoClaw in the first place is in Why I picked NanoClaw over openclaw for a GTM pipeline.