zero-claw: Turning Claude Code into an openclaw-style Personal Assistant
2026-4-22
type: Post
status: Published
date: Apr 22, 2026
slug: zero-claw-en
summary: Simulating an openclaw-style personal assistant on Claude Code. Three additions — heartbeat, memory, evolution — plus a knowledge base following Karpathy's LLM Wiki idea.
tags: Tools, Agentic Engineering
category: Agentic Engineering
icon:
password:
priority: 3

Why zero-claw

I had been using openclaw for a while, with a Claude subscription underneath — which got banned recently. Switching to the minimax API wasn't a good trade: not cheap, noticeably weaker.
Then Claude Code shipped the channel mechanism — external message sources plug into a Claude Code session as channels. That was half of a claw sitting right there. So I started using Claude Code to simulate openclaw.
A few weeks in, the experience was better than expected. Underneath sits a claude subscription (no per-token billing), 1M context, a mature skill ecosystem, and the MCP tool chain — all ready to use. The remaining issues: the channel mechanism restricts sending claude commands proactively, and stability isn't quite there. I added a Supervisor to patch that layer and manage multiple agents.
A few weeks later, what I had was no longer just an openclaw replacement — it had become a personal assistant and knowledge base that moves with git. I've already fully switched away from openclaw in daily use, and open-sourced this one: zero-claw.

Quick Start

zero-claw is built for one scenario: you already have a Claude subscription with spare capacity. Turn that spare capacity into an openclaw-style personal assistant — persistent memory, self-evolution, optional knowledge base.
Run the setup wizard in a Claude Code session: /zero-claw:setup asks for the bot token from BotFather and your Telegram user_id, generates ecosystem.config.cjs, starts the process via pm2, and optionally adds a supervisor bot. After that, run ./start.sh and the main bot is reachable on Telegram.
Coming from openclaw? zero-claw ships a migrate-from-openclaw skill that carries over existing configuration and data.

The "Zero" Design Philosophy

The core of "zero" is glue existing tools rather than reinvent them — so the composed assistant grows stronger as each underlying tool grows stronger. Claude Code has already done the two most expensive things:
  • The strongest agent brain available: Opus + 1M context
  • Mature orchestration: skills ecosystem, MCP, channel, CronCreate, sub-agents
The remaining question is what a capable personal AI assistant still needs. The answer is short — three things: heartbeat (advances time on its own), memory (state persists across sessions), and evolution (fixes itself). zero-claw does only these three, plus an optional knowledge-base module (built on Karpathy's LLM Wiki idea).
The concrete constraints:
  • Behavior is described in markdown (CLAUDE.md, HEARTBEAT.md, SLEEP.md, SOUL.md) — no framework code
  • State is fully git-tracked — journal, memory, wiki are all plain markdown
  • No platform abstraction layer — when Claude Code upgrades, zero-claw's capabilities upgrade with it
  • Custom code only handles stability and command control (like restart); brain and orchestration are fully delegated to Claude Code

Supervisor: Stability and Command Control

openclaw ships a web panel for management, and on reflection it wasn't really necessary. What actually happened on the web side wasn't much — whatever needed installing or configuring, the bot could handle itself once told. The one thing the web really owned was triggering restarts. zero-claw keeps that pattern: routine requests go to the main bot directly, and the Supervisor only steps in when state needs to be restarted or reset.
Supervisor does two things: stability (watchdog) and command control (/restart, /status, /logs, /send <text>). On the stability side, the claude process crashes occasionally and the Telegram plugin subprocess sometimes drops — once the link is lost, the whole session goes silent; the watchdog finds and recovers it. On the command side, claude (like openclaw) occasionally needs a restart, and occasionally needs a line pushed into the session — the latter fills the gap where the channel mechanism doesn't allow proactive claude commands.
Implementation-wise, tmux wraps claude, and a Node.js process outside it does the supervising. Two entry points share one dispatcher: the supervisor bot on Telegram, and a local CLI (for headless mode, communicating via a Unix socket).
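The shared dispatcher can be sketched in a few lines of Node.js. This is a hedged sketch under assumed names, not zero-claw's actual code; the real handlers would shell out to tmux and pm2 where the comments indicate.

```javascript
// One dispatcher, two entry points: the Telegram supervisor bot and the
// local CLI (over a Unix socket) both feed command lines into dispatch().
// Handler bodies are placeholders for the real tmux/pm2 calls.
const handlers = {
  "/status":  () => "claude: running (tmux session alive)",
  "/restart": () => "restarting",            // would kill + respawn the tmux session
  "/logs":    () => "tail of supervisor log",
  "/send":    (text) => `sent: ${text}`,     // would tmux send-keys into the claude pane
};

function dispatch(line) {
  const [cmd, ...rest] = line.trim().split(/\s+/);
  const handler = handlers[cmd];
  if (!handler) return `unknown command: ${cmd}`;
  return handler(rest.join(" "));
}
```

Keeping the dispatcher pure like this means the Telegram entry point and the socket entry point stay thin: they only read a line and forward it.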
At heart, Supervisor is a minimal control panel — taking the channel from "runs" to "stays alive and stays controllable," giving heartbeat, memory, and evolution a stable host.

Heartbeat

Heartbeat is the bot's own active pulse — not triggered by user messages. When the main bot starts, it calls Claude Code's CronCreate to register an hourly cron on itself (waking hours, 8am to 11pm); each callback carries one line: Read HEARTBEAT.md and follow it.
HEARTBEAT.md is a plain markdown checklist. The bot works through it:
  • Send an online ping to Telegram (skip if MCP is disconnected)
  • Review the last hour of conversation
  • Append events to journal/YYYY-MM-DD.md, in the format - HH:MM description (skills: x, y) (candidate-skill: foo) — the skill tags feed into evolution later
  • If the last hour produced long-term relational content, write directly to memory/*.md
  • If a knowledge vault is configured, run one round of llm-wiki Capture/Ingest/Recompile
The heartbeat boundary is explicit: "cheap work on the last hour that can be done right now." Deeper reflection is saved for the nightly sleep.
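To make the tag format concrete, an hourly journal append could look like this (the entries below are invented, purely to illustrate the format):

```markdown
- 14:05 Summarized the morning arxiv digest into three highlights (skills: arxiv-digest)
- 15:10 Debugged a pm2 restart loop with the user (candidate-skill: pm2-triage)
```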

Memory: Memory Follows Git

Persistent memory for an assistant isn't hardest at the storage step — it's hardest at migration and upgrades. openclaw requires manual backups when upgrading, with files scattered and no unified timeline. zero-claw keeps the entire assistant state in one git repo, and every commit is a natural checkpoint: bad upgrade, revert; new machine, clone and everything moves.
The layering:
  • journal/YYYY-MM-DD.md: temporal stream. Heartbeat appends hourly — append only, no rewrite.
  • memory/*.md: relational memory between user and bot — preferences, feedback patterns, interaction habits. One topic per file, with frontmatter (type: user/feedback/project/reference). MEMORY.md is the index, kept under 200 lines.
  • USER.md: user profile (name, timezone, language, Telegram id). The main agent updates it reactively during conversation when relevant information surfaces.
  • SOUL.md: the bot's identity and personality — name, role, core responsibilities, principles, boundaries. Only the user writes this file; neither heartbeat nor sleep touch it.
  • <vault>/_wiki/: world knowledge — owned by llm-wiki, covered in the next section.
The boundaries matter. memory holds "user-bot relationship" content only; world knowledge goes to the wiki; raw events stay in journal and are never rewritten. Code patterns, git history, one-off task details are not stored anywhere — Claude Code can already query those.
Claude Code's built-in auto-memory is deliberately not used (it lives in the Claude Code cache under the user's home directory), because that memory doesn't travel with the repo. Portability is the core requirement.
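A memory file might look like this (the filename and body are invented; only the frontmatter type values come from the design above):

```markdown
---
type: user
---
# Report preferences
- Prefers concise bullet summaries over long prose
- Wants links inline, not collected in a footnote block
```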

Evolution: Sleep-driven evolve

Heartbeat does only cheap work during the day; deeper reflection is saved for sleep — the third core addition in zero-claw.
Sleep is driven by the Supervisor, not a cron owned by the bot. The reason: the host may be off in the small hours, and a bot-owned cron would miss it; the Supervisor checks for catch-up once the host boots, then feeds the sleep prompt into the TUI. Sleep finishes without sending any Telegram message — conclusions land in today's journal, and the first morning heartbeat reports them back.
Sleep follows the flow in SLEEP.md:
  1. Review today's and yesterday's journal (when woken by catch-up, the real content lives in yesterday)
  2. Memory distillation: lift long-lived relational content from journal into memory/*.md entries — journal is the stream, memory is the sediment
  3. Run evolve: self-upgrade the skill library
  4. Wiki lint (if a vault is configured)
evolve is the most interesting piece of the evolution mechanism. It's a meta-skill, scoped strictly to the self-skill list under .claude/skills/. Two actions:
  • upgrade (abstraction-triggered): add or modify a skill only when the invariant can be stated in one sentence ("when X, do Y"). Candidate sources are the (candidate-skill:) tags heartbeat left in the journal, plus repeat patterns from the past 7 days. The guiding principle: "Evolution is abstraction, not counting." — not "saw three similar things, write a skill," but "articulate the rule first, then sanity-check with concrete cases from the last 7 days."
  • retire (usage-triggered): a skill with zero (skills:) tag hits in the past 90 days, provided its 90-day grace period since creation has passed, is deleted.
Commits use the format evolve(YYYY-MM-DD): .... No action means no commit — "a quiet day with no changes is itself the correct outcome." Before writing, it also checks git log --grep='^Revert.*evolve(' to avoid rewriting changes the user has already reverted.
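The retire rule is mechanical enough to sketch. The function below is an illustrative assumption about shape, not zero-claw's implementation: it scans journal lines for (skills: ...) tags and returns skills that are past their grace period with zero hits in the window.

```javascript
// Sketch of the retire check. `skills` entries carry a name and an age in
// days since creation; `journalLines` is the last 90 days of journal text.
function skillsToRetire(journalLines, skills) {
  const hits = new Set();
  for (const line of journalLines) {
    const m = line.match(/\(skills:\s*([^)]*)\)/);
    if (m) m[1].split(",").forEach((s) => hits.add(s.trim()));
  }
  // Retire only skills past the 90-day grace period that were never used.
  return skills
    .filter((s) => s.ageDays > 90 && !hits.has(s.name))
    .map((s) => s.name);
}
```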
Heartbeat, memory, and evolution together give the bot a skeleton that keeps running over time and keeps improving itself.

Knowledge Base: llm-wiki + Question-driven learn

So far the system only solves "the assistant remembers and evolves." But as a partner for knowledge work, an assistant also has to accumulate world knowledge — papers read, projects examined, domains studied.
zero-claw's knowledge base follows Karpathy's LLM Wiki directly: raw sources (papers, articles, repos) drop into the vault's raw directory, and the LLM compiles them into a structured wiki — summaries, entity pages, concept pages, backlinks, all written by the LLM. The difference from RAG: RAG retrieves fresh on every query, whereas the wiki is a persistent, structured artifact that compounds.
One key observation from Karpathy's original: human-maintained wikis collapse at scale because maintenance cost grows faster than value. LLM-maintained wikis don't — because the LLM carries the maintenance burden.
zero-claw implements this as a skill with five operations:
  • Capture: the bot lifts valuable context from its own session into a new raw file
  • Ingest: compile a raw source into wiki pages, auto-build [[link]] backlinks, update _wiki/index.md and _wiki/log.md
  • Recompile: incremental invalidation — wiki-graph --diff --json yields dirty pages; simple pages auto-recompile, dense ones get listed for user review
  • Query: during lookup, file valuable synthesis back as wiki pages, so every exploration compounds
  • Lint: mechanical checks (broken links, orphans, missing frontmatter) plus semantic checks (contradictions, stale claims, missing cross-references)
The cadence: every heartbeat runs Capture+Ingest+Recompile; every nightly sleep runs Lint.
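The mechanical half of Lint (broken links and orphans) reduces to simple graph checks over the page set. A sketch, with invented names and a page-name-to-content map as input; excluding the index page from the orphan check is an assumption:

```javascript
// Flag [[links]] pointing at pages that don't exist, and pages that
// nothing links to. `pages` maps page name -> markdown content.
function lintWiki(pages) {
  const names = new Set(Object.keys(pages));
  const linkedTo = new Set();
  const brokenLinks = [];
  for (const [page, content] of Object.entries(pages)) {
    for (const m of content.matchAll(/\[\[([^\]]+)\]\]/g)) {
      const target = m[1].trim();
      if (names.has(target)) linkedTo.add(target);
      else brokenLinks.push({ page, target });
    }
  }
  const orphans = [...names].filter((n) => n !== "index" && !linkedTo.has(n));
  return { brokenLinks, orphans };
}
```

The semantic checks (contradictions, stale claims) can't be done this way; those are where the LLM itself carries the maintenance burden.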
One question remains once the knowledge base is up: where does input come from? Organizing existing material alone isn't enough. zero-claw's answer is a skill called learn, which inverts the usual human-LLM relationship — instead of asking the LLM questions, the LLM asks you, in a ZPD (zone of proximal development) difficulty-adaptive conversation.
Trigger cues are phrases like "learn / understand / walk me through / break down." Once in learn mode, it roughly does:
  1. About 5 rounds of probing conversation to sense your background, goals, and knowledge edge
  2. Internally (not shown to you) build a topic graph with 4–8 nodes, extracting the 20% load-bearing concepts, 3 consensus points, 3 controversies
  3. Enter a "test, calibrate, clarify" three-beat cycle — difficulty adjusts per answer (a miss drops 1–2 notches immediately; 2 consecutive hits bump up by 1)
When a learn session ends, two artifacts remain for the bot: memory/learn/<slug>.md holds the learner state (resumable), and, when the session is worth keeping, the next heartbeat's llm-wiki Capture lifts the session's distilled load-bearing concepts, consensus points, and controversy structure into the vault, feeding the wiki as persistent input.
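The per-answer difficulty rule in step 3 can be written down directly. A sketch under assumptions: the function name, the notch range, and the choice of a 2-notch drop (the rule says 1–2) are all illustrative.

```javascript
// One step of the adaptation loop: a miss drops difficulty immediately
// and resets the streak; two consecutive hits bump difficulty by one.
function nextDifficulty(level, streak, correct, min = 1, max = 8) {
  if (!correct) {
    return { level: Math.max(min, level - 2), streak: 0 }; // miss: drop 2 notches
  }
  const s = streak + 1;
  if (s >= 2) return { level: Math.min(max, level + 1), streak: 0 }; // 2 hits: bump up 1
  return { level, streak: s }; // first hit: hold level, remember the streak
}
```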
The best way to learn is not to ask the LLM but to let the LLM ask you — here, that's literally the mechanism.

Assistant + Knowledge Base

Starting from "replace openclaw," what zero-claw ended up as:
  • Brain and orchestration: Claude Code provides Opus + skill ecosystem + channel + CronCreate
  • Stability foundation: Supervisor (tmux + Node.js for supervision)
  • Skeleton that keeps running: heartbeat + memory + sleep-driven evolution
  • Knowledge base: Karpathy's LLM Wiki + question-driven learn as persistent input
  • Programmable scheduled tasks: CRONTAB.md lets the user define email digests, news feeds, periodic reminders — sharing the same CronCreate mechanism with heartbeat
At its core, it's a personal AI assistant that moves with a git clone to a new machine — brain borrowed from Claude Code, knowledge base borrowed from Karpathy's idea, with only three gaps filled in-house (heartbeat, memory, evolution) plus a Supervisor for stability and command control.
Back to the core idea of zero: glue existing tools, don't reinvent — Claude Code's brain and orchestration, Karpathy's knowledge-base paradigm, git's version control and migration. The advantage of gluing is that when any one of the glued tools grows stronger, the whole assistant grows stronger: Claude Code ships a new skill, longer context, or a stronger model, and zero-claw inherits those capabilities without changing anything. Less code, longer lever.
