Series · OpenClaw QuickStart · Chapter 1

OpenClaw QuickStart (1): What This Thing Actually Is

OpenClaw is a self-hosted AI agent gateway that bridges 20+ chat platforms into one runtime. This first piece explains what the project actually is, where it sits relative to ChatGPT and Claude, and which problems it earns its disk space by solving.

I keep getting asked “is OpenClaw just another wrapper around an LLM?” The short answer is no, and the reason matters enough that I wanted to write it down before walking through the QuickStart.

This is the first piece in a six-part series. By the end you should have a working OpenClaw install on your own machine, talking to a model, listening on at least one chat channel, and doing something useful that survives a reboot.

What it is, in one paragraph

OpenClaw is a self-hosted AI agent platform. You install a single Node binary, give it API keys for one or more LLM providers, and it runs a long-lived gateway process that listens for messages on whatever chat platforms you wire up — DingTalk, WeChat (via WorkBuddy), Telegram, Discord, Slack, Feishu, you pick. The gateway routes those messages through an agent loop with tools, skills, memory, and a cron scheduler. The whole thing is MIT-licensed, the data lives on your machine, and you can swap models without rewriting your prompts.
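To make that paragraph concrete, here is the shape of the setup as a sketch. Every field name below is mine, not OpenClaw's actual config schema — the point is only the structure: providers plus channels feeding one gateway process.

```typescript
// Illustrative only: these field names are hypothetical, not OpenClaw's
// real config format. The shape is what matters: N providers, N channels,
// one long-lived gateway process.
interface GatewayConfig {
  providers: { name: string; apiKeyEnv: string }[]; // LLM backends, keys from env
  channels: { platform: string; enabled: boolean }[]; // chat surfaces to listen on
  port: number; // the gateway listens here
}

const config: GatewayConfig = {
  providers: [{ name: "claude", apiKeyEnv: "ANTHROPIC_API_KEY" }],
  channels: [{ platform: "telegram", enabled: true }],
  port: 18789,
};

// Swapping models means editing one provider entry, not rewriting prompts.
console.log(`gateway on :${config.port}, ${config.channels.length} channel(s)`);
```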

If you have used Claude Code or Cursor, the agent loop will feel familiar. The difference is that those products own the surface — you talk to them in their UI. OpenClaw inverts that. The surface is whatever messenger you already use. The agent comes to you.

Where it sits

I find it useful to put it on a 2x2.

                hosted              self-hosted
            +------------------+------------------+
   chat-app | ChatGPT, Claude  | (rare)           |
   surface  | (vendor UIs)     |                  |
            +------------------+------------------+
   your     | Many SaaS bots   | OpenClaw         |
   surface  | (Glean, Slack AI)|                  |
            +------------------+------------------+

The bottom-right cell is small but it is the one I want. I want my agent to:

  • run on a box I own
  • read my files, my notes, my email
  • post into the same Telegram thread I already use to nag myself
  • not be metered into a subscription

OpenClaw is the most polished way to live in that cell that I have found in 2026.

Three things it earns its keep on

I want to be honest about the things a wrapper would not solve and OpenClaw does.

1. Channel adapters are real engineering. The DingTalk Stream protocol, WeChat’s lockdown, and Discord’s rate limits each have their own footguns. OpenClaw ships adapters for 20-plus channels and each one knows the platform’s quirks. Writing one yourself is a weekend; writing five is a month.
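A sketch of why those adapters are real engineering — this interface is mine, not OpenClaw's actual adapter API. Every platform has to be normalized into one message shape so the agent loop never sees platform quirks.

```typescript
// Hypothetical adapter boundary, written by me for illustration.
// The common message shape the agent loop consumes:
interface InboundMessage {
  channel: string;  // "telegram", "discord", "dingtalk", ...
  threadId: string; // platform-specific conversation handle
  text: string;
}

interface ChannelAdapter {
  // Translate the platform's webhook/stream payload into the common shape.
  parse(raw: unknown): InboundMessage | null;
  // Send a reply, respecting the platform's rate limits and formatting rules.
  send(threadId: string, text: string): Promise<void>;
}

// A trivial fake adapter showing the contract in action. Real ones bury the
// ugly parts (Stream protocols, rate limiters, lockdowns) behind this line.
const fake: ChannelAdapter = {
  parse: (raw) =>
    typeof raw === "string"
      ? { channel: "fake", threadId: "t1", text: raw }
      : null,
  send: async () => {},
};
```

Each adapter is a weekend of platform-specific work compressed behind two methods — that is the value of getting twenty of them pre-built.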

2. Skills are not just system prompts. A Skill is a Markdown file plus optional helper scripts plus a manifest. The agent loads only the manifests at boot — the body of a skill is paged in lazily when the skill matches the current task. This is not glamorous, but it is the difference between burning a 4k-token system prompt on every turn and keeping a 40k-token skill library that costs you almost nothing per turn.
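The lazy-loading idea can be sketched in a few lines — this is my simplification of the mechanism, not OpenClaw's loader, and the manifest fields are invented for illustration:

```typescript
// Sketch of manifest-first skill loading (field names are hypothetical).
// At boot, only these small records sit in memory; the full Markdown body
// stays on disk until a trigger matches the current task.
interface SkillManifest {
  name: string;
  triggers: string[]; // keywords that page the skill body in
  path: string;       // where the large Markdown body lives on disk
}

const manifests: SkillManifest[] = [
  { name: "release-notes", triggers: ["changelog", "release"], path: "skills/release-notes.md" },
  { name: "daily-briefing", triggers: ["briefing", "digest"], path: "skills/daily-briefing.md" },
];

// Only matching skills get their bodies read and added to context.
function selectSkills(task: string): SkillManifest[] {
  const t = task.toLowerCase();
  return manifests.filter((m) => m.triggers.some((trig) => t.includes(trig)));
}
```

The per-turn token cost is the list of manifests, not the whole library — which is what lets the library grow to 40k tokens without you paying for it every message.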

3. Memory survives across sessions. There is a typed memory store (user profile, project state, references, feedback) plus an indexable MEMORY.md. When the same conversation resumes three days later in a different chat, the agent doesn’t start cold.
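"Typed memory" is easier to see in code. This is my sketch of the idea, with illustrative record shapes — the four kinds come from the docs, everything else here is assumption:

```typescript
// Hypothetical sketch of a typed memory store. The four kinds mirror the
// categories named in the text; the record shape is mine.
type MemoryKind = "user-profile" | "project-state" | "reference" | "feedback";

interface MemoryEntry {
  kind: MemoryKind;
  key: string;
  value: string;
  updatedAt: string; // ISO timestamp, so stale state can be aged out
}

const store: MemoryEntry[] = [];

function remember(entry: MemoryEntry): void {
  store.push(entry);
}

// When a conversation resumes days later, the agent reloads the relevant
// slice by kind instead of starting cold.
function recall(kind: MemoryKind): MemoryEntry[] {
  return store.filter((e) => e.kind === kind);
}

remember({
  kind: "project-state",
  key: "quickstart-series",
  value: "chapter 1 drafted",
  updatedAt: "2026-01-10T00:00:00Z",
});
```

Because entries carry a kind, "resume where we left off" is a filtered lookup, not a re-read of the entire history.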

If none of those three matter to you, you can stop reading and use Claude Code directly. If at least one of them rings a bell, the rest of this series will pay off.

What you’ll have at the end

By article six in this series, you will have:

  • OpenClaw installed locally, gateway running on :18789
  • A model provider configured (Bailian Coding Plan, Claude direct, or DashScope free tier — your choice)
  • TUI working from your terminal
  • One real chat channel wired up — I will use Telegram in this series because it has the cleanest setup
  • Two custom Skills loaded
  • One MCP server attached for browser automation

That is enough surface area to build any of the case studies in the official docs (second-brain, daily briefing, devops automation, content pipeline). I will point at one of those at the end.

What this series will not cover

  • Multi-agent routing — the official docs do this better
  • Production hardening beyond the basics
  • WeChat WorkBuddy setup beyond a brief demo (it is a Tencent product and the registration flow is non-trivial)

Next piece, we install. Should take ten minutes if your Node is current and twenty if it isn’t.

Liked this piece?

Follow on GitHub for the next one — usually one a week.
