OpenClaw QuickStart (2): Install and First Chat in 10 Minutes
Install OpenClaw on macOS or Ubuntu, plug in a model provider, run the TUI, and have a working agent in ten minutes. Plus the small Node-version landmine that wastes the most time.
The README claims five minutes. I will say ten — the extra five is for the Node version mistake almost everyone makes the first time.
Prerequisites
- Node v22.16 or newer. The project really means it: older Node will install, but the gateway throws on optional chaining in some places. I run v24 because that is the recommended track.
- About 2 GB of free RAM at runtime, more if you load big skills.
- An LLM API key from one of: DashScope (the free tier works), Anthropic, OpenAI, or the Aliyun Bailian Coding Plan (200 RMB/month for eight models).
Check Node first:
```sh
node -v   # needs v22.16.0 or newer
```
If you are stuck on something old, install nvm:
```sh
# Install nvm (check the nvm README for the current release tag),
# then install and switch to Node 24.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
nvm install 24
nvm use 24
```
That is the one foot-gun. From here it’s smooth.
Install OpenClaw
Two flavors — npm global, or the curl-bash. I prefer npm because I want to know where the binary lives:
```sh
# Global npm install; the package name is assumed from the scope
# mentioned below.
npm install -g @anthropic-ai/openclaw

# See where the binary ended up, and confirm it runs.
which openclaw
openclaw --version
```
(Yes, the npm scope is @anthropic-ai. The project’s relationship with that org is a long story; the short version is “trademark history, harmless now”.)
Onboard
Run the onboarding wizard. It writes a config file into ~/.openclaw/:
```sh
openclaw onboard   # subcommand name assumed; see `openclaw --help`
```
It will ask:
- What to call the agent. I pick something memorable so I can scold it by name in chat; mine is Lobster.
- What it should call you. I use my actual handle, not "Boss"; it helps when reading logs.
- Which provider — pick the one whose key you have. I’ll use DashScope for this walkthrough since it has a free tier.
- The API key.
- The default model. qwen-plus is the right default for general use.
The wizard writes to ~/.openclaw/openclaw.json. You can edit by hand later.
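For reference, here is a minimal sketch of what that file might contain. The exact field names are my assumptions based on what the wizard asks, not OpenClaw's documented schema, so treat every key as illustrative:

```json
{
  "agentName": "Lobster",
  "userName": "your-handle",
  "provider": "dashscope",
  "apiKey": "sk-...",
  "defaultModel": "qwen-plus"
}
```

If you edit by hand, restart the gateway afterwards so it picks up the change.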
Start the gateway
```sh
openclaw gateway
```
You should see something like:
```
[gateway] listening on http://127.0.0.1:18789
[agent] loaded skills: 17
[memory] index ready (0 entries)
[channels] none configured (yet)
```
The gateway is a long-running process. In the next pieces we attach channels and skills to it. For now it is sitting there with no input.
TUI: talk to it from your terminal
Open a second terminal and run:
```sh
openclaw tui
```
You get a chat-like terminal UI. Try a few things in order:
Hi — introduce yourself in one sentence.
Read the file ~/.zshrc and tell me what aliases I have.
Make a directory ~/openclaw-test and create a file notes.md inside with the words "first run" in it.
Three things should happen:
- The first message returns a one-liner: the model is talking.
- The second triggers the read tool: the agent asks the gateway to read a file, and you see a tool-call line scroll past in the gateway log.
- The third actually mutates your filesystem. Verify with `ls ~/openclaw-test/`.
If all three worked, the install is done. If only the first worked, the agent isn’t getting access to tools — most likely the model you picked is too small to do tool-calling reliably. Switch to qwen-plus or qwen3-max and try again.
Web dashboard (optional, useful)
There’s a web UI if you prefer:
```sh
openclaw dashboard   # subcommand name assumed; serves the web UI
```
I leave this off most of the time — TUI is faster — but it is useful for inspecting memory and skill state visually. It also shows the cron jobs once you have any.
What just happened, architecturally
```
your terminal --(stdin)--> openclaw tui
                                |
                                v
                      openclaw gateway :18789
                                |
              +-----------------+-----------------+
              |                 |                 |
              v                 v                 v
         agent loop       skills index      tool registry
              |
              v
   LLM provider (DashScope, Anthropic, ...)
```
tui is just a thin client. The gateway is where the agent loop lives. That separation is what lets you later attach Telegram, DingTalk, or the web UI as alternate front-ends, all talking to the same agent.
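To make that separation concrete, here is a toy sketch in Python. None of it is OpenClaw's real API; every name is invented. It only illustrates why thin front-ends compose: the gateway owns the agent loop and the tool registry, and each client just passes strings in and out:

```python
# Toy model of the tui/gateway split described above.
# All names are hypothetical, not OpenClaw's actual interfaces.

class Gateway:
    """Owns the agent loop and the tool registry."""

    def __init__(self):
        # Tool registry: name -> callable, standing in for the real one.
        self.tools = {"read": lambda path: f"<contents of {path}>"}

    def handle(self, message: str) -> str:
        # A real agent loop would ask the LLM whether to call a tool.
        # Here we fake that decision with a prefix check.
        if message.startswith("read "):
            path = message.removeprefix("read ")
            return self.tools["read"](path)
        return f"echo: {message}"


def tui(gateway: Gateway, line: str) -> str:
    """A thin client: no agent logic, just pass the string through."""
    return gateway.handle(line)


def telegram_channel(gateway: Gateway, update: dict) -> str:
    """Another front-end, same gateway, same agent."""
    return gateway.handle(update["text"])


gw = Gateway()
print(tui(gw, "Hi"))                                    # echo: Hi
print(telegram_channel(gw, {"text": "read ~/.zshrc"}))  # <contents of ~/.zshrc>
```

Adding a new channel means adding another thin function like `telegram_channel`; the agent loop, memory, and tools stay in one place.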
Next piece, we open up that gateway and look at what’s actually happening when you type a message.