OpenClaw: Hands-On Setup
AI NEWS · TECHNOLOGY
1/31/2026 · 11 min read


Welcome to the build phase.
Part 1 was theory. This is practice. By the end of this post, you'll have OpenClaw running on a VPS, connected to Telegram, and executing your first custom skills. No hand-waving. No skipped steps. Just a working AI agent that remembers things and does stuff.
This isn't a high-level teaser. This is a resource you can follow step by step and reference later if something breaks. Because things will break. That's how infrastructure works. But you'll know how to fix it.
Let's build.
Choosing Where to Run It
First decision: where does this thing live?
Why You Shouldn't Start on Your Laptop
OpenClaw can run on your local machine. macOS, Linux, or Windows via WSL2. The installer works across platforms. But starting local is a trap for one reason: uptime.
Your laptop sleeps. Your desktop reboots. You close the lid for a meeting and your agent dies mid-task. That's fine for testing, but terrible for a persistent assistant that's supposed to monitor email, check calendars, or run scheduled jobs.
A VPS solves this. Always on. Persistent IP. No battery constraints. No accidental shutdowns because you needed to upgrade your OS at 3 PM on a Tuesday.
VPS Benefits
Here's what you get with a VPS:
Uptime: 24/7 operation. Your agent doesn't sleep when you do.
Security: Isolated environment. Not sharing resources with your local dev setup, personal files, or browser sessions.
Isolation: If something goes wrong, it fails in a sandbox. Your local machine isn't compromised.
Remote access: SSH in from anywhere. Manage your agent from your phone if needed.
Budget and Performance Tiers
You don't need a beast of a machine. OpenClaw's resource footprint is modest unless you're running local models.
Starter tier (1–2 GB RAM, 1 CPU): Fine if you're using cloud APIs like Claude or GPT. Your VPS is just routing requests.
Mid-tier (4 GB RAM, 2 CPU): Comfortable for most use cases. Room to run additional services or heavier skills.
Heavy tier (8+ GB RAM, 4+ CPU): Only needed if you're running local models via Ollama. LLMs are hungry.
Providers worth considering: DigitalOcean, Linode, Vultr, Hetzner, OVHcloud. Pick one with straightforward billing and SSH key support.
Cost: $5-$20 per month, depending on the tier. Less than most SaaS subscriptions.
OS and SSH Primer
Ubuntu recommended. Specifically, Ubuntu 22.04 LTS or newer. Why? Most OpenClaw documentation assumes Debian-based systems. Docker setup is straightforward. Package management just works.
You could use other distros. Don't. Not for your first deployment. Save the customization for later.
SSH keys over passwords. Always. Passwords are brute-forceable. SSH keys are not.
Generate a key locally if you don't have one:
```
ssh-keygen -t ed25519 -C "your_email@example.com"
```
Add the public key to your VPS during setup. Most providers let you paste it during provisioning.
Connect:
```
ssh username@your-vps-ip
```
If that works, you're in. If it doesn't, check your firewall rules.
Firewall Basics: UFW
UFW is Ubuntu's firewall manager. Simple syntax. Does exactly what you need.
Install and enable:
```
sudo apt update
sudo apt install ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
```
Translation: Block everything inbound except SSH. Allow everything outbound. This keeps your VPS locked down while still allowing OpenClaw to talk to external APIs.
Check status:
```
sudo ufw status
```
You should see port 22 allowed. That's your SSH access. Don't lock yourself out.
Backups
If the bot dies, don't cry. Back up your config.
Snapshot: Most VPS providers offer snapshots. Take one after your initial setup. Take another after OpenClaw is configured.
Rsync: Copy the `~/.openclaw` directory to a remote location periodically. This contains your agent's memory, configuration, and API keys.
```
rsync -avz ~/.openclaw/ backup-location/
```
Cloud provider backups: Enable automated backups if your provider supports it. Costs a few bucks extra per month. Worth it.
Strategy: Snapshot weekly. Rsync daily if you're running critical workflows. Sleep better knowing you can rebuild in minutes.
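The daily rsync can be automated with cron. A minimal sketch; `user@backup-host` is a placeholder for wherever you push backups, and the schedule (03:15 daily) is arbitrary:

```shell
# Daily 03:15 rsync of OpenClaw state to a remote host (backup-host is a placeholder).
CRON_LINE='15 3 * * * rsync -avz --delete ~/.openclaw/ user@backup-host:openclaw-backup/'

# Review the line, then add it via `crontab -e`:
echo "$CRON_LINE"
```

To install it non-interactively without clobbering existing entries: `( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -`.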
Installing OpenClaw Step by Step
Time to get OpenClaw running.
Core Dependencies
You need two things: Node.js 22+ and Docker.
Install Node.js 22:
```
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
node --version
```
Install Docker and Docker Compose:
```
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
```
Log out and log back in for the group change to take effect.
Verify:
```
docker --version
docker compose version
```
Both should return version numbers. If they do, you're ready.
Note: The Docker image uses Node 22 internally, but the CLI installer and setup scripts require Node.js 22+ on the host as well.
Repository Cloning
Clone the OpenClaw repo:
```
git clone https://github.com/openclaw/openclaw
cd openclaw
```
This grabs the latest stable version. The repo contains a Docker Compose configuration and setup script.
If you're paranoid about running the latest commit, check out a specific release tag. But for most users, the main branch is fine.
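If you do want a pinned release, git can find the newest tag for you. A sketch to run inside the cloned repo (tag names vary by release, so nothing is hardcoded):

```shell
# Fetch tags and check out the most recent one; stay on main if none exist.
git fetch --tags 2>/dev/null || true
latest="$(git describe --tags --abbrev=0 2>/dev/null || true)"
if [ -n "$latest" ]; then
  git checkout "$latest"
else
  echo "no release tags found, staying on main"
fi
```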
Initial Config: Docker Setup
OpenClaw includes a Docker setup script that handles everything.
Run it from the repo root:
```
./docker-setup.sh
```
(The script name reflects the repo at the time of writing; check the README if it has been renamed.)
This script does several things in sequence:
Builds the Docker image locally.
Creates two directories: `~/.openclaw` for config and memory, `~/openclaw/workspace` for agent-accessible files.
Generates a gateway token and writes it to `.env` (you'll need this token later if you use the Control UI).
Launches an interactive onboarding wizard.
The onboarding wizard is where configuration happens.
Onboarding Wizard: Step-by-Step
The wizard asks a series of questions. Here's what to expect and how to answer.
Onboarding mode: You'll see two options: QuickStart and Manual.
Choose QuickStart for your first run. It uses safe defaults and skips complexity.
Manual mode gives you full control but requires understanding gateway types, network configs, and model routing. Save that for later.
Gateway type: Select "Local gateway (this machine)".
This runs OpenClaw on your VPS without exposing remote endpoints. The gateway listens on port 18789 by default. Secure by default.
Model provider: OpenClaw supports multiple backends:
OpenAI
Anthropic (Claude)
Gemini (Google)
Ollama (local models)
MiniMax (free tier option)
For your first deployment, pick a cloud provider you already have an API key for. Either Claude or a current-generation OpenAI model is a solid choice.
If you want to test without spending money, MiniMax offers a free tier. It's not as capable, but it works for validation.
The wizard will ask you to log in or paste your API key. Follow the prompts.
Optional integrations: The wizard may ask about mesh networking or additional tools.
Skip these unless you know what they do. You can enable them later.
Model Keys and Cost Control
Your API key controls two things: access and spend.
Access: Without a valid key, OpenClaw can't call the model. Obvious.
Spend: Every request costs money. Pricing varies significantly by model and tier; don't assume one provider is universally cheaper than another. Check current pricing pages before committing. Local models via Ollama are free to run but require more hardware.
Set up usage limits with your provider. OpenAI, Anthropic, and Google all support spending caps. Use them. Don't wake up to a surprise $500 bill because your agent got stuck in a retry loop.
Fallback models: OpenClaw supports model routing. You can configure a primary model for complex tasks and a cheaper fallback for simple queries.
We'll cover this in Part 3. For now, one model is enough.
First Run
Once the wizard finishes, Docker Compose starts OpenClaw:
```
docker compose up -d
```
The `-d` flag runs it in the background.
Check if it's running:
```
docker ps
```
You should see a container named `openclaw-gateway`.
Check logs:
```
docker compose logs -f
```
You'll see initialization output: model connection, gateway startup, and interface readiness.
If you see errors, read them. Most failures come down to API key problems or network misconfiguration.
What to Expect on First Boot
OpenClaw initializes its memory store, loads default skills, and waits for connections.
The first time it runs, there's no conversation history. No learned preferences. It's a blank slate. That's normal. Memory builds over time as you interact.
You'll also see skill loading messages in the logs. OpenClaw ships with a few default skills for system tasks. We'll create custom ones later.
If the logs show "gateway ready" and no errors, you're good. Time to connect an interface.
Connecting a Messaging Interface
OpenClaw supports a wide range of messaging platforms: Telegram, WhatsApp, Discord, Slack, iMessage, Signal, Google Chat, Microsoft Teams, Matrix, and more. The setup pattern is similar across all of them — get a token or credentials from the platform, plug them into OpenClaw's config, and pair your account.
We're walking through Telegram here because it's the fastest to configure. The same general flow applies to other platforms; check the OpenClaw docs for platform-specific instructions.
Why Telegram
Telegram treats bots as first-class citizens. The bot API is robust, webhook support is native, and setup takes 5 minutes.
No OAuth dance. No app approval process. No waiting 48 hours for platform review.
You create a bot. You get a token. You plug it into OpenClaw. Done.
Security model: Telegram bots are isolated. They can't access your contacts or personal messages unless you explicitly invite them to a group.
Multi-device access: Telegram syncs across phones, desktops, and the web. Your agent is accessible everywhere.
Bot Creation: BotFather Walkthrough
Telegram uses a bot called BotFather to manage other bots. Meta, but effective.
Open Telegram and search for @BotFather.
Start a chat. Send `/newbot`.
BotFather asks two questions:
Display name: This is what users see. Example: "OpenClaw Assistant".
Username: Must be unique and end in "bot". Example: "my_openclaw_bot".
Once you answer, BotFather replies with your bot token. It looks like this:
```
123456789:ABCdefGHIjklMNOpqrsTUVwxyz
```
Copy this token immediately. You need it in the next step.
Linking to OpenClaw
Back in your terminal where OpenClaw is running, the onboarding wizard should have asked about communication channels.
If you skipped Telegram during setup, you can add it manually.
Open the OpenClaw configuration file:
```
nano ~/.openclaw/openclaw.json
```
Look for the messaging section and add your Telegram bot token.
Alternatively, if you're still in the onboarding flow, paste the token when prompted.
The wizard will ask for your Telegram User ID. This authorizes you to control the bot.
Get your User ID by messaging @userinfobot on Telegram. It replies with your numeric ID. Copy and paste it into the OpenClaw prompt.
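For manual edits, the relevant block of `openclaw.json` looks roughly like this. The key names here are illustrative and the schema may differ between versions; confirm against the OpenClaw docs. The token is the example from BotFather above, and `allowedUsers` holds your numeric Telegram User ID:

```json
{
  "channels": {
    "telegram": {
      "botToken": "123456789:ABCdefGHIjklMNOpqrsTUVwxyz",
      "allowedUsers": [123456789]
    }
  }
}
```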
Pairing Your Device
Once the token and User ID are configured, restart OpenClaw:
```
docker compose restart
```
Check logs:
```
docker compose logs -f
```
You should see "Telegram configured" and "gateway restarting".
Open Telegram. Find your bot (search for the username you created). Start a chat with `/start`.
The bot replies with a pairing code.
In your terminal, approve the pairing:
```
openclaw pairing approve telegram <code>
```
Replace `<code>` with the code from Telegram.
If successful, the bot sends a welcome message. Your agent is now live.
Testing the Connection
Send a simple message: "Hello."
The bot should respond. If it does, your setup is correct.
Try something more complex: "What's the weather today?"
If OpenClaw has web search enabled, it should fetch current weather data and reply.
If it doesn't respond, check logs for errors. Common issues: API key misconfiguration, network blocks, or incorrect pairing.
UI Patterns: Commands and Interaction
Telegram bots support commands (prefixed with `/`) and free text.
OpenClaw responds to both. You can issue structured commands like `/skills` to list available skills, or send natural language requests.
The bot understands context. If you say "Summarize the last three emails," it knows you're referring to your email account (assuming you've configured that skill).
Commands are handy for system tasks: checking status, listing skills, clearing memory. Natural language is better for delegation.
Experiment. See what works. The agent learns your patterns over time.
Your First Skills
Skills are what make OpenClaw useful. A skill is a defined workflow that the agent can execute.
Without skills, OpenClaw is just a chatbot. With skills, it's an assistant.
What a Skill Is
A skill is a JSON file that describes:
Name: What the skill is called.
Description: When the agent should use it.
Instructions: Step-by-step logic for execution.
Tools: APIs or system functions it needs.
OpenClaw uses a pattern called progressive disclosure. When the agent starts, it loads only skill names and descriptions. When a task matches a skill's description, it reads the full instructions.
This keeps the agent fast. It doesn't load every workflow into context unless needed.
Note: The skill schema shown here reflects community documentation at the time of writing. Check the [official docs](https://github.com/openclaw/openclaw) for the latest schema and supported fields.
Skill Anatomy: Example
Let's create a simple skill: a crypto price checker.
Navigate to your skills directory:
```
cd ~/openclaw/workspace/skills
mkdir crypto-price
cd crypto-price
nano skill.json
```
Paste this:
```json
{
  "name": "crypto-price",
  "description": "Fetches the current price of a cryptocurrency",
  "instructions": "When the user asks for a crypto price, call the CoinGecko API to fetch the current USD price. Return the result in a readable format.",
  "tools": ["web_request"],
  "enabled": true
}
```
Save and exit.
This skill does one thing: fetches crypto prices via API. Simple, focused, effective.
Creating a Skill: To-Do Integration
Let's create something more useful: a to-do skill that adds tasks to a list.
Create a new skill directory:
```
mkdir ~/openclaw/workspace/skills/todo-manager
cd ~/openclaw/workspace/skills/todo-manager
nano skill.json
```
Example structure:
```json
{
  "name": "todo-manager",
  "description": "Manages a personal to-do list. Can add, list, and complete tasks.",
  "instructions": "When the user says 'add task' or 'todo', create a new entry in the task list file. When they say 'list tasks', read and display all tasks. When they say 'complete task', mark it done.",
  "tools": ["file_write", "file_read"],
  "enabled": true
}
```
This skill uses file operations. OpenClaw can read and write files in the workspace directory.
The agent interprets "add task [description]" as a trigger to write to `tasks.txt`. It reads the file when you ask for the list. It modifies entries when you mark things complete.
No database. No API. Just files. But it works.
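Under the hood, those tool calls reduce to ordinary file operations. A sketch of the equivalent shell steps; the exact file name and checkbox format are up to the agent, so `tasks.txt` and the `[ ]`/`[x]` markers are assumptions:

```shell
TASKS=tasks.txt

# "add task" appends a new entry
echo "[ ] Write Part 3 of blog post" >> "$TASKS"

# "list tasks" reads the file back
cat "$TASKS"

# "complete task" rewrites the matching line
sed -i 's/^\[ \] Write Part 3/[x] Write Part 3/' "$TASKS"
```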
Loading and Testing Skills
Once you've created a skill, restart OpenClaw to load it:
```
docker compose restart
```
In Telegram, send `/skills` to list available skills.
You should see your new skills listed as "ready".
Test the crypto skill: "What's the current price of Bitcoin?"
If configured correctly, OpenClaw calls the CoinGecko API and returns the price.
Test the to-do skill: "Add task: Write Part 3 of blog post"
Check the workspace:
```
cat ~/openclaw/workspace/tasks.txt
```
Your task should be there.
Debugging Skills
Skills fail. Usually because:
- Syntax errors in JSON.
- Missing tool permissions.
- API keys not configured.
- Instructions too vague for the agent to parse.
Check logs:
```
docker compose logs -f
```
OpenClaw shows skill-loading errors, tool-execution failures, and API request issues.
Test incrementally: Start with simple skills. File operations before API calls. Static responses before dynamic queries. Build complexity gradually.
Validate JSON: Use a linter. One misplaced comma breaks everything.
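No linter installed? `python3 -m json.tool` ships with Ubuntu and exits nonzero on bad JSON. For example, a trailing comma, the classic skill-breaker:

```shell
# A trailing comma is invalid JSON; json.tool rejects it and reports where.
printf '%s' '{"name": "todo-manager", "enabled": true,}' | python3 -m json.tool \
  || echo "broken JSON, fix it before restarting"
```

Run `python3 -m json.tool skill.json > /dev/null` against each skill file before `docker compose restart`.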
Skill Library
OpenClaw has a community skill repository. Browse it for inspiration: email summarizer, calendar integration, web scraper, RSS feed reader, weather checker, and more.
You can copy existing skills and modify them. Most are MIT licensed. Use what works.
Common Setup Mistakes
You're going to hit issues. Everyone does. Here are the ones that trip people up most often.
Wrong API Key Format
OpenAI keys start with `sk-`. Anthropic keys start with `sk-ant-`. If your key doesn't match the provider's format, it won't work.
Copy keys carefully. No extra spaces. No line breaks.
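A stray newline from a copy-paste is invisible but fatal. A quick check before you blame the provider; the key value here is obviously fake:

```shell
KEY="sk-ant-example-not-a-real-key"

# Any whitespace inside the key means a bad paste.
if printf '%s' "$KEY" | grep -q '[[:space:]]'; then
  echo "key contains whitespace, re-copy it"
else
  echo "key looks clean"
fi
```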
Firewall Blocking Outbound Requests
If your agent can't reach external APIs, check UFW.
Make sure outbound traffic is allowed:
```
sudo ufw default allow outgoing
```
If you locked down specific ports, whitelist HTTPS (443).
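You can probe outbound HTTPS from the VPS directly. `api.telegram.org` is just a convenient external endpoint here; any API host works:

```shell
# Prints "reachable" or "blocked" within 10 seconds either way.
check_https() {
  if curl -sI --max-time 10 "https://$1" > /dev/null; then
    echo "reachable: $1"
  else
    echo "blocked: $1"
  fi
}
check_https api.telegram.org
```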
Docker Permission Issues
If you get "permission denied" errors running Docker commands, your user isn't in the Docker group.
Fix:
```
sudo usermod -aG docker $USER
```
Log out and back in.
Model Overload
If you ask the agent to do something complex and it times out, your model might not be powerful enough.
Try a more capable model tier, or switch from a small Ollama model to a larger one. Check your provider's current model offerings — they change frequently.
Telegram Pairing Fails
If pairing doesn't work, double-check your User ID.
Message @userinfobot again. Make sure you copied the number correctly.
Also, verify your bot token. If BotFather gave you a new token because you created multiple bots, use the correct one.
Skills Not Loading
If `/skills` returns an empty list, OpenClaw didn't find your skill files.
Check file paths. Skills must be in `~/openclaw/workspace/skills/`.
Check JSON syntax. One error breaks the whole file.
Restart after adding skills. OpenClaw loads them at startup.
Logs Show "Connection Refused"
Your VPS firewall, or the provider-level firewall, might be blocking Docker's internal networking.
Check Docker network:
```
docker network ls
```
Inspect logs for the specific service failing:
```
docker compose logs openclaw-gateway
```
Usually, this is a DNS or routing issue. Verify your VPS can reach external IPs.
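Before digging into Docker networking, confirm the host itself can resolve the outside world. A quick sketch; `api.telegram.org` is just an example host:

```shell
# DNS resolution on the host: prints the resolved address, or a warning on failure.
getent hosts api.telegram.org || echo "host DNS failing, fix /etc/resolv.conf first"

# If host DNS works but containers fail, test resolution from inside Docker:
# docker run --rm alpine nslookup api.telegram.org
```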
You're Live
If you made it here, you have:
A VPS running OpenClaw in Docker
Telegram connected and responding
At least one custom skill deployed
That's the foundation. Everything else builds on this.
Your agent is live. It's persistent. It remembers context. It executes tasks. It's yours.
Next step: make it better.
In Part 3, we'll turn this functional bot into a power-user tool. Model routing for cost optimization. Multi-agent workflows. Memory hygiene. Advanced skill patterns. Real automation that saves hours.
But for now, just appreciate this: You have a self-hosted AI agent. No SaaS subscription. No vendor lock-in. No data leaves your server except the model API calls you explicitly configure.
You built it. You own it. You control it.
That's the whole point.
Now go break something and fix it. That's how you learn.