OpenClaw Quick Start
In a Nutshell
OpenClaw is an open-source AI personal assistant framework that runs on your own computer or server. It connects to messaging platforms like WhatsApp, Telegram, and Discord, enabling AI to help you handle messages, execute tasks, and manage schedules.
Think of it as: A 24/7 online AI employee that communicates with you through your favorite chat apps.
Unlike cloud-based AI assistants, all of OpenClaw's data stays on your own device. Your chat history, preferences, and work files are all stored locally. No third-party servers snooping on your data, no monthly subscription locking down your workflow.
History
OpenClaw's story is an open-source legend:
Nov 2025 Peter Steinberger (iOS developer, PSPDFKit founder)
Built a weekend project connecting AI models to messaging apps
Originally called Warelay, purely a personal AI learning experiment
|
Nov 2025 Project renamed to Clawdbot, open-sourced
Grew from 0 to 123K GitHub Stars in 3 weeks
Developer community went viral
|
Jan 27, 2026 Anthropic requested trademark change
(Clawdbot name too similar to Claude)
Project renamed to Moltbot
|
Jan 30, 2026 Community vote, final name: OpenClaw
"Open" emphasizes open-source, "Claw" pays tribute to the lobster claw
Slogan: EXFOLIATE! EXFOLIATE!
|
Feb 2026 GitHub Stars surpassed 227K
Fork count exceeded 43,000
Became one of the fastest-growing open-source projects ever
Why did it grow so fast? Because it solved a real pain point: everyone wants a personal AI assistant, but nobody wants to hand all their data to the cloud. OpenClaw lets you run AI on your own device and interact through chat apps you already use, with zero learning curve.
Key Milestones
| Date | Event | Significance |
|---|---|---|
| Nov 2025 | Project created (Warelay) | Peter's personal AI learning project |
| Nov 2025 | Open-sourced as Clawdbot | 123K Stars in 3 weeks |
| Jan 2026 | Renamed to Moltbot | Anthropic trademark request |
| Jan 2026 | Final name: OpenClaw | Community vote decided |
| Feb 2026 | 227K+ Stars | One of the fastest-growing open-source projects ever |
| Feb 2026 | Daily release cadence | Version format vYYYY.M.D |
Comparison with Other AI Assistant Frameworks
There are quite a few AI assistant/chatbot frameworks on the market. OpenClaw's positioning is fundamentally different from them.
OpenClaw vs Botpress
| Dimension | OpenClaw | Botpress |
|---|---|---|
| Positioning | Personal AI assistant | Enterprise chatbot platform |
| Deployment | Local-first, runs on your own device | Cloud SaaS primarily |
| Data | All stored locally | Stored on Botpress cloud |
| AI Models | Supports 29+ providers, freely switchable | Mainly tied to its own models |
| Target Users | Developers, tech enthusiasts | Enterprise customer service teams |
| Conversation Design | Natural language driven, no flowcharts needed | Visual flow editor |
| Pricing | Completely free and open-source (MIT) | Limited free tier, enterprise version paid |
Botpress is suited for customer service bots with a visual conversation flow editor, ideal for non-technical users. OpenClaw doesn't do customer service — it's your personal assistant, helping you manage emails, write code, and control smart home devices.
OpenClaw vs Rasa
| Dimension | OpenClaw | Rasa |
|---|---|---|
| Architecture | Gateway (core service process for receiving and dispatching messages) + Agent (independent AI assistant instance) loop | NLU + Dialogue Management + Action Server |
| Learning Curve | npm install -g openclaw and you're running | Need to understand NLU pipeline, training data format |
| AI Capability | Directly calls large language models | Traditional NLU + optional LLM |
| Messaging Platforms | Built-in support for 15+ platforms | Need to write your own Connector |
| Tool Calling | Built-in browser control, file operations, etc. | Need to implement your own Action Server |
| Use Cases | Personal assistant, developer tools | Enterprise-level dialogue systems |
Rasa represents the traditional NLU approach, requiring annotated training data, defined intents and entities. OpenClaw stands directly on the shoulders of large language models — no training data needed, works out of the box.
OpenClaw vs DIY Solutions
Many developers think: Can't I just call the OpenAI API + build a Telegram Bot?
Of course you can, but you'll quickly run into these issues:
Problems a DIY solution needs to solve:
+-- Messaging platform adaptation (each platform API is different)
+-- Session management (multi-turn conversations, context windows)
+-- Streaming output (real-time AI response display)
+-- Tool calling (letting AI perform actual operations)
+-- Memory system (AI remembering your preferences)
+-- Security controls (preventing unauthorized access)
+-- Multi-model switching (different models for different tasks)
+-- Model failover (auto-switch to backup when primary model fails)
+-- Media processing (images, audio, video)
+-- Daemon management (24/7 operation)
+-- Update maintenance (keeping up with API changes)
OpenClaw handles all of this. You just need npm install -g openclaw && openclaw onboard, and you'll have a fully functional AI assistant in 5 minutes.
Recommendation Guide
| Your Need | Recommended Solution |
|---|---|
| Personal AI assistant, privacy-first | OpenClaw |
| Enterprise customer service bot, need visualization | Botpress |
| Enterprise-level dialogue system, need precise control | Rasa |
| Simple single-platform Bot | DIY solution |
| Multi-platform + multi-model + tool calling | OpenClaw |
Core Architecture
OpenClaw's architecture design is very elegant. The official self-definition is: Multi-channel AI gateway with extensible messaging integrations.
The core has three layers:
+------------------------------------------------------------------+
| Messaging Platform Layer (Channels) |
| |
| WhatsApp | Telegram | Discord | Slack | Signal | iMessage |
| BlueBubbles | Google Chat | Teams | LINE | Zalo | WebChat | ... |
+-------------------------------+----------------------------------+
|
+-------------------------------v----------------------------------+
| Gateway (Control Plane) |
| ws://127.0.0.1:18789 |
| |
| +----------+ +----------+ +----------+ +----------------+ |
| | Routing | | Session | | Security | | WebSocket API | |
| | Engine | | Manager | | Control | | | |
| +----------+ +----------+ +----------+ +----------------+ |
| |
| +----------+ +----------+ +----------+ +----------------+ |
| | Skill | | Memory | | Tool | | Plugin | |
| | System | | System | | System | | System | |
| +----------+ +----------+ +----------+ +----------------+ |
| |
| +----------+ +----------+ +----------+ +----------------+ |
| | Cron | | Webhook | | Media | | Control UI | |
| | Scheduler| | | | Pipeline | | | |
| +----------+ +----------+ +----------+ +----------------+ |
+-------------------------------+----------------------------------+
|
+-----------v-----------+
| Pi Agent Runtime |
| (RPC Mode) |
+-----------+-----------+
|
+-------------------------------v----------------------------------+
| AI Model Provider Layer (Models) |
| |
| OpenAI | Anthropic | Google | Ollama | Mistral | xAI |
| AWS Bedrock | Qwen | GLM | DeepSeek | OpenRouter | ... |
+------------------------------------------------------------------+
Data Flow: The Journey of a Message
What happens behind the scenes when you send a message to OpenClaw on WhatsApp?
1. WhatsApp message arrives
|
2. Baileys library receives message, converts to unified format
|
3. Gateway routing engine determines:
+-- Which Agent does this message belong to?
+-- Is the sender authorized? (DM pairing check)
+-- Was it @mentioned in a group chat?
|
4. Session manager loads/creates session:
+-- Load historical context
+-- Load memory files (MEMORY.md + today's log)
+-- Load skill instructions
|
5. Pi Agent executes reasoning loop:
+-- Assemble complete Prompt (system instructions + memory + history + user message)
+-- Call AI model (e.g., Claude Opus 4.6)
+-- Model returns text or tool call request
+-- If tool call -> execute tool -> feed result back to model
+-- Loop until model gives final response
|
6. Streaming output:
+-- AI response pushed to WhatsApp in real-time
+-- Simultaneously pushed to WebChat UI (if open)
+-- Typing indicator lets the other party know AI is "thinking"
|
7. State persistence:
+-- Session history written to disk
+-- If AI decides to remember something -> written to memory file
+-- Usage statistics updated
Key design decisions in this flow:
Each session executes serially — Messages within the same session are queued, preventing race conditions
Streaming output — No need to wait for AI to finish thinking before sending, see responses in real-time
Tool calling loop — AI can call multiple tools consecutively until the task is complete
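The reasoning loop in step 5 can be sketched in a few lines of TypeScript. The model here is a stub and the tool registry is hypothetical; this only illustrates the call-tool-feed-result-back shape of the loop, not OpenClaw's actual runtime:

```typescript
// Minimal sketch of the step-5 reasoning loop. The model and tools
// are stand-ins, not OpenClaw's real APIs.
type ToolCall = { tool: string; args: Record<string, unknown> };
type ModelReply = { text?: string; toolCall?: ToolCall };

// Hypothetical tool registry: each tool is an atomic operation.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  read_file: (args) => `contents of ${args.path}`,
};

// Stub model: asks for a tool once, then gives a final answer.
function stubModel(history: string[]): ModelReply {
  if (!history.some((h) => h.startsWith("tool:"))) {
    return { toolCall: { tool: "read_file", args: { path: "notes.md" } } };
  }
  return { text: "done" };
}

// The loop: call the model, execute any requested tool, feed the
// result back, and repeat until the model returns plain text.
function agentLoop(userMessage: string): string {
  const history = [`user:${userMessage}`];
  for (;;) {
    const reply = stubModel(history);
    if (reply.toolCall) {
      const result = tools[reply.toolCall.tool](reply.toolCall.args);
      history.push(`tool:${result}`); // tool result goes back to the model
      continue;
    }
    return reply.text ?? "";
  }
}
```

The real runtime adds streaming, serial session queues, and permission checks around this core, but the loop structure is the same.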
Core Concepts Explained
Gateway
Gateway is the heart of OpenClaw, a long-running daemon process. It's not a simple HTTP server, but a complete control plane.
Gateway responsibilities:
Gateway Core Responsibilities:
+-- Connection Management
| +-- Manage long connections to all messaging platforms
| +-- WhatsApp uses Baileys, Telegram uses grammY, Discord uses discord.js
| +-- Auto-reconnect, heartbeat keepalive
| +-- Unified message format conversion
|
+-- Session Management
| +-- Main session (direct chat)
| +-- Group chat isolation (independent session per group)
| +-- Activation modes (@mention, keyword trigger, etc.)
| +-- Queue mode (message queuing)
|
+-- Agent Scheduling
| +-- Multi-Agent routing (different platforms/contacts -> different Agents)
| +-- Independent workspace per Agent
| +-- Independent authentication and configuration
|
+-- Security Control
| +-- DM pairing mechanism (unknown senders require verification code)
| +-- Allow list (allowFrom)
| +-- Permission isolation
|
+-- API Exposure
| +-- WebSocket API (default 127.0.0.1:18789)
| +-- Control UI (Web control panel)
| +-- WebChat (browser chat interface)
|
+-- Automation
+-- Cron scheduled tasks
+-- Webhook reception
+-- Gmail Pub/Sub push
Gateway is installed as a system service: macOS uses launchd, Linux uses systemd. This way it auto-starts on boot and runs 24/7.
# Install Gateway daemon
openclaw onboard --install-daemon
# Manual start (for debugging)
openclaw gateway --port 18789 --verbose
# Check Gateway health
openclaw doctor
Channel (Messaging Platform Abstraction Layer)
Channel is OpenClaw's unified abstraction for messaging platforms. Whether you're using WhatsApp or Telegram, it's just a Channel to the Gateway.
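As an illustration of that abstraction, a unified inbound-message shape might look like the following. The field names are assumptions for this sketch; the real interface lives in OpenClaw's source:

```typescript
// Hypothetical unified inbound-message shape. These fields are
// assumptions for illustration, not OpenClaw's actual types.
interface InboundMessage {
  channel: string;      // "whatsapp", "telegram", "discord", ...
  sender: string;       // platform-specific sender id
  text: string;
  isGroup: boolean;
  mentionsBot: boolean; // drives @mention activation in group chats
}

// A Telegram-style payload normalized into the unified shape (stub data).
const msg: InboundMessage = {
  channel: "telegram",
  sender: "12345",
  text: "what's the weather?",
  isGroup: false,
  mentionsBot: false,
};
```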
What each Channel needs to handle:
| Capability | Description |
|---|---|
| Messaging | Text, images, audio, video, files |
| Group Chat Support | @mention detection, reply quoting, group member info |
| Typing Indicator | Show "typing..." to the other party |
| Message Chunking | Auto-split long messages (different platform limits) |
| Media Processing | Image compression, audio transcoding, video frame extraction |
| Authentication | Different authentication methods per platform |
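Message chunking, for example, is essentially splitting at a platform's length limit while preferring natural boundaries. A minimal sketch (the limit value and boundary rule are illustrative):

```typescript
// Split long text into chunks no longer than `limit`, preferring to
// break at the last newline inside the window. Illustrative only.
function chunkMessage(text: string, limit: number): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    // Prefer breaking at the last newline inside the window.
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit; // no usable boundary: hard cut
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```

Real channels also have to respect markup and media captions, but the core idea is the same per-platform limit handling.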
Currently supported Channels fall into two categories:
Core Channels (independent implementations in src/ directory):
| Platform | Integration Library | Authentication |
|---|---|---|
| WhatsApp | Baileys (Web protocol) | QR code pairing |
| Telegram | grammY | Bot Token |
| Discord | discord.js | Bot Token |
| Slack | Bolt | OAuth App |
| Signal | signal-cli | Phone number registration |
| BlueBubbles | BlueBubbles API | iMessage (recommended) |
| iMessage | imsg CLI | macOS native (legacy) |
| WebChat | Built-in | No authentication needed |
Extension Channels (loaded via extensions):
| Platform | Description |
|---|---|
| Google Chat | Google Workspace integration |
| Microsoft Teams | Enterprise communication platform |
| Matrix | Decentralized communication protocol |
| Zalo | Vietnam's mainstream messaging app |
| Zalo Personal | Zalo personal account |
| LINE | Mainstream in Japan/Southeast Asia |
| Feishu (Lark) | ByteDance enterprise communication platform |
Channel security defaults are important. Since OpenClaw connects to real messaging platforms, DMs from strangers default to pairing mode:
Stranger sends message -> OpenClaw returns a pairing code -> You confirm in terminal
|
openclaw pairing approve telegram ABC123
|
User is added to allow list, subsequent messages processed normallySkills
If tools are AI's hands, then skills are the textbooks teaching AI how to combine these tools.
The essence of a skill is a set of system instructions + tool definitions, telling AI what to do in specific scenarios.
Tools = Individual atomic operations
e.g., Read file, send HTTP request, execute Shell command
Skills = Knowledge for combining tools to complete complex tasks
e.g., Manage GitHub Issues = Search Issues + Read details + Add comments + Modify labels
Skills are organized in three tiers:
| Tier | Description | Source |
|---|---|---|
| Bundled Skills | Built-in skills, installed with OpenClaw | Core repository |
| Managed Skills | Community skills installed via ClawHub | clawhub.ai |
| Workspace Skills | Your own local skills | Workspace directory |
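For a flavor of what a Workspace Skill can look like, here is a hypothetical skill file. The exact file format and frontmatter fields are defined in the official docs; this sketch only illustrates the idea of instructions that teach the AI how to combine tools:

```markdown
---
name: standup-notes
description: Collect yesterday's Git commits and draft a standup summary
---

When the user asks for a standup update:
1. Run `git log --since=yesterday --oneline` in the project workspace.
2. Group the commits by repository.
3. Draft a three-bullet summary and ask before posting it anywhere.
```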
50+ built-in skills overview:
| Category | Skill | Function |
|---|---|---|
| Productivity | gog | Google Workspace (email, calendar, docs) |
| | notion | Notion pages and database management |
| | trello | Trello boards and task management |
| | slack | Slack workspace message management |
| | 1password | 1Password password lookup (read-only) |
| Note Management | obsidian | Obsidian note search and management |
| | apple-notes | Apple Notes |
| | bear-notes | Bear Notes |
| Developer Tools | coding-agent | Programming assistant (write code, debug, refactor) |
| | github | GitHub repo management, PRs, Issues |
| | gh-issues | GitHub Issues specialized management |
| | tmux | Terminal session management |
| Task Management | apple-reminders | Apple Reminders |
| | things-mac | Things task management |
| Multimedia | spotify-player | Spotify music control |
| | voice-call | Voice calls (ElevenLabs) |
| | peekaboo | Screenshot capture |
| | camsnap | Camera photo |
| | video-frames | Video frame extraction |
| Utilities | weather | Weather lookup |
| | summarize | Content summarization |
| | nano-pdf | PDF processing |
| | skill-creator | Create custom skills |
| System Admin | healthcheck | System health check |
| | session-logs | Session log viewer |
| | model-usage | Model usage statistics |
Memory System
OpenClaw's memory system design philosophy is: Simple to the extreme.
No vector database, no RAG pipeline, no embedding index. AI's memory is just Markdown files written to disk.
~/.openclaw/workspace/
+-- MEMORY.md # Long-term memory (curated important information)
+-- memory/
+-- 2026-02-25.md # Today's log
+-- 2026-02-24.md # Yesterday's log
+-- ... # Older logs
Two-layer memory architecture:
| Layer | File | Content | When Loaded |
|---|---|---|---|
| Long-term Memory | MEMORY.md | Your preferences, important decisions, key facts | Every session startup |
| Daily Log | memory/YYYY-MM-DD.md | Notes and context for the day | Load today + yesterday |
AI has two memory tools:
memory_search — Semantic search across all memory files
memory_get — Precise retrieval of specific memory content
Benefits of this design:
Readable — You can open the Markdown files anytime to see what AI remembers
Editable — You can directly modify memory files, delete things you don't want AI to remember
Version-controllable — Put it in Git to track memory changes
Zero dependencies — No additional database service needed
The memory system is a plugin slot — only one memory plugin can be active at a time. The official plan is to converge to a single recommended default in the future.
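The daily-log layout above can be sketched in a few lines of Node. The paths and helper name are illustrative, not OpenClaw's actual code; the point is that "remembering" is just appending Markdown to a date-named file:

```typescript
import { mkdirSync, appendFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Append a note to today's memory file (memory/YYYY-MM-DD.md).
// Hypothetical helper for illustration only.
function rememberToday(workspace: string, note: string): string {
  const day = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  const dir = join(workspace, "memory");
  mkdirSync(dir, { recursive: true });
  const file = join(dir, `${day}.md`);
  appendFileSync(file, `- ${note}\n`);
  return file;
}

// Demo against a throwaway workspace directory.
const ws = join(tmpdir(), "openclaw-memory-demo");
const file = rememberToday(ws, "User prefers metric units");
console.log(readFileSync(file, "utf8")); // plain, human-editable Markdown
```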
Tools
Tools are AI's ability to perform actual operations. OpenClaw comes with a powerful built-in toolset:
| Tool Category | Capability | Description |
|---|---|---|
| Browser Control | Open pages, screenshot, click, fill forms | Based on Playwright, with dedicated Chrome instance |
| Canvas | Visual workspace | Agent-driven UI, supports A2UI push |
| Node Operations | Camera, screen recording, location | Via macOS/iOS/Android nodes |
| File Operations | Read/write files, directory management | Operates within workspace |
| Shell Execution | Run command-line commands | Executed in security sandbox |
| Messaging | Send messages via any Channel | Cross-platform message push |
| Scheduled Tasks | Cron expression scheduling | Timed reminders, automated tasks |
| Webhook | Receive external events | Integration with third-party services |
Tool calling security model: AI requests tool call -> Gateway checks permissions -> Execute tool -> Return results to AI.
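That gate can be sketched as a simple policy check before execution. The policy shape here is an assumption for illustration, not OpenClaw's actual permission model:

```typescript
// Hypothetical permission gate: the Gateway consults a policy before
// running any tool the AI requests.
type Policy = { allowedTools: Set<string> };

function runTool(
  policy: Policy,
  name: string,
  exec: () => string,
): { ok: boolean; result?: string; error?: string } {
  if (!policy.allowedTools.has(name)) {
    return { ok: false, error: `tool "${name}" not permitted` };
  }
  return { ok: true, result: exec() };
}

const policy: Policy = { allowedTools: new Set(["read_file"]) };
const denied = runTool(policy, "shell_exec", () => "dangerous command");
const allowed = runTool(policy, "read_file", () => "file contents");
```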
Plugin System
OpenClaw's core stays lean; optional features are extended via plugins.
Plugin distribution methods:
npm packages — Install via npm, standard Node.js package management
Local extensions — Load local directories directly during development
Community plugins — Discover and install via ClawHub (clawhub.ai)
Plugin API provides an SDK:
// Plugin SDK import
import { ... } from 'openclaw/plugin-sdk'
The official stance on plugins: The bar for the core repository is very high. Most new features should be published as independent plugins to ClawHub, rather than merged into the core codebase.
MCP Support
OpenClaw supports MCP (Model Context Protocol) via the mcporter bridge:
OpenClaw Gateway <-> mcporter <-> MCP Servers
Benefits of this bridging approach:
Adding or replacing MCP servers doesn't require restarting Gateway
Core tools/context stays lean
Changes in the MCP ecosystem don't affect core stability
Tech Stack Details
Why TypeScript?
The official answer is straightforward: OpenClaw is essentially an orchestration system — handling Prompts, tools, protocols, and integrations. TypeScript was chosen because:
Widely used — Most developers can read and modify it
Fast iteration — Balance of dynamic typing + type checking
Rich ecosystem — npm has SDKs for almost all messaging platforms
Easy to extend — Low barrier for plugin development
Core Dependencies
From package.json, you can see OpenClaw's technology choices:
| Category | Library | Purpose |
|---|---|---|
| Runtime | Node.js >= 22.12.0 | Server-side JavaScript runtime |
| Language | TypeScript 5.9+ | Type safety |
| Build | tsdown | TypeScript bundler |
| Testing | Vitest 4.x | Unit tests + E2E tests |
| Code Quality | oxlint + oxfmt | Lint + formatting (written in Rust, extremely fast) |
| Web Framework | Express 5.x | HTTP/WebSocket service |
| WebSocket | ws | WebSocket communication |
| Schema | Zod 4.x + TypeBox | Runtime type validation |
| Database | sqlite-vec | Vector search (memory system) |
| Image Processing | Sharp | Image compression and conversion |
| Browser | Playwright | Browser automation |
| PDF | pdfjs-dist | PDF parsing |
| Configuration | JSON5 + dotenv | Main config file is JSON5, a JSON variant with comments (~/.openclaw/openclaw.json) |
Messaging Platform SDKs:
| Platform | Library |
|---|---|
| WhatsApp | @whiskeysockets/baileys |
| Telegram | grammy |
| Discord | discord.js (via @buape/carbon) |
| Slack | @slack/bolt + @slack/web-api |
| Feishu (Lark) | @larksuiteoapi/node-sdk |
| LINE | @line/bot-sdk |
| AWS Bedrock | @aws-sdk/client-bedrock |
AI Agent Core:
| Library | Purpose |
|---|---|
| @mariozechner/pi-agent-core | Agent runtime core |
| @mariozechner/pi-ai | AI model abstraction layer |
| @mariozechner/pi-coding-agent | Coding Agent |
| @mariozechner/pi-tui | Terminal UI |
Project Structure
openclaw/
+-- src/ # Core source code
+-- extensions/ # Extension Channels and plugins
+-- skills/ # Built-in skill definitions
+-- apps/
| +-- macos/ # macOS menu bar app (Swift)
| +-- ios/ # iOS app (Swift)
| +-- android/ # Android app (Kotlin)
+-- ui/ # Control UI (Web frontend)
+-- docs/ # Official docs (Mintlify)
+-- scripts/ # Build and utility scripts
+-- test/ # Test files
+-- dist/ # Build output
+-- openclaw.mjs # CLI entry point
+-- package.json # Dependencies and scripts
+-- tsconfig.json # TypeScript config
+-- vitest.unit.config.ts # Unit test config
+-- vitest.e2e.config.ts # E2E test config
+-- vitest.live.config.ts # Live test config
Development Toolchain
# Build from source
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm ui:build
pnpm build
# Development mode (auto-reload on file changes)
pnpm gateway:watch
# Run tests
pnpm test # Parallel unit tests
pnpm test:e2e # E2E tests
pnpm test:live # Live model tests
# Code quality
pnpm check # Format + type + lint
pnpm lint:fix # Auto-fix
Versioning Strategy
OpenClaw uses date-based version numbers: vYYYY.M.D, e.g., v2026.2.24.
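If you script against releases, the format is easy to generate. A small sketch (note that JavaScript months are zero-based, so February is month index 1):

```typescript
// vYYYY.M.D with unpadded month and day, matching e.g. v2026.2.24.
function dateVersion(d: Date): string {
  return `v${d.getFullYear()}.${d.getMonth() + 1}.${d.getDate()}`;
}

// e.g. dateVersion(new Date(2026, 1, 24)) returns "v2026.2.24"
```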
Three release channels:
| Channel | npm Tag | Description |
|---|---|---|
| stable | latest | Official release, recommended |
| beta | beta | Pre-release, may lack macOS app |
| dev | dev | Latest code from main branch |
# Switch channels
openclaw update --channel stable
openclaw update --channel beta
openclaw update --channel dev
Supported AI Model Providers
OpenClaw supports 29+ AI model providers — one of its core competitive advantages. You're not locked into any single model.
Recommended Configuration
The official strong recommendation is Anthropic Claude Opus 4.6, because:
Strong long-context processing capability
Better prompt injection defense
High tool calling accuracy
Complete Provider List
International Mainstream:
| Provider | Representative Model | Features |
|---|---|---|
| OpenAI | GPT-5.2 | Strongest general capability, OpenClaw sponsor |
| Anthropic | Claude Opus 4.6, Sonnet 4.6 | Officially recommended, strongest for coding and security |
| Google | Gemini 3.1 | Strong multimodal capabilities |
| Mistral | Mistral Large | European open-source model |
| xAI | Grok | Real-time information capability |
Local/Self-hosted:
| Provider | Representative Model | Features |
|---|---|---|
| Ollama | Llama, Qwen, Mistral | Run locally, privacy-first |
| vLLM | Various open-source models | High-performance local inference |
| node-llama-cpp | GGUF models | Run directly in Node.js |
China Mainland:
| Provider | Representative Model | Features |
|---|---|---|
| Qwen | Qwen-Max | Alibaba, strong Chinese capability |
| Qianfan (Baidu) | ERNIE | Baidu, Chinese optimized |
| GLM (Zhipu) | GLM-4 | Tsinghua-affiliated, strong academic capability |
| Moonshot/Kimi | Moonshot Vision | Chinese optimized, vision capability |
| Xiaomi | MiLM | Xiaomi AI |
| MiniMax | abab6 | Multimodal |
| DeepSeek | DeepSeek | Strong coding capability |
Proxy/Routing Services:
| Provider | Features |
|---|---|
| OpenRouter | One API Key for all models |
| LiteLLM | Unified interface proxy layer |
| AWS Bedrock | Enterprise-grade, supports Claude and Titan |
| Vercel AI Gateway | Vercel ecosystem integration |
| Cloudflare AI Gateway | CDN acceleration |
| Together AI | Open-source model hosting, cost-effective |
| NVIDIA NIM | GPU-accelerated inference |
| Venice AI | Privacy-first, no data logging |
| HuggingFace | Community open-source models |
Model Failover
OpenClaw supports configuring priority and failover strategies for multiple models. When the primary model is unavailable, it automatically switches to a backup model, ensuring the assistant is always online.
See official documentation for details.
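Conceptually, failover is a priority-ordered list with fallback on error. A minimal sketch with stand-in providers (names and calls are illustrative, not OpenClaw's configuration API):

```typescript
// Try providers in priority order; on error, fall through to the next.
type Provider = { name: string; call: (prompt: string) => string };

function withFailover(providers: Provider[], prompt: string): string {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return p.call(prompt);
    } catch (err) {
      lastError = err; // this provider failed; try the next one
    }
  }
  throw lastError; // every provider failed
}

// Demo: the primary is "down", so the backup answers.
const answer = withFailover(
  [
    { name: "primary", call: () => { throw new Error("rate limited"); } },
    { name: "backup", call: (p) => `backup says: ${p}` },
  ],
  "hello",
);
```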
Client Ecosystem
There are many ways to connect to the Gateway, covering mainstream platforms:
Native Apps
| Platform | Type | Tech Stack | Features |
|---|---|---|---|
| macOS | Menu bar app | Swift | Control panel, Voice Wake, Talk Mode, Canvas |
| iOS | Mobile app | Swift | Canvas, Voice Wake, Talk Mode, Camera |
| Android | Mobile app | Kotlin | Canvas, Talk Mode, Camera, Screen recording |
Command Line
# Chat directly with AI
openclaw agent --message "Check tomorrow's weather for me"
# Send message to a specific platform
openclaw message send --to +1234567890 --message "Hello"
# Interactive TUI
openclaw tui
Web Interface
Control UI — Browser control panel for managing Gateway configuration
WebChat — Browser chat interface for chatting directly with AI
Live Canvas — Agent-driven visual workspace
Voice Capabilities
Voice Wake — Voice activation, similar to "Hey Siri" (macOS/iOS/Android)
Talk Mode — Real-time voice conversation, based on ElevenLabs TTS
PTT (Push to Talk) — Push-to-talk mode
Community Ecosystem
Project Stats (as of February 2026)
| Metric | Data |
|---|---|
| GitHub Stars | 227,000+ |
| Forks | 43,000+ |
| Contributors | 30+ (core repo, GitHub API pagination limit) |
| Open Issues | 7,500+ |
| License | MIT |
| Release Frequency | Almost daily |
| Sponsors | OpenAI, Blacksmith |
Community Channels
| Channel | Link | Purpose |
|---|---|---|
| Discord | discord.gg/clawd | Main community, real-time discussion |
| GitHub Discussions | github.com/openclaw/openclaw | Feature requests, technical discussion |
| GitHub Issues | github.com/openclaw/openclaw/issues | Bug reports |
| Official Docs | docs.openclaw.ai | Complete documentation |
| Official Website | openclaw.ai | Project homepage |
| DeepWiki | deepwiki.com/openclaw/openclaw | AI-generated code documentation |
ClawHub (Skill Marketplace)
ClawHub (clawhub.ai) is OpenClaw's skill and plugin marketplace. Community developers can publish their own skills, and other users can install them with one click.
The official stance encourages new skills to be published to ClawHub first, rather than submitted to the core repository. The bar for merging into the core repository is very high.
Contribution Guide
If you want to contribute code to OpenClaw:
One PR = One issue — Don't bundle multiple unrelated fixes together
PR no more than 5000 lines — Extra-large PRs are only reviewed in special circumstances
Don't batch-submit small PRs — Each PR has review overhead
Small fixes can be combined — Related small fixes can go in one PR
Use Cases and Non-Use Cases
Suitable Scenarios for OpenClaw
Personal Productivity Assistant:
Manage schedules and to-dos via WhatsApp/Telegram
Let AI help you organize emails, reply to messages
Voice control smart home devices
Timed reminders and automated workflows
Developer Tools:
Manage GitHub Issues and PRs through chat apps
AI programming assistant, write code directly in chat
Monitor server status, auto-notify on anomalies
Automate DevOps tasks
Knowledge Management:
Connect Obsidian/Notion, AI helps organize notes
Auto-summarize long documents and web pages
Cross-platform information aggregation
Unified Multi-Platform Entry Point:
One AI assistant, accessible through all your favorite chat apps
Conversations started on WhatsApp can continue on Telegram
Unified memory system, AI "knows" you across all platforms
Unsuitable Scenarios for OpenClaw
| Scenario | Reason | Alternative |
|---|---|---|
| Enterprise customer service bot | OpenClaw is single-user design | Botpress, Intercom |
| Multi-user SaaS product | No multi-tenant architecture | Custom solution |
| Users who don't want to touch terminal | Installation and config currently require CLI | ChatGPT, Claude.ai |
| Low-spec devices | Requires Node.js 22+, significant memory usage | Lightweight bot frameworks |
| 100% uptime required | Personal devices may shut down/lose connectivity | Cloud AI services |
OpenClaw's design philosophy is clear: It's a personal assistant, not an enterprise platform. Single-user, local-first, privacy above all.
Version History and Roadmap
Current Priorities (from official VISION.md)
Highest Priority:
Security and safe defaults
Bug fixes and stability
Installation reliability and first-use experience
Next Priorities:
Support all mainstream model providers
Improve mainstream messaging platform support (and add high-demand platforms)
Performance and testing infrastructure
Better Computer Use and Agent capabilities
CLI and web frontend usability
macOS, iOS, Android, Windows, Linux companion apps
Features That Won't Be Merged (Currently)
The official team has explicitly listed PR types that won't be accepted for now:
New core skills that could go on ClawHub
Full documentation translations (plan to use AI auto-translation)
Commercial service integrations that don't clearly fall under model provider category
Wrappers around existing Channels (unless there's a clear capability or security gap)
First-class MCP runtime in core (mcporter already provides integration path)
Agent hierarchy frameworks (managers of managers/nested planning trees)
Heavy orchestration layers that duplicate existing Agent and tool infrastructure
The official team says these are roadmap guardrails, not iron laws. Strong user demand and technical justification can change them.
Suggested Learning Path
Beginner Track (1-2 hours)
Step 1: Install OpenClaw
-> Visit docs.openclaw.ai/start/getting-started
-> Run openclaw onboard
Step 2: Connect your first messaging platform
-> Visit docs.openclaw.ai for messaging platform guides
-> Recommended to start with Telegram (simplest)
Step 3: Configure AI model
-> Visit docs.openclaw.ai for model configuration
-> Recommended to start with OpenAI or Anthropic
Step 4: Try built-in skills
-> Visit docs.openclaw.ai for skill system guides
-> Try simple skills like weather, summarize
Intermediate Track (1 week)
Step 5: Understand the memory system
-> Visit docs.openclaw.ai for memory system guides
-> Teach AI to remember your preferences
Step 6: Multi-Agent configuration
-> Visit docs.openclaw.ai for multi-agent guides
-> Create dedicated Agents for different scenarios
Step 7: Docker deployment
-> Visit docs.openclaw.ai for Docker deployment guides
-> Run 24/7 on a server
Step 8: Security hardening
-> Visit docs.openclaw.ai for security configuration
-> Configure DM policies and permissions
Advanced Track (Continuous Learning)
Step 9: Develop custom skills
-> Learn Workspace Skills development
-> Publish to ClawHub
Step 10: Plugin development
-> Learn Plugin SDK
-> Develop custom Channels or tools
Step 11: Source code contribution
-> Fork the repo, build from source
-> Fix bugs or add features
-> Submit PR
Recommended Learning Resources
| Resource | Link | Description |
|---|---|---|
| Official Docs | docs.openclaw.ai | The most authoritative reference |
| Getting Started | docs.openclaw.ai/start/getting-started | Official getting started guide |
| Onboarding Wizard | docs.openclaw.ai/start/wizard | Interactive installation wizard |
| Discord Community | discord.gg/clawd | Real-time Q&A |
| DeepWiki | deepwiki.com/openclaw/openclaw | AI-generated code documentation |
| This Tutorial Series | What you're reading now | Complete guide |
Quick Experience
If you can't wait, here's the fastest way to get started:
# 1. Install
npm install -g openclaw@latest
# 2. Run the setup wizard (guides you through model and platform configuration)
openclaw onboard --install-daemon
# 3. Start Gateway (debug mode)
openclaw gateway --port 18789 --verbose
# 4. Send a test message
openclaw agent --message "Hello, introduce yourself"
In 5 minutes, you'll have an AI personal assistant running on your own computer.