OpenClaw Quick Start

In a Nutshell

OpenClaw is an open-source AI personal assistant framework that runs on your own computer or server. It connects to messaging platforms like WhatsApp, Telegram, and Discord, enabling AI to help you handle messages, execute tasks, and manage schedules.

Think of it as a 24/7 online AI employee that communicates with you through your favorite chat apps.

Unlike cloud-based AI assistants, all of OpenClaw's data stays on your own device. Your chat history, preferences, and work files are stored locally: no third-party server snooping on your data, and no monthly subscription locking you in.

History

OpenClaw's story is an open-source legend:

```
Nov 2025      Peter Steinberger (iOS developer, PSPDFKit founder)
              Built a weekend project connecting AI models to messaging apps
              Originally called Warelay, purely a personal AI learning experiment
                      |
Nov 2025      Project renamed to Clawdbot, open-sourced
              Grew from 0 to 123K GitHub Stars in 3 weeks
              Developer community went viral
                      |
Jan 27, 2026  Anthropic requested trademark change
              (Clawdbot name too similar to Claude)
              Project renamed to Moltbot
                      |
Jan 30, 2026  Community vote, final name: OpenClaw
              "Open" emphasizes open-source, "Claw" pays tribute to the lobster claw
              Slogan: EXFOLIATE! EXFOLIATE!
                      |
Feb 2026      GitHub Stars surpassed 227K
              Fork count exceeded 43,000
              Became one of the fastest-growing open-source projects ever
```

Why did it grow so fast? Because it solved a real pain point: Everyone wants a personal AI assistant, but nobody wants to hand all their data to the cloud. OpenClaw lets you run AI on your own device, interact through chat apps you already use, with zero learning curve.

Key Milestones

| Date | Event | Significance |
|---|---|---|
| Nov 2025 | Project created (Warelay) | Peter's personal AI learning project |
| Nov 2025 | Open-sourced as Clawdbot | 123K Stars in 3 weeks |
| Jan 2026 | Renamed to Moltbot | Anthropic trademark request |
| Jan 2026 | Final name: OpenClaw | Community vote decided |
| Feb 2026 | 227K+ Stars | One of the fastest-growing open-source projects ever |
| Feb 2026 | Daily release cadence | Version format vYYYY.M.D |

Comparison with Other AI Assistant Frameworks

There are quite a few AI assistant and chatbot frameworks on the market, but OpenClaw's positioning is fundamentally different.

OpenClaw vs Botpress

| Dimension | OpenClaw | Botpress |
|---|---|---|
| Positioning | Personal AI assistant | Enterprise chatbot platform |
| Deployment | Local-first, runs on your own device | Primarily cloud SaaS |
| Data | All stored locally | Stored on Botpress cloud |
| AI Models | Supports 29+ providers, freely switchable | Mainly tied to its own models |
| Target Users | Developers, tech enthusiasts | Enterprise customer service teams |
| Conversation Design | Natural-language driven, no flowcharts needed | Visual flow editor |
| Pricing | Completely free and open-source (MIT) | Limited free tier, paid enterprise version |

Botpress is suited for customer service bots with a visual conversation flow editor, ideal for non-technical users. OpenClaw doesn't do customer service — it's your personal assistant, helping you manage emails, write code, and control smart home devices.

OpenClaw vs Rasa

| Dimension | OpenClaw | Rasa |
|---|---|---|
| Architecture | Gateway (core service process for receiving and dispatching messages) + Agent (independent AI assistant instance) loop | NLU + Dialogue Management + Action Server |
| Learning Curve | `npm install -g openclaw` and you're running | Must understand the NLU pipeline and training-data format |
| AI Capability | Directly calls large language models | Traditional NLU + optional LLM |
| Messaging Platforms | Built-in support for 15+ platforms | Write your own Connector |
| Tool Calling | Built-in browser control, file operations, etc. | Implement your own Action Server |
| Use Cases | Personal assistant, developer tools | Enterprise-level dialogue systems |

Rasa represents the traditional NLU approach: it requires annotated training data and hand-defined intents and entities. OpenClaw stands directly on the shoulders of large language models: no training data needed, it works out of the box.

OpenClaw vs DIY Solutions

Many developers think: Can't I just call the OpenAI API + build a Telegram Bot?

Of course you can, but you'll quickly run into these issues:

```
Problems a DIY solution needs to solve:
+-- Messaging platform adaptation (each platform API is different)
+-- Session management (multi-turn conversations, context windows)
+-- Streaming output (real-time AI response display)
+-- Tool calling (letting AI perform actual operations)
+-- Memory system (AI remembering your preferences)
+-- Security controls (preventing unauthorized access)
+-- Multi-model switching (different models for different tasks)
+-- Model failover (auto-switch to backup when primary model fails)
+-- Media processing (images, audio, video)
+-- Daemon management (24/7 operation)
+-- Update maintenance (keeping up with API changes)
```

OpenClaw handles all of this. You just need `npm install -g openclaw && openclaw onboard`, and you'll have a fully functional AI assistant in 5 minutes.

Recommendation Guide

| Your Need | Recommended Solution |
|---|---|
| Personal AI assistant, privacy-first | OpenClaw |
| Enterprise customer service bot, need visualization | Botpress |
| Enterprise-level dialogue system, need precise control | Rasa |
| Simple single-platform bot | DIY solution |
| Multi-platform + multi-model + tool calling | OpenClaw |

Core Architecture

OpenClaw's architecture is cleanly layered. The project describes itself as: Multi-channel AI gateway with extensible messaging integrations.

The core has three layers:

```
+------------------------------------------------------------------+
|                    Messaging Platform Layer (Channels)            |
|                                                                  |
|  WhatsApp | Telegram | Discord | Slack | Signal | iMessage       |
|  BlueBubbles | Google Chat | Teams | LINE | Zalo | WebChat | ... |
+-------------------------------+----------------------------------+
                                |
+-------------------------------v----------------------------------+
|                    Gateway (Control Plane)                        |
|                    ws://127.0.0.1:18789                          |
|                                                                  |
|  +----------+  +----------+  +----------+  +----------------+   |
|  | Routing  |  | Session  |  | Security |  | WebSocket API  |   |
|  | Engine   |  | Manager  |  | Control  |  |                |   |
|  +----------+  +----------+  +----------+  +----------------+   |
|                                                                  |
|  +----------+  +----------+  +----------+  +----------------+   |
|  | Skill    |  | Memory   |  | Tool     |  | Plugin         |   |
|  | System   |  | System   |  | System   |  | System         |   |
|  +----------+  +----------+  +----------+  +----------------+   |
|                                                                  |
|  +----------+  +----------+  +----------+  +----------------+   |
|  | Cron     |  | Webhook  |  | Media    |  | Control UI     |   |
|  | Scheduler|  |          |  | Pipeline |  |                |   |
|  +----------+  +----------+  +----------+  +----------------+   |
+-------------------------------+----------------------------------+
                                |
                    +-----------v-----------+
                    |  Pi Agent Runtime     |
                    |  (RPC Mode)           |
                    +-----------+-----------+
                                |
+-------------------------------v----------------------------------+
|                    AI Model Provider Layer (Models)               |
|                                                                  |
|  OpenAI | Anthropic | Google | Ollama | Mistral | xAI            |
|  AWS Bedrock | Qwen | GLM | DeepSeek | OpenRouter | ...         |
+------------------------------------------------------------------+
```

Data Flow: The Journey of a Message

What happens behind the scenes when you send a message to OpenClaw on WhatsApp?

```
1. WhatsApp message arrives
   |
2. Baileys library receives message, converts to unified format
   |
3. Gateway routing engine determines:
   +-- Which Agent does this message belong to?
   +-- Is the sender authorized? (DM pairing check)
   +-- Was it @mentioned in a group chat?
   |
4. Session manager loads/creates session:
   +-- Load historical context
   +-- Load memory files (MEMORY.md + today's log)
   +-- Load skill instructions
   |
5. Pi Agent executes reasoning loop:
   +-- Assemble complete Prompt (system instructions + memory + history + user message)
   +-- Call AI model (e.g., Claude Opus 4.6)
   +-- Model returns text or tool call request
   +-- If tool call -> execute tool -> feed result back to model
   +-- Loop until model gives final response
   |
6. Streaming output:
   +-- AI response pushed to WhatsApp in real-time
   +-- Simultaneously pushed to WebChat UI (if open)
   +-- Typing indicator lets the other party know AI is "thinking"
   |
7. State persistence:
   +-- Session history written to disk
   +-- If AI decides to remember something -> written to memory file
   +-- Usage statistics updated
```

Key design decisions in this flow:

  • Each session executes serially — Messages within the same session are queued, preventing race conditions

  • Streaming output — No need to wait for AI to finish thinking before sending, see responses in real-time

  • Tool calling loop — AI can call multiple tools consecutively until the task is complete
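
The serial-execution guarantee can be sketched as a per-session promise chain. This is a hypothetical `SessionQueue`, not OpenClaw's actual implementation, but it shows how queuing per session prevents race conditions while different sessions still run concurrently:

```typescript
// Hypothetical sketch of per-session serial execution: tasks for the
// same session are chained onto one promise, so they never interleave.
class SessionQueue {
  private tails = new Map<string, Promise<void>>();

  // Enqueue a task for a session; it runs only after all earlier
  // tasks for that session have finished.
  run<T>(sessionId: string, task: () => Promise<T>): Promise<T> {
    const tail = this.tails.get(sessionId) ?? Promise.resolve();
    const result = tail.then(task);
    // Keep the chain alive even if this task rejects.
    this.tails.set(sessionId, result.then(() => {}, () => {}));
    return result;
  }
}
```

Because each session id gets its own chain, a flood of messages in one group chat never blocks your direct messages.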

Core Concepts Explained

Gateway

Gateway is the heart of OpenClaw, a long-running daemon process. It's not a simple HTTP server, but a complete control plane.

Gateway responsibilities:

```
Gateway Core Responsibilities:
+-- Connection Management
|   +-- Manage long connections to all messaging platforms
|   +-- WhatsApp uses Baileys, Telegram uses grammY, Discord uses discord.js
|   +-- Auto-reconnect, heartbeat keepalive
|   +-- Unified message format conversion
|
+-- Session Management
|   +-- Main session (direct chat)
|   +-- Group chat isolation (independent session per group)
|   +-- Activation modes (@mention, keyword trigger, etc.)
|   +-- Queue mode (message queuing)
|
+-- Agent Scheduling
|   +-- Multi-Agent routing (different platforms/contacts -> different Agents)
|   +-- Independent workspace per Agent
|   +-- Independent authentication and configuration
|
+-- Security Control
|   +-- DM pairing mechanism (unknown senders require verification code)
|   +-- Allow list (allowFrom)
|   +-- Permission isolation
|
+-- API Exposure
|   +-- WebSocket API (default 127.0.0.1:18789)
|   +-- Control UI (Web control panel)
|   +-- WebChat (browser chat interface)
|
+-- Automation
    +-- Cron scheduled tasks
    +-- Webhook reception
    +-- Gmail Pub/Sub push
```

Gateway is installed as a system service: macOS uses launchd, Linux uses systemd. This way it auto-starts on boot and runs 24/7.

```bash
# Install Gateway daemon
openclaw onboard --install-daemon

# Manual start (for debugging)
openclaw gateway --port 18789 --verbose

# Check Gateway health
openclaw doctor
```

Channel (Messaging Platform Abstraction Layer)

Channel is OpenClaw's unified abstraction for messaging platforms. Whether you're using WhatsApp or Telegram, it's just a Channel to the Gateway.

What each Channel needs to handle:

| Capability | Description |
|---|---|
| Messaging | Text, images, audio, video, files |
| Group Chat Support | @mention detection, reply quoting, group member info |
| Typing Indicator | Show "typing..." to the other party |
| Message Chunking | Auto-split long messages (platform limits differ) |
| Media Processing | Image compression, audio transcoding, video frame extraction |
| Authentication | Per-platform authentication methods |
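
Message chunking is a good example of the work a Channel hides. A minimal sketch, with illustrative limit values and a hypothetical `chunkMessage` helper (not OpenClaw's actual code):

```typescript
// Illustrative per-platform message limits (real limits vary and change).
const MAX_LEN: Record<string, number> = { telegram: 4096, discord: 2000 };

// Split a long message into chunks, preferring to break at newlines
// so paragraphs and code blocks stay readable.
function chunkMessage(text: string, platform: string): string[] {
  const max = MAX_LEN[platform] ?? 2000;
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > max) {
    // Break at the last newline inside the window, if there is one.
    let cut = rest.lastIndexOf("\n", max);
    if (cut <= 0) cut = max;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```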

Currently supported Channels fall into two categories:

Core Channels (independent implementations in src/ directory):

| Platform | Integration Library | Authentication |
|---|---|---|
| WhatsApp | Baileys (Web protocol) | QR code pairing |
| Telegram | grammY | Bot Token |
| Discord | discord.js | Bot Token |
| Slack | Bolt | OAuth App |
| Signal | signal-cli | Phone number registration |
| BlueBubbles | BlueBubbles API | iMessage (recommended) |
| iMessage | imsg CLI | macOS native (legacy) |
| WebChat | Built-in | No authentication needed |

Extension Channels (loaded via extensions):

| Platform | Description |
|---|---|
| Google Chat | Google Workspace integration |
| Microsoft Teams | Enterprise communication platform |
| Matrix | Decentralized communication protocol |
| Zalo | Vietnam's mainstream messaging app |
| Zalo Personal | Zalo personal account |
| LINE | Mainstream in Japan/Southeast Asia |
| Feishu (Lark) | ByteDance enterprise communication platform |

Channel security defaults are important. Since OpenClaw connects to real messaging platforms, DMs from strangers default to pairing mode:

```
Stranger sends message -> OpenClaw returns a pairing code -> You confirm in terminal
                                          |
                              openclaw pairing approve telegram ABC123
                                          |
                              User is added to allow list, subsequent messages processed normally
```

Skills

If tools are AI's hands, then skills are the textbooks teaching AI how to combine these tools.

The essence of a skill is a set of system instructions + tool definitions, telling AI what to do in specific scenarios.

```
Tools = Individual atomic operations
    e.g., Read file, send HTTP request, execute Shell command

Skills = Knowledge for combining tools to complete complex tasks
    e.g., Manage GitHub Issues = Search Issues + Read details + Add comments + Modify labels
```

Skills are organized in three tiers:

| Tier | Description | Source |
|---|---|---|
| Bundled Skills | Built-in skills, installed with OpenClaw | Core repository |
| Managed Skills | Community skills installed via ClawHub | clawhub.ai |
| Workspace Skills | Your own local skills | Workspace directory |

50+ built-in skills overview:

| Category | Skill | Function |
|---|---|---|
| Productivity | gog | Google Workspace (email, calendar, docs) |
| | notion | Notion pages and database management |
| | trello | Trello boards and task management |
| | slack | Slack workspace message management |
| | 1password | 1Password password lookup (read-only) |
| Note Management | obsidian | Obsidian note search and management |
| | apple-notes | Apple Notes |
| | bear-notes | Bear Notes |
| Developer Tools | coding-agent | Programming assistant (write code, debug, refactor) |
| | github | GitHub repo management, PRs, Issues |
| | gh-issues | GitHub Issues specialized management |
| | tmux | Terminal session management |
| Task Management | apple-reminders | Apple Reminders |
| | things-mac | Things task management |
| Multimedia | spotify-player | Spotify music control |
| | voice-call | Voice calls (ElevenLabs) |
| | peekaboo | Screenshot capture |
| | camsnap | Camera photos |
| | video-frames | Video frame extraction |
| Utilities | weather | Weather lookup |
| | summarize | Content summarization |
| | nano-pdf | PDF processing |
| | skill-creator | Create custom skills |
| System Admin | healthcheck | System health check |
| | session-logs | Session log viewer |
| | model-usage | Model usage statistics |

Memory System

OpenClaw's memory system is built around a philosophy of radical simplicity.

There is no RAG pipeline to operate and no external database service to run: the source of truth for the AI's memory is plain Markdown files written to disk (semantic search over them is backed by an embedded sqlite-vec index).

```
~/.openclaw/workspace/
+-- MEMORY.md              # Long-term memory (curated important information)
+-- memory/
    +-- 2026-02-25.md      # Today's log
    +-- 2026-02-24.md      # Yesterday's log
    +-- ...                # Older logs
```

Two-layer memory architecture:

| Layer | File | Content | When Loaded |
|---|---|---|---|
| Long-term Memory | MEMORY.md | Your preferences, important decisions, key facts | Every session startup |
| Daily Log | memory/YYYY-MM-DD.md | Notes and context for the day | Today + yesterday |

AI has two memory tools:

  • memory_search — Semantic search across all memory files

  • memory_get — Precise retrieval of specific memory content

Benefits of this design:

  • Readable — You can open the Markdown files anytime to see what AI remembers

  • Editable — You can directly modify memory files, delete things you don't want AI to remember

  • Version-controllable — Put it in Git to track memory changes

  • Zero dependencies — No additional database service needed
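
The "today + yesterday" loading rule is just date arithmetic over file names. A sketch of how those paths could be derived; `dailyLogName` and `logsToLoad` are hypothetical helpers, only the file layout mirrors the workspace structure shown above:

```typescript
// Derive the file names the two-layer memory scheme would load.
// Hypothetical helpers; layout mirrors ~/.openclaw/workspace/.
function dailyLogName(date: Date): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  return `memory/${y}-${m}-${d}.md`;
}

function logsToLoad(today: Date): string[] {
  const yesterday = new Date(today.getTime() - 24 * 60 * 60 * 1000);
  // Long-term memory is always loaded, plus today's and yesterday's logs.
  return ["MEMORY.md", dailyLogName(today), dailyLogName(yesterday)];
}
```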

The memory system is a plugin slot — only one memory plugin can be active at a time. The official plan is to converge to a single recommended default in the future.

Tools

Tools are what give the AI the ability to perform real operations. OpenClaw comes with a powerful built-in toolset:

| Tool Category | Capability | Description |
|---|---|---|
| Browser Control | Open pages, screenshot, click, fill forms | Based on Playwright, with a dedicated Chrome instance |
| Canvas | Visual workspace | Agent-driven UI, supports A2UI push |
| Node Operations | Camera, screen recording, location | Via macOS/iOS/Android nodes |
| File Operations | Read/write files, directory management | Operates within the workspace |
| Shell Execution | Run command-line commands | Executed in a security sandbox |
| Messaging | Send messages via any Channel | Cross-platform message push |
| Scheduled Tasks | Cron expression scheduling | Timed reminders, automated tasks |
| Webhook | Receive external events | Integration with third-party services |

Tool calling security model: AI requests tool call -> Gateway checks permissions -> Execute tool -> Return results to AI.
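
That gate can be pictured as a thin wrapper between the model and the tool registry. The types and policy shape below are hypothetical, not the Gateway's actual API:

```typescript
// Hypothetical sketch of the tool-call security gate described above.
type Tool = { name: string; run: (args: unknown) => Promise<string> };
type Policy = { allowedTools: Set<string> };

async function executeToolCall(
  policy: Policy,
  tools: Map<string, Tool>,
  name: string,
  args: unknown,
): Promise<string> {
  // 1. The gateway checks permissions before anything runs.
  if (!policy.allowedTools.has(name)) {
    return `Tool "${name}" denied by policy`;
  }
  const tool = tools.get(name);
  if (!tool) return `Unknown tool "${name}"`;
  // 2. Execute the tool and return its result to the model as text.
  return tool.run(args);
}
```

The key property: denial is returned to the model as an ordinary result, so the AI can explain the refusal instead of crashing the loop.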

Plugin System

OpenClaw's core stays lean; optional features are extended via plugins.

Plugin distribution methods:

  • npm packages — Install via npm, standard Node.js package management

  • Local extensions — Load local directories directly during development

  • Community plugins — Discover and install via ClawHub (clawhub.ai)

Plugin API provides an SDK:

```typescript
// Plugin SDK import
import { ... } from 'openclaw/plugin-sdk'
```

The official stance on plugins: The bar for the core repository is very high. Most new features should be published as independent plugins to ClawHub, rather than merged into the core codebase.

MCP Support

OpenClaw supports MCP (Model Context Protocol) via the mcporter bridge:

```
OpenClaw Gateway <-> mcporter <-> MCP Servers
```

Benefits of this bridging approach:

  • Adding or replacing MCP servers doesn't require restarting Gateway

  • Core tools/context stays lean

  • Changes in the MCP ecosystem don't affect core stability

Tech Stack Details

Why TypeScript?

The official answer is straightforward: OpenClaw is essentially an orchestration system — handling Prompts, tools, protocols, and integrations. TypeScript was chosen because:

  • Widely used — Most developers can read and modify it

  • Fast iteration — Balance of dynamic typing + type checking

  • Rich ecosystem — npm has SDKs for almost all messaging platforms

  • Easy to extend — Low barrier for plugin development

Core Dependencies

From package.json, you can see OpenClaw's technology choices:

| Category | Library | Purpose |
|---|---|---|
| Runtime | Node.js >= 22.12.0 | Server-side JavaScript runtime |
| Language | TypeScript 5.9+ | Type safety |
| Build | tsdown | TypeScript bundler |
| Testing | Vitest 4.x | Unit tests + E2E tests |
| Code Quality | oxlint + oxfmt | Lint + formatting (written in Rust, extremely fast) |
| Web Framework | Express 5.x | HTTP/WebSocket service |
| WebSocket | ws | WebSocket communication |
| Schema | Zod 4.x + TypeBox | Runtime type validation |
| Database | sqlite-vec | Vector search (memory system) |
| Image Processing | Sharp | Image compression and conversion |
| Browser | Playwright | Browser automation |
| PDF | pdfjs-dist | PDF parsing |
| Configuration | JSON5 + dotenv | Main config file is JSON5 with comments (~/.openclaw/openclaw.json) |
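
JSON5 mainly buys you comments and trailing commas in the config file. The fragment below only illustrates that syntax; the key names are invented for this example and are not OpenClaw's actual schema (consult the official docs for the real keys):

```json5
// ~/.openclaw/openclaw.json -- JSON5 allows comments and trailing commas.
// NOTE: key names below are illustrative only, not the real schema.
{
  // Which model the default agent should use (hypothetical value).
  model: "example-provider/example-model",
  gateway: {
    port: 18789,  // matches ws://127.0.0.1:18789
  },
}
```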

Messaging Platform SDKs:

| Platform | Library |
|---|---|
| WhatsApp | @whiskeysockets/baileys |
| Telegram | grammy |
| Discord | discord.js (via @buape/carbon) |
| Slack | @slack/bolt + @slack/web-api |
| Feishu (Lark) | @larksuiteoapi/node-sdk |
| LINE | @line/bot-sdk |
| AWS Bedrock | @aws-sdk/client-bedrock |

AI Agent Core:

| Library | Purpose |
|---|---|
| @mariozechner/pi-agent-core | Agent runtime core |
| @mariozechner/pi-ai | AI model abstraction layer |
| @mariozechner/pi-coding-agent | Coding Agent |
| @mariozechner/pi-tui | Terminal UI |

Project Structure

```
openclaw/
+-- src/                    # Core source code
+-- extensions/             # Extension Channels and plugins
+-- skills/                 # Built-in skill definitions
+-- apps/
|   +-- macos/             # macOS menu bar app (Swift)
|   +-- ios/               # iOS app (Swift)
|   +-- android/           # Android app (Kotlin)
+-- ui/                     # Control UI (Web frontend)
+-- docs/                   # Official docs (Mintlify)
+-- scripts/                # Build and utility scripts
+-- test/                   # Test files
+-- dist/                   # Build output
+-- openclaw.mjs            # CLI entry point
+-- package.json            # Dependencies and scripts
+-- tsconfig.json           # TypeScript config
+-- vitest.unit.config.ts   # Unit test config
+-- vitest.e2e.config.ts    # E2E test config
+-- vitest.live.config.ts   # Live test config
```

Development Toolchain

```bash
# Build from source
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm ui:build
pnpm build

# Development mode (auto-reload on file changes)
pnpm gateway:watch

# Run tests
pnpm test              # Parallel unit tests
pnpm test:e2e          # E2E tests
pnpm test:live         # Live model tests

# Code quality
pnpm check             # Format + type + lint
pnpm lint:fix          # Auto-fix
```

Versioning Strategy

OpenClaw uses date-based version numbers: vYYYY.M.D, e.g., v2026.2.24.
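
The scheme is easy to reproduce. A one-liner that formats a date the same way; note the month and day are not zero-padded, matching v2026.2.24:

```typescript
// Format a date as an OpenClaw-style vYYYY.M.D version (no zero padding).
function dateVersion(date: Date): string {
  return `v${date.getUTCFullYear()}.${date.getUTCMonth() + 1}.${date.getUTCDate()}`;
}
```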

Three release channels:

| Channel | npm Tag | Description |
|---|---|---|
| stable | latest | Official release, recommended |
| beta | beta | Pre-release, may lack the macOS app |
| dev | dev | Latest code from the main branch |

```bash
# Switch channels
openclaw update --channel stable
openclaw update --channel beta
openclaw update --channel dev
```

Supported AI Model Providers

OpenClaw supports 29+ AI model providers — one of its core competitive advantages. You're not locked into any single model.

The official strong recommendation is Anthropic Claude Opus 4.6, because:

  • Strong long-context processing capability

  • Better prompt injection defense

  • High tool calling accuracy

Complete Provider List

International Mainstream:

| Provider | Representative Model | Features |
|---|---|---|
| OpenAI | GPT-5.2 | Strongest general capability, OpenClaw sponsor |
| Anthropic | Claude Opus 4.6, Sonnet 4.6 | Officially recommended, strongest for coding and security |
| Google | Gemini 3.1 | Strong multimodal capabilities |
| Mistral | Mistral Large | European open-source model |
| xAI | Grok | Real-time information capability |

Local/Self-hosted:

| Provider | Representative Model | Features |
|---|---|---|
| Ollama | Llama, Qwen, Mistral | Runs locally, privacy-first |
| vLLM | Various open-source models | High-performance local inference |
| node-llama-cpp | GGUF models | Runs directly in Node.js |

China Mainland:

| Provider | Representative Model | Features |
|---|---|---|
| Qwen | Qwen-Max | Alibaba, strong Chinese capability |
| Qianfan (Baidu) | ERNIE | Baidu, Chinese-optimized |
| GLM (Zhipu) | GLM-4 | Tsinghua-affiliated, strong academic capability |
| Moonshot/Kimi | Moonshot Vision | Chinese-optimized, vision capability |
| Xiaomi | MiLM | Xiaomi AI |
| MiniMax | abab6 | Multimodal |
| DeepSeek | DeepSeek | Strong coding capability |

Proxy/Routing Services:

| Provider | Features |
|---|---|
| OpenRouter | One API key for all models |
| LiteLLM | Unified interface proxy layer |
| AWS Bedrock | Enterprise-grade, supports Claude and Titan |
| Vercel AI Gateway | Vercel ecosystem integration |
| Cloudflare AI Gateway | CDN acceleration |
| Together AI | Open-source model hosting, cost-effective |
| NVIDIA NIM | GPU-accelerated inference |
| Venice AI | Privacy-first, no data logging |
| HuggingFace | Community open-source models |

Model Failover

OpenClaw supports configuring priority and failover strategies for multiple models. When the primary model is unavailable, it automatically switches to a backup model, ensuring the assistant is always online.

See official documentation for details.
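
Conceptually, failover is a priority-ordered try loop. The sketch below uses a hypothetical `CallModel` signature and is not OpenClaw's internal code:

```typescript
// Hypothetical model-failover sketch: try providers in priority order,
// falling through to the next one on failure.
type CallModel = (model: string, prompt: string) => Promise<string>;

async function callWithFailover(
  models: string[],          // ordered by priority, e.g. primary first
  prompt: string,
  call: CallModel,
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await call(model, prompt);
    } catch (err) {
      lastError = err;       // remember the failure, try the next model
    }
  }
  throw new Error(`All models failed: ${String(lastError)}`);
}
```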

Client Ecosystem

There are many ways to connect to the Gateway, covering mainstream platforms:

Native Apps

| Platform | Type | Tech Stack | Features |
|---|---|---|---|
| macOS | Menu bar app | Swift | Control panel, Voice Wake, Talk Mode, Canvas |
| iOS | Mobile app | Swift | Canvas, Voice Wake, Talk Mode, Camera |
| Android | Mobile app | Kotlin | Canvas, Talk Mode, Camera, Screen recording |

Command Line

```bash
# Chat directly with AI
openclaw agent --message "Check tomorrow's weather for me"

# Send message to a specific platform
openclaw message send --to +1234567890 --message "Hello"

# Interactive TUI
openclaw tui
```

Web Interface

  • Control UI — Browser control panel for managing Gateway configuration

  • WebChat — Browser chat interface for chatting directly with AI

  • Live Canvas — Agent-driven visual workspace

Voice Capabilities

  • Voice Wake — Voice activation, similar to "Hey Siri" (macOS/iOS/Android)

  • Talk Mode — Real-time voice conversation, based on ElevenLabs TTS

  • PTT (Push to Talk) — Push-to-talk mode

Community Ecosystem

Project Stats (as of February 2026)

| Metric | Data |
|---|---|
| GitHub Stars | 227,000+ |
| Forks | 43,000+ |
| Contributors | 30+ (core repo; GitHub API pagination limit) |
| Open Issues | 7,500+ |
| License | MIT |
| Release Frequency | Almost daily |
| Sponsors | OpenAI, Blacksmith |

Community Channels

| Channel | Link | Purpose |
|---|---|---|
| Discord | discord.gg/clawd | Main community, real-time discussion |
| GitHub Discussions | github.com/openclaw/openclaw | Feature requests, technical discussion |
| GitHub Issues | github.com/openclaw/openclaw/issues | Bug reports |
| Official Docs | docs.openclaw.ai | Complete documentation |
| Official Website | openclaw.ai | Project homepage |
| DeepWiki | deepwiki.com/openclaw/openclaw | AI-generated code documentation |

ClawHub (Skill Marketplace)

ClawHub (clawhub.ai) is OpenClaw's skill and plugin marketplace. Community developers can publish their own skills, and other users can install them with one click.

The official stance encourages new skills to be published to ClawHub first, rather than submitted to the core repository. The bar for merging into the core repository is very high.

Contribution Guide

If you want to contribute code to OpenClaw:

  • One PR = One issue — Don't bundle multiple unrelated fixes together

  • PR no more than 5000 lines — Extra-large PRs are only reviewed in special circumstances

  • Don't batch-submit small PRs — Each PR has review overhead

  • Small fixes can be combined — Related small fixes can go in one PR

Use Cases and Non-Use Cases

Suitable Scenarios for OpenClaw

Personal Productivity Assistant:

  • Manage schedules and to-dos via WhatsApp/Telegram

  • Let AI help you organize emails, reply to messages

  • Voice control smart home devices

  • Timed reminders and automated workflows

Developer Tools:

  • Manage GitHub Issues and PRs through chat apps

  • AI programming assistant, write code directly in chat

  • Monitor server status, auto-notify on anomalies

  • Automate DevOps tasks

Knowledge Management:

  • Connect Obsidian/Notion, AI helps organize notes

  • Auto-summarize long documents and web pages

  • Cross-platform information aggregation

Unified Multi-Platform Entry Point:

  • One AI assistant, accessible through all your favorite chat apps

  • Conversations started on WhatsApp can continue on Telegram

  • Unified memory system, AI "knows" you across all platforms

Unsuitable Scenarios for OpenClaw

| Scenario | Reason | Alternative |
|---|---|---|
| Enterprise customer service bot | OpenClaw is single-user by design | Botpress, Intercom |
| Multi-user SaaS product | No multi-tenant architecture | Custom solution |
| Users who don't want to touch a terminal | Installation and config currently require the CLI | ChatGPT, Claude.ai |
| Low-spec devices | Requires Node.js 22+, significant memory usage | Lightweight bot frameworks |
| 100% uptime required | Personal devices may shut down or lose connectivity | Cloud AI services |

OpenClaw's design philosophy is clear: It's a personal assistant, not an enterprise platform. Single-user, local-first, privacy above all.

Version History and Roadmap

Current Priorities (from official VISION.md)

Highest Priority:

  • Security and safe defaults

  • Bug fixes and stability

  • Installation reliability and first-use experience

Next Priorities:

  • Support all mainstream model providers

  • Improve mainstream messaging platform support (and add high-demand platforms)

  • Performance and testing infrastructure

  • Better Computer Use and Agent capabilities

  • CLI and web frontend usability

  • macOS, iOS, Android, Windows, Linux companion apps

Features That Won't Be Merged (Currently)

The official team has explicitly listed PR types that won't be accepted for now:

  • New core skills that could go on ClawHub

  • Full documentation translations (plan to use AI auto-translation)

  • Commercial service integrations that don't clearly fall under model provider category

  • Wrappers around existing Channels (unless there's a clear capability or security gap)

  • First-class MCP runtime in core (mcporter already provides integration path)

  • Agent hierarchy frameworks (managers of managers/nested planning trees)

  • Heavy orchestration layers that duplicate existing Agent and tool infrastructure

The official team says these are roadmap guardrails, not iron laws. Strong user demand and technical justification can change them.

Suggested Learning Path

Beginner Track (1-2 hours)

```
Step 1: Install OpenClaw
         -> Visit docs.openclaw.ai/start/getting-started
         -> Run openclaw onboard

Step 2: Connect your first messaging platform
         -> Visit docs.openclaw.ai for messaging platform guides
         -> Recommended to start with Telegram (simplest)

Step 3: Configure AI model
         -> Visit docs.openclaw.ai for model configuration
         -> Recommended to start with OpenAI or Anthropic

Step 4: Try built-in skills
         -> Visit docs.openclaw.ai for skill system guides
         -> Try simple skills like weather, summarize
```

Intermediate Track (1 week)

```
Step 5: Understand the memory system
         -> Visit docs.openclaw.ai for memory system guides
         -> Teach AI to remember your preferences

Step 6: Multi-Agent configuration
         -> Visit docs.openclaw.ai for multi-agent guides
         -> Create dedicated Agents for different scenarios

Step 7: Docker deployment
         -> Visit docs.openclaw.ai for Docker deployment guides
         -> Run 24/7 on a server

Step 8: Security hardening
         -> Visit docs.openclaw.ai for security configuration
         -> Configure DM policies and permissions
```

Advanced Track (Continuous Learning)

```
Step 9: Develop custom skills
         -> Learn Workspace Skills development
         -> Publish to ClawHub

Step 10: Plugin development
          -> Learn Plugin SDK
          -> Develop custom Channels or tools

Step 11: Source code contribution
          -> Fork the repo, build from source
          -> Fix bugs or add features
          -> Submit PR
```
Learning Resources

| Resource | Link | Description |
|---|---|---|
| Official Docs | docs.openclaw.ai | The most authoritative reference |
| Getting Started | docs.openclaw.ai/start/getting-started | Official getting-started guide |
| Onboarding Wizard | docs.openclaw.ai/start/wizard | Interactive installation wizard |
| Discord Community | discord.gg/clawd | Real-time Q&A |
| DeepWiki | deepwiki.com/openclaw/openclaw | AI-generated code documentation |
| This Tutorial Series | What you're reading now | Complete guide |

Quick Experience

If you can't wait, here's the fastest way to get started:

```bash
# 1. Install
npm install -g openclaw@latest

# 2. Run the setup wizard (guides you through model and platform configuration)
openclaw onboard --install-daemon

# 3. Start Gateway (debug mode)
openclaw gateway --port 18789 --verbose

# 4. Send a test message
openclaw agent --message "Hello, introduce yourself"
```

In 5 minutes, you'll have an AI personal assistant running on your own computer.