OpenClaw - Self-hosted AI Assistant Platform

OpenClaw Tutorial — Install OpenClaw, integrate Lamafa, and quickly set up a self-hosted AI assistant. An open-source project supporting multi-channel integration with Telegram, Discord, WhatsApp, and more.

🌟 Core Features

Multi-channel Integration

  • Multi-channel Integration: Supports various messaging channels like Telegram, Discord, WhatsApp, iMessage, and can be extended to more platforms via plugins.
  • Single Gateway: Unified management of all channels through a single Gateway process.
  • Voice Support: Supports macOS/iOS/Android voice interaction.
  • Canvas Interface: Can render interactive Canvas interfaces.

Self-hosting and Data Security

  • Fully Self-hosted: Runs on your own machine or server.
  • Open Source Transparency: MIT open-source license, completely transparent code.
  • Local Data: Context and skills are stored on your local computer, not in the cloud.

Intelligent Agent Capabilities

  • Continuous Operation: Supports persistent background operation with long-term memory.
  • Scheduled Tasks: Supports cron scheduled tasks.
  • Session Isolation: Isolates sessions by agent/workspace/sender.
  • Multi-agent Routing: Supports collaborative work among multiple agents.
  • Tool Calling: Native support for tool calling and code execution.

📦 Preparation Before Integration

Before integrating Lamafa, it's recommended to first get the Gateway and Control UI running by following OpenClaw's current official setup process. That way, when troubleshooting later, it's easier to tell whether OpenClaw itself is failing to start or whether the model provider configuration is wrong.

1. Install OpenClaw (macOS/Linux)

curl -fsSL https://openclaw.ai/install.sh | bash

For other installation methods, refer to the OpenClaw official documentation: Getting Started.
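If the openclaw command isn't found after installation, a quick PATH check helps; this is plain shell, not an OpenClaw-specific command:

command -v openclaw || echo "openclaw not found - open a new shell or re-check the installer output"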

2. Run the Onboarding Wizard

openclaw onboard --install-daemon

This wizard will complete basic authentication, Gateway setup, and optional channel initialization. The goal here is to get OpenClaw running first, then switch the default model to Lamafa later.

3. Check Gateway and Control UI

openclaw gateway status
openclaw dashboard

If the Control UI opens in your browser, OpenClaw's basic setup is working. At this stage, there's no need to configure Telegram, Discord, Feishu, or any other messaging channels yet.

4. Locate the Configuration File

OpenClaw's configuration file is usually located at ~/.openclaw/openclaw.json. You can continue to modify it based on what the onboarding wizard generated.
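Before editing, it can help to look at what the wizard generated; a plain read of the file is enough (the exact contents will differ per installation):

cat ~/.openclaw/openclaw.json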

🚀 Using Lamafa as a Model Provider

OpenClaw supports integrating custom or OpenAI-compatible model gateways via models.providers. For Lamafa, the most common approach is to add it as a custom provider to the configuration, then point the default model to lamafa/model-ID.

Integration Approach

  1. Declare a lamafa provider under models.providers.
  2. Point baseUrl to your Lamafa address, ensuring it includes /v1.
  3. Set api to openai-completions.
  4. List the model IDs you want OpenClaw to use in models.
  5. Switch the default model in agents.defaults.model.primary to lamafa/....

First, provide your Lamafa key in the current shell, the service environment, or a .env file readable by OpenClaw:

export LAMAFA_API_KEY="sk-your-lamafa-key"
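Before touching openclaw.json, you can sanity-check the key and base URL directly. This assumes your Lamafa deployment exposes the standard OpenAI-compatible /v1/models route; replace the domain with your own:

curl -s https://<your-lamafa-domain>/v1/models \
  -H "Authorization: Bearer $LAMAFA_API_KEY"

If this returns a JSON list of model IDs, the same baseUrl and key should work inside OpenClaw.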

Then, add or modify the following snippet in openclaw.json:

{
  "models": {
    "mode": "merge",
    "providers": {
      "lamafa": {
        "baseUrl": "https://<your-lamafa-domain>/v1",
        "apiKey": "${LAMAFA_API_KEY}",
        "api": "openai-completions",
        "models": [
          { "id": "gemini-2.5-flash", "name": "Gemini 2.5 Flash" },
          { "id": "kimi-k2.5", "name": "Kimi K2.5" }
        ]
      }
    }
  },

  "agents": {
    "defaults": {
      "model": {
        "primary": "lamafa/gemini-2.5-flash",
        "fallbacks": ["lamafa/kimi-k2.5"]
      },
      "models": {
        "lamafa/gemini-2.5-flash": { "alias": "flash" },
        "lamafa/kimi-k2.5": { "alias": "kimi" }
      }
    }
  }
}

This is not a complete configuration to copy verbatim; it is the part that matters most for the Lamafa integration. As long as the provider, model IDs, and default model references are correct, OpenClaw will be able to call the models you expose through Lamafa.

Key Configuration Details

  • models.mode: Recommended value is merge, which appends lamafa while retaining OpenClaw's built-in providers.
  • models.providers.lamafa.baseUrl: Your Lamafa address; it usually needs to include /v1.
  • models.providers.lamafa.apiKey: Your Lamafa key; injecting it via ${LAMAFA_API_KEY} is recommended.
  • models.providers.lamafa.api: For OpenAI-compatible gateways like Lamafa, use openai-completions.
  • models.providers.lamafa.models: The model IDs listed here must match the model names actually exposed by your Lamafa deployment.
  • agents.defaults.model.primary: The default primary model; the format must be provider/model-id.
  • agents.defaults.model.fallbacks: List of fallback models; OpenClaw switches to them automatically if the primary model fails.
  • agents.defaults.models: Optional; creates aliases for models, which is convenient when referencing them in the UI or in sessions.

Verify Successful Integration

After completing the configuration, return to the Control UI or reopen it:

openclaw dashboard

If you can start conversations normally in OpenClaw and the default model now shows as lamafa/..., the integration is working. You can also run:

openclaw models list

to confirm that models with the lamafa/ prefix appear in the selectable list.
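If the list is long, a plain grep filter (standard shell, not an OpenClaw subcommand) makes the check quicker:

openclaw models list | grep -i lamafa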

Common Issues

  • baseUrl missing /v1: one of the most common integration errors.
  • Incorrect model ID: primary and fallbacks must match the id values in models.providers.lamafa.models (the curl sketch after this list exercises the endpoint, the key, and a model ID in one request).
  • API key only set in the current terminal: if the Gateway runs as a background service, make sure the service process can also read LAMAFA_API_KEY.
  • Foreground troubleshooting: run the Gateway in the foreground with the official method openclaw gateway --port 18789 to watch logs and errors directly.
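When something fails, it also helps to test Lamafa directly and take OpenClaw out of the loop. The request below assumes your Lamafa deployment exposes the standard OpenAI-compatible /v1/chat/completions route and that gemini-2.5-flash is one of the model IDs you configured; adjust both to your setup:

curl -s https://<your-lamafa-domain>/v1/chat/completions \
  -H "Authorization: Bearer $LAMAFA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "ping"}]
  }'

If this call succeeds but OpenClaw still cannot use the model, the problem is on the OpenClaw side (configuration or environment); if it fails, fix the Lamafa endpoint, key, or model ID first.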
