
MalikClaw AI Agent running on $10 Linux SBC

MalikClaw 🦅

Ultra-Efficient Personal AI Assistant for Edge Hardware

$10 Hardware · <10MB RAM · <1s Boot · آگے بڑھو، ملک کلاؤ!


اردو | 日本語 | Português | Tiếng Việt | Français | English


🦅 MalikClaw is a high-performance, ultra-lightweight personal AI Assistant built in Go. Designed to bring powerful agentic AI capabilities to low-cost hardware ($10 SBCs, old phones, RISC-V), MalikClaw prioritizes privacy, speed, and the South Asian developer ecosystem with native Urdu-first support.

⚡️ The Edge Champion: Runs on $10 hardware with <10MB RAM—99% less memory than typical AI gateways and 98% cheaper than a Mac mini!

MalikClaw ultra-low memory usage comparison

MalikClaw running on $10 NanoPi edge hardware

Caution

🚨 SECURITY & OFFICIAL CHANNELS / حفاظتی اعلان

  • NO CRYPTO: MalikClaw has NO official token/coin. Any such claims are SCAMS.

  • OFFICIAL REPO: The official repository is github.com/AbdullahMalik17/malikclaw

  • Warning: MalikClaw is in active development and may have unresolved security issues. Do not deploy to production environments before the v1.0 release.

  • Note: Recent feature additions may result in a larger memory footprint (10–20MB). Resource optimization is an ongoing priority.

📢 News

2026-03-14 🦅 MalikClaw rebranded with the new Gryphon identity! New logo, new vision — building the ultimate lightweight AI agent for South Asian developers and edge hardware enthusiasts.

2026-03-01 🚀 Community growing! MalikClaw is gaining traction with developers looking for a lightweight, privacy-first AI assistant. Check out our Roadmap and contribute!

2026-02-09 🎉 MalikClaw Launched! Built to bring AI Agents to $10 hardware with <10MB RAM. 🦅 MalikClaw, Let's Go! آگے بڑھو، ملک کلاؤ!

✨ Features

🌍 Urdu-First Strategy:

  • Bilingual Onboarding: Interactive setup in Urdu and English.
  • RTL Web UI: Native Right-to-Left support for Urdu/Arabic users.
  • Pakistan-Centric: Optimized for local workflows and South Asian languages.

🪶 Ultra-Lightweight: <10MB Memory footprint — 99% smaller than OpenClaw.

🚀 Lightning Fast: 400× faster startup; boots in about 1 second.

📱 Mobile Operation: Control Android devices via ADB (screenshot, tap, type, swipe).

💼 Business Integration: Built-in Gmail and Odoo Accounting support via MCP.

🛠️ Self-Evolution: Guardian engine allows the agent to autonomously improve its own code.

🚀 Why MalikClaw?

If you are looking for a lightweight alternative to OpenClaw, AutoGPT, or BabyAGI that can run on a $10 budget, MalikClaw is the answer.

  • Low Cost: While other AI agents require a Mac Mini or a high-end Cloud VM, MalikClaw runs on Orange Pi Zero or Raspberry Pi Zero 2 W.
  • Privacy-First: Your data stays on your hardware.
  • Developer Friendly: Built in Go for maximum performance and easy cross-compilation.
|  | OpenClaw | NanoBot | MalikClaw |
| --- | --- | --- | --- |
| Language | TypeScript | Python | Go |
| RAM | >1GB | >100MB | <10MB |
| Startup (0.8GHz core) | >500s | >30s | <1s |
| Cost | Mac Mini ($599) | Most Linux SBCs (~$50) | Any Linux board (as low as $10) |

MalikClaw vs OpenClaw vs NanoBot performance and cost comparison chart

🦾 Demonstration

🛠️ Standard Assistant Workflows

🧩 Full-Stack Engineer

🗂️ Logging & Planning Management

🔎 Web Search & Learning

MalikClaw AI agent performing full-stack engineering tasks

MalikClaw autonomous planning and memory management system

MalikClaw AI-powered web search and market research

Develop • Deploy • Scale | Schedule • Automate • Memory | Discovery • Insights • Trends

📱 Run on old Android Phones

Give your decade-old phone a second life! Turn it into a smart AI assistant with MalikClaw. Quick start:

  1. Install Termux (Available on F-Droid or Google Play).
  2. Run the following commands in Termux:
# Note: Replace v0.1.1 with the latest version from the Releases page
wget https://github.com/AbdullahMalik17/malikclaw/releases/download/v0.1.1/malikclaw-linux-arm64
chmod +x malikclaw-linux-arm64
pkg install proot
termux-chroot ./malikclaw-linux-arm64 onboard

And then follow the instructions in the "Quick Start" section to complete the configuration!

MalikClaw running on an old Android phone using Termux and Proot

🐜 Innovative Low-Footprint Deploy

MalikClaw can be deployed on almost any Linux device!

  • Raspberry Pi Zero 2 W (~$15) — Perfect for a home AI assistant
  • Orange Pi Zero (~$10) — Ultra-cheap edge AI deployment
  • Old Android Phones (free!) — Give your old device a second life
  • Any ARM/RISC-V SBC — If it runs Linux, it runs MalikClaw

🌟 More Deployment Cases Await!

📦 Install

Install with precompiled binary

Download the prebuilt binary for your platform from the Releases page.

Install from source (latest features, recommended for development)

git clone https://github.com/AbdullahMalik17/malikclaw.git

cd malikclaw
make deps

# Build, no need to install
make build

# Build for multiple platforms
make build-all

# Build for Raspberry Pi Zero 2 W (32-bit: make build-linux-arm; 64-bit: make build-linux-arm64)
make build-pi-zero

# Build And Install
make install

Raspberry Pi Zero 2 W: Use the binary that matches your OS: 32-bit Raspberry Pi OS → make build-linux-arm (output: build/malikclaw-linux-arm); 64-bit → make build-linux-arm64 (output: build/malikclaw-linux-arm64). Or run make build-pi-zero to build both.

🐳 Docker Compose

You can also run Malikclaw using Docker Compose without installing anything locally.

# 1. Clone this repo
git clone https://github.com/AbdullahMalik17/malikclaw.git
cd malikclaw

# 2. First run — auto-generates docker/data/config.json then exits
docker compose -f docker/docker-compose.yml --profile gateway up
# The container prints "First-run setup complete." and stops.

# 3. Set your API keys
vim docker/data/config.json   # Set provider API keys, bot tokens, etc.

# 4. Start
docker compose -f docker/docker-compose.yml --profile gateway up -d

Tip

Docker Users: By default, the Gateway listens on 127.0.0.1 inside the container, which is not reachable from the host. If you need to access the health endpoints or expose ports, set MALIKCLAW_GATEWAY_HOST=0.0.0.0 in your environment or update config.json.

# 5. Check logs
docker compose -f docker/docker-compose.yml logs -f malikclaw-gateway

# 6. Stop
docker compose -f docker/docker-compose.yml --profile gateway down

Launcher Mode (Web Console)

The launcher image includes all three binaries (malikclaw, malikclaw-launcher, malikclaw-launcher-tui) and starts the web console by default, which provides a browser-based UI for configuration and chat.

docker compose -f docker/docker-compose.yml --profile launcher up -d

Open http://localhost:18800 in your browser. The launcher manages the gateway process automatically.

Warning

The web console does not yet support authentication. Avoid exposing it to the public internet.

Agent Mode (One-shot)

# Ask a question
docker compose -f docker/docker-compose.yml run --rm malikclaw-agent -m "What is 2+2?"

# Interactive mode
docker compose -f docker/docker-compose.yml run --rm malikclaw-agent

Update

docker compose -f docker/docker-compose.yml pull
docker compose -f docker/docker-compose.yml --profile gateway up -d

🚀 Quick Start

Tip

Set your API Key in ~/.malikclaw/config.json. Get API Keys: Volcengine (CodingPlan) (LLM) · OpenRouter (LLM) · Zhipu (LLM). Web search is optional — get a free Tavily API (1000 free queries/month) or Brave Search API (2000 free queries/month).

1. Initialize

malikclaw onboard

2. Configure (~/.malikclaw/config.json)

{
  "agents": {
    "defaults": {
      "workspace": "~/.malikclaw/workspace",
      "model_name": "gpt-5.4",
      "max_tokens": 8192,
      "temperature": 0.7,
      "max_tool_iterations": 20
    }
  },
  "model_list": [
    {
      "model_name": "ark-code-latest",
      "model": "volcengine/ark-code-latest",
      "api_key": "sk-your-api-key",
      "api_base":"https://ark.cn-beijing.volces.com/api/coding/v3"
    },
    {
      "model_name": "gpt-5.4",
      "model": "openai/gpt-5.4",
      "api_key": "your-api-key",
      "request_timeout": 300
    },
    {
      "model_name": "claude-sonnet-4.6",
      "model": "anthropic/claude-sonnet-4.6",
      "api_key": "your-anthropic-key"
    }
  ],
  "tools": {
    "web": {
      "brave": {
        "enabled": false,
        "api_key": "YOUR_BRAVE_API_KEY",
        "max_results": 5
      },
      "tavily": {
        "enabled": false,
        "api_key": "YOUR_TAVILY_API_KEY",
        "max_results": 5
      },
      "duckduckgo": {
        "enabled": true,
        "max_results": 5
      },
      "perplexity": {
        "enabled": false,
        "api_key": "YOUR_PERPLEXITY_API_KEY",
        "max_results": 5
      },
      "searxng": {
        "enabled": false,
        "base_url": "http://your-searxng-instance:8888",
        "max_results": 5
      }
    }
  }
}

New: The model_list configuration format allows zero-code provider addition. See Model Configuration for details. request_timeout is optional and is specified in seconds; if omitted or set to a value <= 0, MalikClaw uses the default timeout (120s).

3. Get API Keys

  • LLM Provider: OpenRouter · Zhipu · Anthropic · OpenAI · Gemini
  • Web Search (optional):
    • Brave Search - 2000 free queries/month; paid plans beyond that (~$5/1000 queries)
    • Perplexity - AI-powered search with chat interface
    • SearXNG - Self-hosted metasearch engine (free, no API key needed)
    • Tavily - Optimized for AI agents (1000 free requests/month)
    • DuckDuckGo - Built-in fallback (no API key required)

Note: See config.example.json for a complete configuration template.

4. Chat

malikclaw agent -m "What is 2+2?"

That's it! You have a working AI assistant in 2 minutes.


💬 Chat Apps

Talk to MalikClaw through Telegram, Discord, WhatsApp, Matrix, Feishu, QQ, DingTalk, LINE, or WeCom.

Note: All webhook-based channels (LINE, WeCom, etc.) are served on a single shared Gateway HTTP server (gateway.host:gateway.port, default 127.0.0.1:18790); there are no per-channel ports to configure.

Note: Feishu uses WebSocket/SDK mode and does not use the shared HTTP webhook server.

| Channel | Setup |
| --- | --- |
| Telegram | Easy (just a token) |
| Discord | Easy (bot token + intents) |
| WhatsApp | Easy (native: QR scan; or bridge URL) |
| Matrix | Medium (homeserver + bot access token) |
| QQ | Easy (AppID + AppSecret) |
| DingTalk | Medium (app credentials) |
| LINE | Medium (credentials + webhook URL) |
| WeCom AI Bot | Medium (Token + AES key) |
Telegram (Recommended)

1. Create a bot

  • Open Telegram, search @BotFather
  • Send /newbot, follow prompts
  • Copy the token

2. Configure

{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allow_from": ["YOUR_USER_ID"]
    }
  }
}

Get your user ID from @userinfobot on Telegram.

3. Run

malikclaw gateway

4. Telegram command menu (auto-registered at startup)

MalikClaw keeps command definitions in one shared registry. On startup, Telegram automatically registers the supported bot commands (for example /start, /help, /show, /list) so that the command menu and runtime behavior stay in sync. Telegram's command menu registration is only channel-local discovery UX; generic command execution is handled centrally in the agent loop via the commands executor.

If command registration fails (network/API transient errors), the channel still starts and Malikclaw retries registration in the background.

Discord

1. Create a bot

2. Enable intents

  • In the Bot settings, enable MESSAGE CONTENT INTENT
  • (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data

3. Get your User ID

  • Discord Settings → Advanced → enable Developer Mode
  • Right-click your avatar → Copy User ID

4. Configure

{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allow_from": ["YOUR_USER_ID"]
    }
  }
}

5. Invite the bot

  • OAuth2 → URL Generator
  • Scopes: bot
  • Bot Permissions: Send Messages, Read Message History
  • Open the generated invite URL and add the bot to your server

Optional: Group trigger mode

By default the bot responds to all messages in a server channel. To restrict responses to @-mentions only, add:

{
  "channels": {
    "discord": {
      "group_trigger": { "mention_only": true }
    }
  }
}

You can also trigger by keyword prefixes (e.g. !bot):

{
  "channels": {
    "discord": {
      "group_trigger": { "prefixes": ["!bot"] }
    }
  }
}

6. Run

malikclaw gateway

WhatsApp (native via whatsmeow)

Malikclaw can connect to WhatsApp in two ways:

  • Native (recommended): In-process using whatsmeow. No separate bridge. Set "use_native": true and leave bridge_url empty. On first run, scan the QR code with WhatsApp (Linked Devices). Session is stored under your workspace (e.g. workspace/whatsapp/). The native channel is optional to keep the default binary small; build with -tags whatsapp_native (e.g. make build-whatsapp-native or go build -tags whatsapp_native ./cmd/...).
  • Bridge: Connect to an external WebSocket bridge. Set bridge_url (e.g. ws://localhost:3001) and keep use_native false.

Configure (native)

{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "use_native": true,
      "session_store_path": "",
      "allow_from": []
    }
  }
}

If session_store_path is empty, the session is stored in <workspace>/whatsapp/. Run malikclaw gateway; on first run, scan the QR code printed in the terminal with WhatsApp → Linked Devices.

QQ

1. Create a bot

2. Configure

{
  "channels": {
    "qq": {
      "enabled": true,
      "app_id": "YOUR_APP_ID",
      "app_secret": "YOUR_APP_SECRET",
      "allow_from": []
    }
  }
}

Set allow_from to empty to allow all users, or specify QQ numbers to restrict access.

3. Run

malikclaw gateway

DingTalk

1. Create a bot

  • Go to Open Platform
  • Create an internal app
  • Copy Client ID and Client Secret

2. Configure

{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "client_id": "YOUR_CLIENT_ID",
      "client_secret": "YOUR_CLIENT_SECRET",
      "allow_from": []
    }
  }
}

Set allow_from to empty to allow all users, or specify DingTalk user IDs to restrict access.

3. Run

malikclaw gateway

Matrix

1. Prepare bot account

  • Use your preferred homeserver (e.g. https://matrix.org or self-hosted)
  • Create a bot user and obtain its access token

2. Configure

{
  "channels": {
    "matrix": {
      "enabled": true,
      "homeserver": "https://matrix.org",
      "user_id": "@your-bot:matrix.org",
      "access_token": "YOUR_MATRIX_ACCESS_TOKEN",
      "allow_from": []
    }
  }
}

3. Run

malikclaw gateway

For full options (device_id, join_on_invite, group_trigger, placeholder, reasoning_channel_id), see Matrix Channel Configuration Guide.

LINE

1. Create a LINE Official Account

  • Go to LINE Developers Console
  • Create a provider → Create a Messaging API channel
  • Copy Channel Secret and Channel Access Token

2. Configure

{
  "channels": {
    "line": {
      "enabled": true,
      "channel_secret": "YOUR_CHANNEL_SECRET",
      "channel_access_token": "YOUR_CHANNEL_ACCESS_TOKEN",
      "webhook_path": "/webhook/line",
      "allow_from": []
    }
  }
}

LINE webhook is served on the shared Gateway server (gateway.host:gateway.port, default 127.0.0.1:18790).

3. Set up Webhook URL

LINE requires HTTPS for webhooks. Use a reverse proxy or tunnel:

# Example with ngrok (gateway default port is 18790)
ngrok http 18790

Then set the Webhook URL in LINE Developers Console to https://your-domain/webhook/line and enable Use webhook.

4. Run

malikclaw gateway

In group chats, the bot responds only when @mentioned. Replies quote the original message.

WeCom (Enterprise WeChat)

Malikclaw supports three types of WeCom integration:

  • Option 1: WeCom Bot - Easier setup, supports group chats
  • Option 2: WeCom App (Custom App) - More features, proactive messaging, private chat only
  • Option 3: WeCom AI Bot - Official AI Bot, streaming replies, supports group & private chat

See WeCom AI Bot Configuration Guide for detailed setup instructions.

Quick Setup - WeCom Bot:

1. Create a bot

  • Go to WeCom Admin Console → Group Chat → Add Group Bot
  • Copy the webhook URL (format: https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxx)

2. Configure

{
  "channels": {
    "wecom": {
      "enabled": true,
      "token": "YOUR_TOKEN",
      "encoding_aes_key": "YOUR_ENCODING_AES_KEY",
      "webhook_url": "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=YOUR_KEY",
      "webhook_path": "/webhook/wecom",
      "allow_from": []
    }
  }
}

WeCom webhook is served on the shared Gateway server (gateway.host:gateway.port, default 127.0.0.1:18790).

Quick Setup - WeCom App:

1. Create an app

  • Go to WeCom Admin Console → App Management → Create App
  • Copy AgentId and Secret
  • Go to "My Company" page, copy CorpID

2. Configure receive message

  • In App details, click "Receive Message" → "Set API"
  • Set URL to http://your-server:18790/webhook/wecom-app
  • Generate Token and EncodingAESKey

3. Configure

{
  "channels": {
    "wecom_app": {
      "enabled": true,
      "corp_id": "wwxxxxxxxxxxxxxxxx",
      "corp_secret": "YOUR_CORP_SECRET",
      "agent_id": 1000002,
      "token": "YOUR_TOKEN",
      "encoding_aes_key": "YOUR_ENCODING_AES_KEY",
      "webhook_path": "/webhook/wecom-app",
      "allow_from": []
    }
  }
}

4. Run

malikclaw gateway

Note: WeCom webhook callbacks are served on the Gateway port (default 18790). Use a reverse proxy for HTTPS.

Quick Setup - WeCom AI Bot:

1. Create an AI Bot

  • Go to WeCom Admin Console → App Management → AI Bot
  • In the AI Bot settings, configure callback URL: http://your-server:18790/webhook/wecom-aibot
  • Copy Token and click "Random Generate" for EncodingAESKey

2. Configure

{
  "channels": {
    "wecom_aibot": {
      "enabled": true,
      "token": "YOUR_TOKEN",
      "encoding_aes_key": "YOUR_43_CHAR_ENCODING_AES_KEY",
      "webhook_path": "/webhook/wecom-aibot",
      "allow_from": [],
      "welcome_message": "Hello! How can I help you?"
    }
  }
}

3. Run

malikclaw gateway

Note: WeCom AI Bot uses streaming pull protocol — no reply timeout concerns. Long tasks (>30 seconds) automatically switch to response_url push delivery.

🌐 Community & Connect

Join the MalikClaw community and connect with the developer:

⚙️ Configuration

Config file: ~/.malikclaw/config.json

Environment Variables

You can override default paths using environment variables. This is useful for portable installations, containerized deployments, or running malikclaw as a system service. These variables are independent and control different paths.

| Variable | Description | Default Path |
| --- | --- | --- |
| MALIKCLAW_CONFIG | Overrides the path to the configuration file. This directly tells malikclaw which config.json to load, ignoring all other locations. | ~/.malikclaw/config.json |
| MALIKCLAW_HOME | Overrides the root directory for malikclaw data. This changes the default location of the workspace and other data directories. | ~/.malikclaw |

Examples:

# Run malikclaw using a specific config file
# The workspace path will be read from within that config file
MALIKCLAW_CONFIG=/etc/malikclaw/production.json malikclaw gateway

# Run malikclaw with all its data stored in /opt/malikclaw
# Config will be loaded from the default ~/.malikclaw/config.json
# Workspace will be created at /opt/malikclaw/workspace
MALIKCLAW_HOME=/opt/malikclaw malikclaw agent

# Use both for a fully customized setup
MALIKCLAW_HOME=/srv/malikclaw MALIKCLAW_CONFIG=/srv/malikclaw/main.json malikclaw gateway

Workspace Layout

Malikclaw stores data in your configured workspace (default: ~/.malikclaw/workspace):

~/.malikclaw/workspace/
├── sessions/          # Conversation sessions and history
├── memory/           # Long-term memory (MEMORY.md)
├── state/            # Persistent state (last channel, etc.)
├── cron/             # Scheduled jobs database
├── skills/           # Custom skills
├── AGENTS.md         # Agent behavior guide
├── HEARTBEAT.md      # Periodic task prompts (checked every 30 min)
├── IDENTITY.md       # Agent identity
├── SOUL.md           # Agent soul
└── USER.md           # User preferences

Skill Sources

By default, skills are loaded from:

  1. ~/.malikclaw/workspace/skills (workspace)
  2. ~/.malikclaw/skills (global)
  3. <current-working-directory>/skills (builtin)

For advanced/test setups, you can override the builtin skills root with:

export MALIKCLAW_BUILTIN_SKILLS=/path/to/skills

Unified Command Execution Policy

  • Generic slash commands are executed through a single path in pkg/agent/loop.go via commands.Executor.
  • Channel adapters no longer consume generic commands locally; they forward inbound text to the bus/agent path. Telegram still auto-registers supported commands at startup.
  • An unknown slash command (for example /foo) passes through to normal LLM processing.
  • A command that is registered but unsupported on the current channel (for example /show on WhatsApp) returns an explicit user-facing error and stops further processing.
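This policy can be sketched as a small dispatcher. The `commandRegistry` contents and the `dispatch` function are hypothetical, illustrating only the three outcomes described above:

```go
package main

import "fmt"

// commandRegistry maps each registered command to the channels that
// support it (illustrative data, not MalikClaw's real registry).
var commandRegistry = map[string][]string{
	"/show": {"telegram", "discord"},
	"/help": {"telegram", "discord", "whatsapp"},
}

// dispatch returns one of "llm", "execute", or "error" per the policy.
func dispatch(channel, text string) string {
	if len(text) == 0 || text[0] != '/' {
		return "llm" // not a slash command at all
	}
	supported, registered := commandRegistry[text]
	if !registered {
		return "llm" // unknown command passes through to the LLM
	}
	for _, ch := range supported {
		if ch == channel {
			return "execute" // handled centrally by the commands executor
		}
	}
	return "error" // registered but unsupported on this channel
}

func main() {
	fmt.Println(dispatch("whatsapp", "/show")) // error
	fmt.Println(dispatch("telegram", "/foo"))  // llm
	fmt.Println(dispatch("telegram", "/show")) // execute
}
```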

🔒 Security Sandbox

Malikclaw runs in a sandboxed environment by default. The agent can only access files and execute commands within the configured workspace.

Default Configuration

{
  "agents": {
    "defaults": {
      "workspace": "~/.malikclaw/workspace",
      "restrict_to_workspace": true
    }
  }
}
| Option | Default | Description |
| --- | --- | --- |
| workspace | ~/.malikclaw/workspace | Working directory for the agent |
| restrict_to_workspace | true | Restrict file/command access to the workspace |

Protected Tools

When restrict_to_workspace: true, the following tools are sandboxed:

| Tool | Function | Restriction |
| --- | --- | --- |
| read_file | Read files | Only files within the workspace |
| write_file | Write files | Only files within the workspace |
| list_dir | List directories | Only directories within the workspace |
| edit_file | Edit files | Only files within the workspace |
| append_file | Append to files | Only files within the workspace |
| exec | Execute commands | Command paths must be within the workspace |
Additional Exec Protection

Even with restrict_to_workspace: false, the exec tool blocks these dangerous commands:

  • rm -rf, del /f, rmdir /s — Bulk deletion
  • format, mkfs, diskpart — Disk formatting
  • dd if= — Disk imaging
  • Writing to /dev/sd[a-z] — Direct disk writes
  • shutdown, reboot, poweroff — System shutdown
  • Fork bomb :(){ :|:& };:
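A guard like this can be sketched with regular expressions. The patterns below are an illustrative subset, not MalikClaw's actual rule set:

```go
package main

import (
	"fmt"
	"regexp"
)

// dangerous holds a sample of the blocked patterns listed above.
var dangerous = []*regexp.Regexp{
	regexp.MustCompile(`\brm\s+-rf\b`),             // bulk deletion
	regexp.MustCompile(`\bmkfs\b`),                 // disk formatting
	regexp.MustCompile(`\bdd\s+if=`),               // disk imaging
	regexp.MustCompile(`>\s*/dev/sd[a-z]`),         // direct disk writes
	regexp.MustCompile(`:\(\)\s*\{\s*:\|:&\s*\}\s*;:`), // fork bomb
}

// blocked reports whether a command matches any dangerous pattern.
func blocked(cmd string) bool {
	for _, re := range dangerous {
		if re.MatchString(cmd) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(blocked("rm -rf /tmp/x")) // true
	fmt.Println(blocked("ls -la"))        // false
}
```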

Error Examples

[ERROR] tool: Tool execution failed
{tool=exec, error=Command blocked by safety guard (path outside working dir)}
[ERROR] tool: Tool execution failed
{tool=exec, error=Command blocked by safety guard (dangerous pattern detected)}

Disabling Restrictions (Security Risk)

If you need the agent to access paths outside the workspace:

Method 1: Config file

{
  "agents": {
    "defaults": {
      "restrict_to_workspace": false
    }
  }
}

Method 2: Environment variable

export MALIKCLAW_AGENTS_DEFAULTS_RESTRICT_TO_WORKSPACE=false

⚠️ Warning: Disabling this restriction allows the agent to access any path on your system. Use with caution in controlled environments only.

Security Boundary Consistency

The restrict_to_workspace setting applies consistently across all execution paths:

| Execution Path | Security Boundary |
| --- | --- |
| Main Agent | restrict_to_workspace |
| Subagent / Spawn | Inherits the same restriction ✅ |
| Heartbeat tasks | Inherits the same restriction ✅ |

All paths share the same workspace restriction — there's no way to bypass the security boundary through subagents or scheduled tasks.

Heartbeat (Periodic Tasks)

Malikclaw can perform periodic tasks automatically. Create a HEARTBEAT.md file in your workspace:

# Periodic Tasks

- Check my email for important messages
- Review my calendar for upcoming events
- Check the weather forecast

The agent will read this file every 30 minutes (configurable) and execute any tasks using available tools.

Async Tasks with Spawn

For long-running tasks (web search, API calls), use the spawn tool to create a subagent:

# Periodic Tasks

## Quick Tasks (respond directly)

- Report current time

## Long Tasks (use spawn for async)

- Search the web for AI news and summarize
- Check email and report important messages

Key behaviors:

| Feature | Description |
| --- | --- |
| spawn | Creates an async subagent; doesn't block the heartbeat |
| Independent context | The subagent has its own context, with no session history |
| message tool | The subagent communicates with the user directly via the message tool |
| Non-blocking | After spawning, the heartbeat continues to the next task |

How Subagent Communication Works

Heartbeat triggers
    ↓
Agent reads HEARTBEAT.md
    ↓
For long task: spawn subagent
    ↓                           ↓
Continue to next task      Subagent works independently
    ↓                           ↓
All tasks done            Subagent uses "message" tool
    ↓                           ↓
Respond HEARTBEAT_OK      User receives result directly

The subagent has access to tools (message, web_search, etc.) and can communicate with the user independently without going through the main agent.

Configuration:

{
  "heartbeat": {
    "enabled": true,
    "interval": 30
  }
}
| Option | Default | Description |
| --- | --- | --- |
| enabled | true | Enable/disable the heartbeat |
| interval | 30 | Check interval in minutes (minimum: 5) |

Environment variables:

  • MALIKCLAW_HEARTBEAT_ENABLED=false to disable
  • MALIKCLAW_HEARTBEAT_INTERVAL=60 to change interval

Providers

Note

Groq provides free voice transcription via Whisper. If configured, audio messages from any channel will be automatically transcribed at the agent level.

| Provider | Purpose | Get API Key |
| --- | --- | --- |
| gemini | LLM (Gemini direct) | aistudio.google.com |
| zhipu | LLM (Zhipu direct) | bigmodel.cn |
| volcengine | LLM (Volcengine direct) | volcengine.com |
| openrouter | LLM (recommended, access to all models) | openrouter.ai |
| anthropic | LLM (Claude direct) | console.anthropic.com |
| openai | LLM (GPT direct) | platform.openai.com |
| deepseek | LLM (DeepSeek direct) | platform.deepseek.com |
| qwen | LLM (Qwen direct) | dashscope.console.aliyun.com |
| groq | LLM + voice transcription (Whisper) | console.groq.com |
| cerebras | LLM (Cerebras direct) | cerebras.ai |
| vivgrid | LLM (Vivgrid direct) | vivgrid.com |

Model Configuration (model_list)

What's New? Malikclaw now uses a model-centric configuration approach. Simply specify vendor/model format (e.g., zhipu/glm-4.7) to add new providers—zero code changes required!

This design also enables multi-agent support with flexible provider selection:

  • Different agents, different providers: Each agent can use its own LLM provider
  • Model fallbacks: Configure primary and fallback models for resilience
  • Load balancing: Distribute requests across multiple endpoints
  • Centralized configuration: Manage all providers in one place

📋 All Supported Vendors

| Vendor | Model Prefix | Default API Base | Protocol | API Key |
| --- | --- | --- | --- | --- |
| OpenAI | openai/ | https://api.openai.com/v1 | OpenAI | Get Key |
| Anthropic | anthropic/ | https://api.anthropic.com/v1 | Anthropic | Get Key |
| Zhipu AI (GLM) | zhipu/ | https://open.bigmodel.cn/api/paas/v4 | OpenAI | Get Key |
| DeepSeek | deepseek/ | https://api.deepseek.com/v1 | OpenAI | Get Key |
| Google Gemini | gemini/ | https://generativelanguage.googleapis.com/v1beta | OpenAI | Get Key |
| Groq | groq/ | https://api.groq.com/openai/v1 | OpenAI | Get Key |
| Moonshot | moonshot/ | https://api.moonshot.cn/v1 | OpenAI | Get Key |
| Qwen (Tongyi Qianwen) | qwen/ | https://dashscope.aliyuncs.com/compatible-mode/v1 | OpenAI | Get Key |
| NVIDIA | nvidia/ | https://integrate.api.nvidia.com/v1 | OpenAI | Get Key |
| Ollama | ollama/ | http://localhost:11434/v1 | OpenAI | Local (no key needed) |
| OpenRouter | openrouter/ | https://openrouter.ai/api/v1 | OpenAI | Get Key |
| LiteLLM Proxy | litellm/ | http://localhost:4000/v1 | OpenAI | Your LiteLLM proxy key |
| vLLM | vllm/ | http://localhost:8000/v1 | OpenAI | Local |
| Cerebras | cerebras/ | https://api.cerebras.ai/v1 | OpenAI | Get Key |
| VolcEngine (Doubao) | volcengine/ | https://ark.cn-beijing.volces.com/api/v3 | OpenAI | Get Key |
| Shengsuanyun | shengsuanyun/ | https://router.shengsuanyun.com/api/v1 | OpenAI | - |
| BytePlus | byteplus/ | https://ark.ap-southeast.bytepluses.com/api/v3 | OpenAI | Get Key |
| Vivgrid | vivgrid/ | https://api.vivgrid.com/v1 | OpenAI | Get Key |
| LongCat | longcat/ | https://api.longcat.chat/openai | OpenAI | Get Key |
| ModelScope | modelscope/ | https://api-inference.modelscope.cn/v1 | OpenAI | Get Token |
| Antigravity | antigravity/ | Google Cloud | Custom | OAuth only |
| GitHub Copilot | github-copilot/ | localhost:4321 | gRPC | - |

Basic Configuration

{
  "model_list": [
    {
      "model_name": "ark-code-latest",
      "model": "volcengine/ark-code-latest",
      "api_key": "sk-your-api-key"
    },
    {
      "model_name": "gpt-5.4",
      "model": "openai/gpt-5.4",
      "api_key": "sk-your-openai-key"
    },
    {
      "model_name": "claude-sonnet-4.6",
      "model": "anthropic/claude-sonnet-4.6",
      "api_key": "sk-ant-your-key"
    },
    {
      "model_name": "glm-4.7",
      "model": "zhipu/glm-4.7",
      "api_key": "your-zhipu-key"
    }
  ],
  "agents": {
    "defaults": {
      "model": "gpt-5.4"
    }
  }
}

Vendor-Specific Examples

OpenAI

{
  "model_name": "gpt-5.4",
  "model": "openai/gpt-5.4",
  "api_key": "sk-..."
}

VolcEngine (Doubao)

{
  "model_name": "ark-code-latest",
  "model": "volcengine/ark-code-latest",
  "api_key": "sk-..."
}

Zhipu AI (GLM)

{
  "model_name": "glm-4.7",
  "model": "zhipu/glm-4.7",
  "api_key": "your-key"
}

DeepSeek

{
  "model_name": "deepseek-chat",
  "model": "deepseek/deepseek-chat",
  "api_key": "sk-..."
}

Anthropic (with API key)

{
  "model_name": "claude-sonnet-4.6",
  "model": "anthropic/claude-sonnet-4.6",
  "api_key": "sk-ant-your-key"
}

Run malikclaw auth login --provider anthropic to paste your API token.

Anthropic Messages API (native format)

For direct Anthropic API access or custom endpoints that only support Anthropic's native message format:

{
  "model_name": "claude-opus-4-6",
  "model": "anthropic-messages/claude-opus-4-6",
  "api_key": "sk-ant-your-key",
  "api_base": "https://api.anthropic.com"
}

Use anthropic-messages protocol when:

  • Using third-party proxies that only support Anthropic's native /v1/messages endpoint (not OpenAI-compatible /v1/chat/completions)
  • Connecting to services such as MiniMax or Synthetic that require Anthropic's native message format
  • The existing anthropic protocol returns 404 errors (indicating the endpoint doesn't support OpenAI-compatible format)

Note: The anthropic protocol uses OpenAI-compatible format (/v1/chat/completions), while anthropic-messages uses Anthropic's native format (/v1/messages). Choose based on your endpoint's supported format.
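The protocol-to-endpoint mapping can be sketched as a small helper; `endpointFor` is illustrative, not MalikClaw's actual router:

```go
package main

import "fmt"

// endpointFor maps a protocol name to the request path described in
// the note above.
func endpointFor(protocol string) string {
	if protocol == "anthropic-messages" {
		return "/v1/messages" // Anthropic's native Messages API
	}
	// "anthropic", "openai", and other OpenAI-compatible protocols
	return "/v1/chat/completions"
}

func main() {
	fmt.Println(endpointFor("anthropic"))          // /v1/chat/completions
	fmt.Println(endpointFor("anthropic-messages")) // /v1/messages
}
```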

Ollama (local)

{
  "model_name": "llama3",
  "model": "ollama/llama3"
}

Custom Proxy/API

{
  "model_name": "my-custom-model",
  "model": "openai/custom-model",
  "api_base": "https://my-proxy.com/v1",
  "api_key": "sk-...",
  "request_timeout": 300
}

LiteLLM Proxy

{
  "model_name": "lite-gpt4",
  "model": "litellm/lite-gpt4",
  "api_base": "http://localhost:4000/v1",
  "api_key": "sk-..."
}

Malikclaw strips only the outer litellm/ prefix before sending the request, so proxy aliases like litellm/lite-gpt4 send lite-gpt4, while litellm/openai/gpt-4o sends openai/gpt-4o.
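That prefix rule can be sketched with a single `strings.TrimPrefix` call, which removes at most one leading occurrence; `stripProxyPrefix` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// stripProxyPrefix removes only the outermost "litellm/" prefix,
// leaving any inner vendor prefix intact.
func stripProxyPrefix(model string) string {
	return strings.TrimPrefix(model, "litellm/")
}

func main() {
	fmt.Println(stripProxyPrefix("litellm/lite-gpt4"))     // lite-gpt4
	fmt.Println(stripProxyPrefix("litellm/openai/gpt-4o")) // openai/gpt-4o
}
```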

Load Balancing

Configure multiple endpoints for the same model name—Malikclaw will automatically round-robin between them:

{
  "model_list": [
    {
      "model_name": "gpt-5.4",
      "model": "openai/gpt-5.4",
      "api_base": "https://api1.example.com/v1",
      "api_key": "sk-key1"
    },
    {
      "model_name": "gpt-5.4",
      "model": "openai/gpt-5.4",
      "api_base": "https://api2.example.com/v1",
      "api_key": "sk-key2"
    }
  ]
}
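Round-robin selection over duplicate model_name entries can be sketched as follows; the `pool` type and its field names are illustrative, not MalikClaw's internals:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type endpoint struct{ APIBase string }

// pool cycles through all endpoints registered under one model name.
type pool struct {
	endpoints []endpoint
	next      atomic.Uint64
}

// pick returns the next endpoint in round-robin order; the atomic
// counter keeps selection safe under concurrent requests.
func (p *pool) pick() endpoint {
	i := p.next.Add(1) - 1
	return p.endpoints[i%uint64(len(p.endpoints))]
}

func main() {
	p := &pool{endpoints: []endpoint{
		{APIBase: "https://api1.example.com/v1"},
		{APIBase: "https://api2.example.com/v1"},
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(p.pick().APIBase) // alternates api1, api2, api1, api2
	}
}
```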

Migration from Legacy providers Config

The old providers configuration is deprecated but still supported for backward compatibility.

Old Config (deprecated):

{
  "providers": {
    "zhipu": {
      "api_key": "your-key",
      "api_base": "https://open.bigmodel.cn/api/paas/v4"
    }
  },
  "agents": {
    "defaults": {
      "provider": "zhipu",
      "model": "glm-4.7"
    }
  }
}

New Config (recommended):

{
  "model_list": [
    {
      "model_name": "glm-4.7",
      "model": "zhipu/glm-4.7",
      "api_key": "your-key"
    }
  ],
  "agents": {
    "defaults": {
      "model": "glm-4.7"
    }
  }
}

For a detailed migration guide, see docs/migration/model-list-migration.md.

Provider Architecture

MalikClaw routes providers by protocol family:

  • OpenAI-compatible protocol: OpenRouter, OpenAI-compatible gateways, Groq, Zhipu, and vLLM-style endpoints.
  • Anthropic protocol: Claude-native API behavior.
  • Codex/OAuth path: OpenAI OAuth/token authentication route.

This keeps the runtime lightweight while making new OpenAI-compatible backends mostly a config operation (api_base + api_key).

Zhipu

1. Get an API key and base URL

2. Configure

{
  "agents": {
    "defaults": {
      "workspace": "~/.malikclaw/workspace",
      "model": "glm-4.7",
      "max_tokens": 8192,
      "temperature": 0.7,
      "max_tool_iterations": 20
    }
  },
  "providers": {
    "zhipu": {
      "api_key": "Your API Key",
      "api_base": "https://open.bigmodel.cn/api/paas/v4"
    }
  }
}

3. Run

malikclaw agent -m "Hello"

Full config example
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "session": {
    "dm_scope": "per-channel-peer",
    "backlog_limit": 20
  },
  "providers": {
    "openrouter": {
      "api_key": "sk-or-v1-xxx"
    },
    "groq": {
      "api_key": "gsk_xxx"
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456:ABC...",
      "allow_from": ["123456789"]
    },
    "discord": {
      "enabled": true,
      "token": "",
      "allow_from": [""]
    },
    "whatsapp": {
      "enabled": false,
      "bridge_url": "ws://localhost:3001",
      "use_native": false,
      "session_store_path": "",
      "allow_from": []
    },
    "feishu": {
      "enabled": false,
      "app_id": "cli_xxx",
      "app_secret": "xxx",
      "encrypt_key": "",
      "verification_token": "",
      "allow_from": []
    },
    "qq": {
      "enabled": false,
      "app_id": "",
      "app_secret": "",
      "allow_from": []
    }
  },
  "tools": {
    "web": {
      "brave": {
        "enabled": false,
        "api_key": "BSA...",
        "max_results": 5
      },
      "duckduckgo": {
        "enabled": true,
        "max_results": 5
      },
      "perplexity": {
        "enabled": false,
        "api_key": "",
        "max_results": 5
      },
      "searxng": {
        "enabled": false,
        "base_url": "http://localhost:8888",
        "max_results": 5
      }
    },
    "cron": {
      "exec_timeout_minutes": 5
    }
  },
  "heartbeat": {
    "enabled": true,
    "interval": 30
  }
}

CLI Reference

| Command | Description |
| --- | --- |
| malikclaw onboard | Initialize config & workspace |
| malikclaw agent -m "..." | Chat with the agent |
| malikclaw agent | Interactive chat mode |
| malikclaw gateway | Start the gateway |
| malikclaw status | Show status |
| malikclaw cron list | List all scheduled jobs |
| malikclaw cron add ... | Add a scheduled job |

Scheduled Tasks / Reminders

MalikClaw supports scheduled reminders and recurring tasks through the cron tool:

  • One-time reminders: "Remind me in 10 minutes" → triggers once after 10 minutes
  • Recurring tasks: "Remind me every 2 hours" → triggers every 2 hours
  • Cron expressions: "Remind me at 9am daily" → uses a cron expression (e.g. 0 9 * * *)

Jobs are stored in ~/.malikclaw/workspace/cron/ and processed automatically.

🤝 Contribute & Roadmap

PRs welcome! The codebase is intentionally small and readable. 🤗

See our full Community Roadmap.


🐛 Troubleshooting

Web search says "API key configuration issue"

This is expected if you haven't configured a search API key yet. MalikClaw will instead provide helpful links for manual searching.

Search Provider Priority

MalikClaw automatically selects the best available search provider in this order:

  1. Perplexity (if enabled and API key configured) - AI-powered search with citations
  2. Brave Search (if enabled and API key configured) - Privacy-focused paid API ($5/1000 queries)
  3. SearXNG (if enabled and base_url configured) - Self-hosted metasearch aggregating 70+ engines (free)
  4. DuckDuckGo (if enabled, default fallback) - No API key required (free)
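The fallback chain above can be sketched as a single switch over the config flags. The struct and field names are simplified stand-ins for the tools.web settings, not MalikClaw's actual types:

```go
package main

import "fmt"

// webSearch is a simplified stand-in for the tools.web config section.
type webSearch struct {
	perplexityOn  bool
	perplexityKey string
	braveOn       bool
	braveKey      string
	searxngOn     bool
	searxngURL    string
	duckduckgoOn  bool
}

// pickProvider applies the documented priority order: Perplexity,
// then Brave, then SearXNG, with DuckDuckGo as the keyless fallback.
func pickProvider(c webSearch) string {
	switch {
	case c.perplexityOn && c.perplexityKey != "":
		return "perplexity"
	case c.braveOn && c.braveKey != "":
		return "brave"
	case c.searxngOn && c.searxngURL != "":
		return "searxng"
	case c.duckduckgoOn:
		return "duckduckgo"
	}
	return "none"
}

func main() {
	// Default install: only DuckDuckGo is enabled.
	fmt.Println(pickProvider(webSearch{duckduckgoOn: true})) // duckduckgo
	// Brave configured: it outranks the DuckDuckGo fallback.
	fmt.Println(pickProvider(webSearch{braveOn: true, braveKey: "BSA...", duckduckgoOn: true})) // brave
}
```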

Web Search Configuration Options

Option 1 (Best Results): Perplexity AI Search

{
  "tools": {
    "web": {
      "perplexity": {
        "enabled": true,
        "api_key": "YOUR_PERPLEXITY_API_KEY",
        "max_results": 5
      }
    }
  }
}

Option 2 (Paid API): Get an API key at https://brave.com/search/api ($5/1000 queries, ~$5-6/month)

{
  "tools": {
    "web": {
      "brave": {
        "enabled": true,
        "api_key": "YOUR_BRAVE_API_KEY",
        "max_results": 5
      }
    }
  }
}

Option 3 (Self-Hosted): Deploy your own SearXNG instance

{
  "tools": {
    "web": {
      "searxng": {
        "enabled": true,
        "base_url": "http://your-server:8888",
        "max_results": 5
      }
    }
  }
}

Benefits of SearXNG:

  • Zero cost: No API fees or rate limits
  • Privacy-focused: Self-hosted, no tracking
  • Aggregate results: Queries 70+ search engines simultaneously
  • Perfect for cloud VMs: Solves datacenter IP blocking issues (Oracle Cloud, GCP, AWS, Azure)
  • No API key needed: Just deploy and configure the base URL

Option 4 (No Setup Required): DuckDuckGo is enabled by default as fallback (no API key needed)

For reference, a full tools.web block in ~/.malikclaw/config.json lists all four providers; set "enabled": true (and add the API key, where one is required) for the ones you use:

{
  "tools": {
    "web": {
      "brave": {
        "enabled": false,
        "api_key": "YOUR_BRAVE_API_KEY",
        "max_results": 5
      },
      "duckduckgo": {
        "enabled": true,
        "max_results": 5
      },
      "perplexity": {
        "enabled": false,
        "api_key": "YOUR_PERPLEXITY_API_KEY",
        "max_results": 5
      },
      "searxng": {
        "enabled": false,
        "base_url": "http://your-searxng-instance:8888",
        "max_results": 5
      }
    }
  }
}

Getting content filtering errors

Some providers (like Zhipu) have content filtering. Try rephrasing your query or use a different model.

Telegram bot says "Conflict: terminated by other getUpdates"

This happens when another instance of the bot is running. Make sure only one malikclaw gateway is running at a time.


📝 API Key Comparison

| Service | Free Tier | Use Case |
| --- | --- | --- |
| OpenRouter | 200K tokens/month | Multiple models (Claude, GPT-4, etc.) |
| Volcengine CodingPlan | ¥9.9/first month | Multiple SOTA models (Doubao, DeepSeek, etc.) |
| Zhipu | 200K tokens/month | GLM models with generous free tier |
| Brave Search | Paid ($5/1000 queries) | Web search functionality |
| SearXNG | Unlimited (self-hosted) | Privacy-focused metasearch (70+ engines) |
| Groq | Free tier available | Fast inference (Llama, Mixtral) |
| Cerebras | Free tier available | Fast inference (Llama, Qwen, etc.) |
| LongCat | Up to 5M tokens/day | Fast inference (free tier) |
| ModelScope | 2000 requests/day | Free inference (Qwen, GLM, DeepSeek, etc.) |

MalikClaw Eagle Logo - The Edge AI Champion for Linux and Android

Built with 🦅 by Muhammad Abdullah Athar
