PicoClaw Raspberry Pi Setup and Configuration Guide

Complete walkthrough from installing PicoClaw to configuring Telegram/QQ channels, including systemd service setup, proxy configuration, and troubleshooting common issues.
Environment Information
- Device: Raspberry Pi (aarch64)
- OS: Debian GNU/Linux 13 (trixie)
- IP: 192.168.10.44
- PicoClaw Version: v0.2.4
Part 1: Install PicoClaw
1.1 Download Binary Files
Since GitHub downloads can be slow, I recommend downloading locally (on Mac) and uploading to the Raspberry Pi:
```bash
# Download on Mac
wget https://github.com/sipeed/picoclaw/releases/download/v0.2.4/picoclaw_Linux_arm64.tar.gz

# Extract
tar xzf picoclaw_Linux_arm64.tar.gz

# Upload to Raspberry Pi
scp picoclaw picoclaw-launcher picoclaw-launcher-tui raspberry@192.168.10.44:~/
```

1.2 Initialize Configuration
```bash
ssh raspberry@192.168.10.44

# Initialize PicoClaw
./picoclaw onboard
```

After initialization, the following directory structure is created:
```
~/.picoclaw/
├── config.json     # Main configuration file
├── .security.yml   # Sensitive info (API keys, tokens)
├── workspace/      # Working directory
└── logs/           # Log files
```

Part 2: Configure LLM Providers
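The model list configured in this part maps a short `model_name` alias to a provider-qualified `model` string and an `api_base`. Conceptually, when `agents.defaults.model_name` is set, PicoClaw resolves it against `model_list`; here is an illustrative Python sketch of that lookup (my own sketch, not PicoClaw's actual code), using two entries from the configuration below:

```python
# Illustrative sketch of model_name -> model_list resolution
# (not PicoClaw's internal implementation).
MODEL_LIST = [
    {"model_name": "kimi-k2.5", "model": "moonshot/kimi-k2.5",
     "api_base": "https://coding.dashscope.aliyuncs.com/v1"},
    {"model_name": "nvidia-kimi-k2.5", "model": "nvidia/moonshotai/kimi-k2.5"},
]

def resolve(model_name: str) -> dict:
    """Return the first model_list entry whose alias matches."""
    for entry in MODEL_LIST:
        if entry["model_name"] == model_name:
            return entry
    raise KeyError(f"model_name {model_name!r} not found in model_list")

print(resolve("kimi-k2.5")["model"])  # moonshot/kimi-k2.5
```

Note that an entry without an explicit `api_base` (like the NVIDIA one) falls back to whatever default the provider prefix implies.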
2.1 Edit Configuration File
```bash
vim ~/.picoclaw/config.json
```

Add a model_list:
```json
{
  "agents": {
    "defaults": {
      "model_name": "kimi-k2.5",
      "max_tool_iterations": 100,
      "max_tokens": 8192,
      "tool_feedback": {
        "enabled": false
      }
    }
  },
  "model_list": [
    {
      "model_name": "kimi-k2.5",
      "model": "moonshot/kimi-k2.5",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "qwen3.5-plus",
      "model": "qwen/qwen3.5-plus",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "qwen3-max",
      "model": "qwen/qwen3-max-2026-01-23",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "qwen3-coder-plus",
      "model": "qwen/qwen3-coder-plus",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "glm-5",
      "model": "zhipu/glm-5",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "glm-4.7",
      "model": "zhipu/glm-4.7",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "minimax-m2.5",
      "model": "minimax/MiniMax-M2.5",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "nvidia-kimi-k2.5",
      "model": "nvidia/moonshotai/kimi-k2.5"
    }
  ]
}
```

2.2 Configure API Keys
Edit .security.yml:
```bash
vim ~/.picoclaw/.security.yml
```

Add model API keys (replace with your own):
```yaml
model_list:
  kimi-k2.5:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  qwen3.5-plus:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  qwen3-max:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  qwen3-coder-plus:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  glm-5:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  glm-4.7:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  minimax-m2.5:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  nvidia-kimi-k2.5:0:
    api_keys:
      - nvapi-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your NVIDIA API key
channels: {}
web: {}
skills: {}
```

Part 3: Configure Channels
3.1 Telegram Configuration
3.1.1 Get Bot Token
- Search for @BotFather in Telegram
- Send /newbot to create a new bot
- Follow the prompts to set the name and username
- Copy the token (format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz)
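Before wiring the token into the config, a quick shape check can save a debugging round-trip. Below is a minimal sketch (my own helper, not part of PicoClaw or the Bot API) that checks the rough "numeric bot ID, colon, alphanumeric secret" shape shown above:

```python
import re

# Hypothetical helper: rough shape check for a Telegram bot token
# (digits, a colon, then a long run of letters/digits/underscore/hyphen).
TOKEN_RE = re.compile(r"^\d+:[A-Za-z0-9_-]{20,}$")

def looks_like_bot_token(token: str) -> bool:
    return TOKEN_RE.fullmatch(token) is not None

print(looks_like_bot_token("123456789:ABCdefGHIjklMNOpqrsTUVwxyz"))  # True
print(looks_like_bot_token("bot_token=123456789"))                   # False
```

This only validates the format; section 8.1 shows how to verify the token against the live API with `getMe`.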
3.1.2 Configure config.json
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "proxy": "http://192.168.10.105:7890",
      "allow_from": [],
      "group_trigger": {
        "*": {
          "require_mention": true
        }
      },
      "typing": {
        "enabled": true
      },
      "placeholder": {
        "enabled": false,
        "text": "Thinking... 💭"
      },
      "streaming": {
        "enabled": true,
        "throttle_seconds": 3,
        "min_growth_chars": 200
      },
      "use_markdown_v2": true
    }
  }
}
```

3.1.3 Configure security.yml
⚠️ Important: Telegram uses the token field (not bot_token)
```yaml
channels:
  telegram:
    token: "1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmno" # Your Telegram Bot Token
```

3.2 QQ Configuration
3.2.1 Get AppID and Secret
- Visit QQ Open Platform
- Create a bot application
- Get the AppID and AppSecret
3.2.2 Configure config.json
```json
{
  "channels": {
    "qq": {
      "enabled": true,
      "app_id": "1234567890",
      "allow_from": null,
      "group_trigger": {
        "*": {
          "require_mention": true,
          "ignore_other_mentions": true,
          "tool_policy": "restricted",
          "history_limit": 50
        }
      },
      "max_message_length": 0,
      "max_base64_file_size_mib": 0,
      "send_markdown": false,
      "reasoning_channel_id": "",
      "app_secret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }
  }
}
```

Note: JSON does not allow comments, so the app_secret placeholder above must simply be replaced with your QQ App Secret.

3.2.3 Configure security.yml
```yaml
channels:
  qq:
    app_secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Your QQ App Secret
```

Part 4: Install Required Dependencies
4.1 Install git
```bash
export http_proxy=http://192.168.10.105:7890
export https_proxy=http://192.168.10.105:7890
sudo apt-get update
sudo apt-get install -y git
```

4.2 Install Node.js
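The Node.js tarball downloaded below must match the Pi's architecture; the aarch64 machine in this guide maps to Node's linux-arm64 build. A small sketch of that mapping (a hypothetical helper covering only the common cases, not an official Node.js table):

```python
# Hypothetical mapping from `uname -m` machine names to Node.js
# release tarball suffixes; covers only the common cases.
NODE_ARCH = {
    "aarch64": "linux-arm64",
    "armv7l": "linux-armv7l",
    "x86_64": "linux-x64",
}

def node_dist_suffix(machine: str) -> str:
    """Return the Node.js dist suffix for a `uname -m` machine name."""
    try:
        return NODE_ARCH[machine]
    except KeyError:
        raise ValueError(f"no known Node.js build for machine {machine!r}")

# On the Raspberry Pi in this guide (aarch64):
print(node_dist_suffix("aarch64"))  # linux-arm64
```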
```bash
export http_proxy=http://192.168.10.105:7890
export https_proxy=http://192.168.10.105:7890

# Download Node.js
cd /tmp
curl -fsSL -o node.tar.xz "https://nodejs.org/dist/v20.11.1/node-v20.11.1-linux-arm64.tar.xz"

# Extract and install
tar xJf node.tar.xz
sudo cp -r node-v20.11.1-linux-arm64/* /usr/local/

# Verify
node --version  # v20.11.1
npm --version   # 10.2.4
npx --version   # 10.2.4
```

Part 5: Configure Systemd Service (Recommended)
Using systemd service allows PicoClaw to start automatically on boot and supports auto-restart.
5.1 Create Service File
Create /etc/systemd/system/picoclaw.service:
```bash
sudo tee /etc/systemd/system/picoclaw.service << EOF
[Unit]
Description=PicoClaw Gateway
After=network.target

[Service]
Type=simple
User=raspberry
WorkingDirectory=/home/raspberry
Environment="HTTP_PROXY=http://192.168.10.105:7890"
Environment="HTTPS_PROXY=http://192.168.10.105:7890"
Environment="http_proxy=http://192.168.10.105:7890"
Environment="https_proxy=http://192.168.10.105:7890"
Environment="HOME=/home/raspberry"
Environment="USER=raspberry"
ExecStart=/home/raspberry/picoclaw gateway
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF
```

5.2 Start Service
```bash
# Reload systemd
sudo systemctl daemon-reload

# Enable auto-start on boot
sudo systemctl enable picoclaw

# Start service
sudo systemctl start picoclaw

# Check status
sudo systemctl status picoclaw
```

5.3 Management Commands
```bash
# Check status
sudo systemctl status picoclaw

# Stop service
sudo systemctl stop picoclaw

# Start service
sudo systemctl start picoclaw

# Restart service
sudo systemctl restart picoclaw

# View logs
sudo journalctl -u picoclaw -f
```

5.4 Verify Running
When the service starts successfully, you'll see:
```
● picoclaw.service - PicoClaw Gateway
     Loaded: loaded (/etc/systemd/system/picoclaw.service; enabled)
     Active: active (running) since ...
   Main PID: xxxx (picoclaw)
     CGroup: /system.slice/picoclaw.service
             └─xxxx /home/raspberry/picoclaw gateway
```

And in the logs:

```
✓ Channels enabled: [telegram feishu discord qq]
✓ Gateway started on 0.0.0.0:18790
```

Part 6: Enable Tools
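The exec tool enabled in this part runs shell commands subject to deny patterns (`enable_deny_patterns`, `custom_deny_patterns`). The gating idea, refusing any command that matches a deny pattern, can be pictured with a short sketch (illustrative only; the patterns below are my own examples, and PicoClaw's built-in list and matching rules may differ):

```python
import re

# Illustrative deny-pattern gate; these example patterns are NOT
# PicoClaw's built-in deny list.
DENY_PATTERNS = [r"\brm\s+-rf\s+/", r"\bmkfs\b", r"\bdd\s+if="]

def is_command_allowed(cmd: str, deny_patterns=DENY_PATTERNS) -> bool:
    """Refuse any command matching one of the deny patterns."""
    return not any(re.search(p, cmd) for p in deny_patterns)

print(is_command_allowed("ls -la ~/.picoclaw"))           # True
print(is_command_allowed("rm -rf / --no-preserve-root"))  # False
```

With `custom_deny_patterns` left as `null` in the config below, only the tool's defaults apply; setting it would add patterns of this kind.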
6.1 Enable exec Tool (Execute Shell Commands)
Edit config.json:
```json
{
  "tools": {
    "exec": {
      "enabled": true,
      "enable_deny_patterns": true,
      "allow_remote": true,
      "custom_deny_patterns": null,
      "custom_allow_patterns": null,
      "timeout_seconds": 60
    }
  }
}
```

6.2 Enable Other Common Tools
```json
{
  "tools": {
    "skills": {
      "enabled": true
    },
    "web": {
      "enabled": true
    },
    "cron": {
      "enabled": true
    },
    "spawn": {
      "enabled": true
    },
    "subagent": {
      "enabled": true
    }
  }
}
```

Part 7: Start Gateway
7.1 Start Commands
```bash
# Foreground start (for debugging)
./picoclaw gateway

# Background start (for production)
nohup ./picoclaw gateway > /dev/null 2>&1 &

# Start with logging
nohup ./picoclaw gateway > ~/.picoclaw/logs/gateway.log 2>&1 &
```

7.2 Verify Startup
```bash
# Check process
ps aux | grep picoclaw

# Check port
netstat -tlnp | grep 18790

# View logs
tail -f ~/.picoclaw/logs/gateway.log
```

When successfully started, you'll see:

```
✓ Channels enabled: [telegram qq]
✓ Gateway started on 0.0.0.0:18790
```

Part 8: Common Issues and Solutions
8.1 Telegram Not Responding
Issue: Gateway is running but Telegram doesn't reply
Solution:
- Check that the proxy is configured correctly
- Confirm that .security.yml uses token, not bot_token
- Verify that the Bot Token is correct
```bash
# Test Telegram API connectivity
curl "https://api.telegram.org/bot<YOUR_TOKEN>/getMe"
```

8.2 Telegram Showing Tool Call Format
Issue: Replies showing write_file JSON format
Solution: Disable tool_feedback
```json
{
  "agents": {
    "defaults": {
      "tool_feedback": {
        "enabled": false
      }
    }
  }
}
```

8.3 max_tool_iterations Error
Issue: I've reached max_tool_iterations without a final response
Solution: Increase the limit
```json
{
  "agents": {
    "defaults": {
      "max_tool_iterations": 100,
      "max_tokens": 8192
    }
  }
}
```

8.4 Channel Not Enabling
Issue: enabled_channels=0
Solution:
- Check that the channel's enabled is true
- Check that the credential field names in .security.yml are correct
- Confirm that the credentials are not empty
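These checks can be automated: after parsing .security.yml into a dict (e.g. with yaml.safe_load from PyYAML), a few lines catch both the empty-field and the wrong-field-name cases. A sketch operating on the already-parsed data, using the field names this guide uses (token for Telegram, app_secret for QQ):

```python
# Sketch: lint channel credentials from a parsed .security.yml dict.
# Field names follow this guide: telegram uses `token`, qq uses `app_secret`.
REQUIRED_FIELDS = {"telegram": "token", "qq": "app_secret"}

def find_credential_problems(security: dict) -> list:
    problems = []
    channels = security.get("channels") or {}
    for channel, field in REQUIRED_FIELDS.items():
        conf = channels.get(channel)
        if conf is None:
            continue  # channel not configured at all; nothing to check
        if not conf.get(field):
            problems.append(f"{channel}: missing or empty '{field}'")
    return problems

# The classic mistake: `bot_token` instead of `token`.
parsed = {"channels": {"telegram": {"bot_token": "123:abc"},
                       "qq": {"app_secret": "s3cret"}}}
print(find_credential_problems(parsed))  # ["telegram: missing or empty 'token'"]
```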
Part 9: Complete Configuration Reference
9.1 config.json
```json
{
  "session": {
    "dm_scope": "per-channel-peer"
  },
  "version": 1,
  "agents": {
    "defaults": {
      "workspace": "/home/raspberry/.picoclaw/workspace",
      "restrict_to_workspace": false,
      "allow_read_outside_workspace": false,
      "provider": "",
      "model_name": "kimi-k2.5",
      "max_tokens": 8192,
      "max_tool_iterations": 100,
      "summarize_message_threshold": 0,
      "summarize_token_percent": 0,
      "steering_mode": "one-at-a-time",
      "subturn": {
        "max_depth": 0,
        "max_concurrent": 0,
        "default_timeout_minutes": 0,
        "default_token_budget": 0,
        "concurrency_timeout_sec": 0
      },
      "tool_feedback": {
        "enabled": false,
        "max_args_length": 300
      }
    }
  },
  "model_list": [
    {
      "model_name": "kimi-k2.5",
      "model": "moonshot/kimi-k2.5",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "qwen3.5-plus",
      "model": "qwen/qwen3.5-plus",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "qwen3-max",
      "model": "qwen/qwen3-max-2026-01-23",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "qwen3-coder-plus",
      "model": "qwen/qwen3-coder-plus",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "glm-5",
      "model": "zhipu/glm-5",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "glm-4.7",
      "model": "zhipu/glm-4.7",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "minimax-m2.5",
      "model": "minimax/MiniMax-M2.5",
      "api_base": "https://coding.dashscope.aliyuncs.com/v1"
    },
    {
      "model_name": "nvidia-kimi-k2.5",
      "model": "nvidia/moonshotai/kimi-k2.5"
    }
  ],
  "channels": {
    "telegram": {
      "enabled": true,
      "base_url": "",
      "proxy": "http://192.168.10.105:7890",
      "allow_from": null,
      "group_trigger": {},
      "typing": {
        "enabled": true
      },
      "placeholder": {
        "enabled": false,
        "text": "Thinking... 💭"
      },
      "streaming": {
        "enabled": true,
        "throttle_seconds": 3,
        "min_growth_chars": 200
      },
      "reasoning_channel_id": "",
      "use_markdown_v2": true
    },
    "qq": {
      "enabled": true,
      "app_id": "1234567890",
      "allow_from": null,
      "group_trigger": {},
      "max_message_length": 0,
      "max_base64_file_size_mib": 0,
      "send_markdown": false,
      "reasoning_channel_id": "",
      "app_secret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }
  },
  "gateway": {
    "host": "0.0.0.0",
    "port": 18790,
    "hot_reload": false,
    "log_level": "fatal"
  },
  "tools": {
    "exec": {
      "enabled": true,
      "enable_deny_patterns": true,
      "allow_remote": true,
      "custom_deny_patterns": null,
      "custom_allow_patterns": null,
      "timeout_seconds": 60
    },
    "skills": {
      "enabled": true
    },
    "web": {
      "enabled": true
    },
    "cron": {
      "enabled": true
    },
    "spawn": {
      "enabled": true
    },
    "subagent": {
      "enabled": true
    }
  }
}
```

9.2 .security.yml
```yaml
model_list:
  glm-4.7:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  glm-5:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  kimi-k2.5:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  minimax-m2.5:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  nvidia-kimi-k2.5:0:
    api_keys:
      - nvapi-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your NVIDIA API key
  qwen3-coder-plus:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  qwen3-max:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
  qwen3.5-plus:0:
    api_keys:
      - sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your Bailian API key
channels:
  telegram:
    token: "1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmno" # Your Telegram Bot Token
  qq:
    app_secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # Your QQ App Secret
web: {}
skills: {}
```

Part 10: Usage Tips
10.1 Direct Command Line Chat
```bash
# Single query
./picoclaw agent -m "Hello" --model kimi-k2.5

# Interactive chat
./picoclaw agent
```

10.2 Install Skill
```bash
# Search for a skill
./picoclaw skills search "web scraping"

# Install a skill
./picoclaw skills install <skill-name>

# Manual clone install
cd ~/.picoclaw/workspace
git clone https://github.com/example/skill.git
```

10.3 Check Status
```bash
./picoclaw status
./picoclaw version
./picoclaw model
```

Part 11: Summary
Through this tutorial, you have completed:
- ✅ PicoClaw installation on Raspberry Pi
- ✅ 8 LLM model configurations
- ✅ Telegram channel configuration (with proxy)
- ✅ QQ channel configuration
- ✅ Git and Node.js environment installation
- ✅ Enabled exec tool for shell command support
- ✅ Resolved tool iteration limit issues
- ✅ Fixed Telegram display format issues
Now you can interact with PicoClaw through Telegram and QQ to perform various AI tasks!
⚠️ Note: All API keys, tokens, and secrets in this tutorial are placeholders. Please replace them with your own real credentials.