OpenClaw Unrestricted Exec Permissions in Production — How to Fix (2026)
Running OpenClaw with unrestricted exec permissions is dangerous in production. Learn how to configure sandboxing and restrict code execution.
The Risk of Unrestricted Exec
OpenClaw's code execution feature lets your AI agent run commands and scripts. In development, unrestricted access is convenient. In production, it's a critical security vulnerability — a prompt injection attack could give an attacker full shell access to your server.
How to Restrict Exec Permissions
1. Disable Exec Entirely (Safest)
{
"exec": {
"enabled": false
}
}2. Use Sandboxed Mode
{
"exec": {
"enabled": true,
"mode": "sandboxed",
"timeout": 30000,
"allowedCommands": [
"node",
"python3",
"curl"
],
"blockedCommands": [
"rm",
"sudo",
"chmod",
"chown",
"dd",
"mkfs"
],
"maxOutputSize": "1MB",
"workDir": "/tmp/openclaw-sandbox"
}
}3. Docker Isolation (Recommended)
Run OpenClaw in a Docker container with restricted capabilities:
services:
openclaw:
image: ghcr.io/openclaw/openclaw:latest
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
read_only: true
tmpfs:
- /tmp:size=100M
volumes:
- ./data:/app/data4. Run as Non-Root User
# Create a dedicated user
useradd -r -s /bin/false openclaw
# Set permissions
chown -R openclaw:openclaw /opt/openclaw/data
# Run as the dedicated user
su -s /bin/sh openclaw -c "openclaw start"Frequently Asked Questions
What are exec permissions in OpenClaw?
Exec permissions control whether the AI agent can execute system commands, run scripts, or access the filesystem. When unrestricted, the agent can run any command on your server, which is a major security risk in production.
What is the safest exec configuration?
The safest configuration is "exec": "none" which completely disables code execution. If your agent needs to run code, use "exec": "sandboxed" which runs commands in an isolated environment with limited filesystem access.
Can users exploit unrestricted exec to access my server?
Yes. Through prompt injection attacks, a malicious user could potentially trick the AI agent into executing harmful commands like deleting files, exfiltrating data, or installing malware on your server.
How do I test if my exec permissions are properly restricted?
Try asking your agent to run a command like "ls /etc/passwd" or "cat /proc/cpuinfo". If it succeeds in an unrestricted way, your permissions need tightening.