
Configuration

Configure SuperClaw for your environment and workflows.


Initialize Configuration

Create a configuration file with default settings:

superclaw init

This creates ~/.superclaw/config.yaml.


Configuration File

Full Example

# ~/.superclaw/config.yaml

# Default target for attacks
default_target: "ws://127.0.0.1:18789"

# LLM settings for Bloom scenario generation
llm:
  provider: "anthropic"  # openai, anthropic, google, ollama
  model: "claude-sonnet-4"

# Logging configuration
logging:
  level: "INFO"           # DEBUG, INFO, WARNING, ERROR
  file: "~/.superclaw/superclaw.log"

# Safety settings (guardrails)
safety:
  require_authorization: true   # Require token for remote targets
  local_only: true              # Block non-localhost targets
  max_concurrent_attacks: 5     # Limit parallel attack threads

Minimal Example

# ~/.superclaw/config.yaml
default_target: "ws://127.0.0.1:18789"

Environment Variables

Environment variables override config file settings:

Variable                Description                                Example
SUPERCLAW_TARGET        Default target URL                         ws://127.0.0.1:18789
SUPERCLAW_AUTH_TOKEN    Authorization token for remote targets     your-secret-token
                        (provided by the target system's admin)
SUPERCLAW_LLM_PROVIDER  LLM provider for scenario generation       openai, anthropic
SUPERCLAW_LOG_LEVEL     Logging verbosity                          DEBUG, INFO
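The override behavior can be sketched as a small lookup helper. The variable and key names come from this page's tables; the `resolve_setting` function itself is illustrative, not part of SuperClaw's API.

```python
import os

def resolve_setting(env_var: str, config: dict, key: str, default=None):
    """Return a setting value, letting an environment variable
    override the value loaded from config.yaml."""
    env_value = os.environ.get(env_var)
    if env_value is not None:
        return env_value
    return config.get(key, default)

# config.yaml supplied a default target, but the environment
# variable takes precedence when it is set.
config = {"default_target": "ws://127.0.0.1:18789"}
os.environ["SUPERCLAW_TARGET"] = "ws://localhost:18789"
print(resolve_setting("SUPERCLAW_TARGET", config, "default_target"))
# ws://localhost:18789
```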

LLM Provider API Keys

For scenario generation with Bloom:

Provider    Environment Variable
OpenAI      OPENAI_API_KEY
Anthropic   ANTHROPIC_API_KEY
Google      GOOGLE_API_KEY

LLM Provider Setup

OpenAI

export OPENAI_API_KEY="sk-..."

# Generate scenarios
superclaw generate scenarios --behavior prompt_injection

# CodeOptiX evaluation
superclaw codeoptix evaluate --llm-provider openai

Anthropic

export ANTHROPIC_API_KEY="sk-ant-..."

# Generate scenarios
superclaw generate scenarios --behavior prompt_injection

# CodeOptiX evaluation
superclaw codeoptix evaluate --llm-provider anthropic

Ollama (Local)

# Start Ollama with a model
ollama run llama3

# Configure in config.yaml
# llm:
#   provider: "ollama"
#   model: "llama3"

Safety Settings

SuperClaw includes guardrails to prevent misuse.

Local-Only Mode (Default: Enabled)

When enabled, only localhost targets are allowed:

safety:
  local_only: true

Allowed targets:

  - ws://127.0.0.1:18789
  - ws://localhost:18789

Blocked targets:

  - ws://remote-server.com:18789
  - ws://192.168.1.100:18789
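A guardrail like this amounts to a hostname check on the target URL. The sketch below is an illustrative approximation, not SuperClaw's actual implementation:

```python
from urllib.parse import urlparse

# Hostnames treated as local (illustrative list)
LOCAL_HOSTS = {"127.0.0.1", "localhost", "::1"}

def target_allowed(url: str, local_only: bool = True) -> bool:
    """Return True if the target URL passes local-only filtering."""
    if not local_only:
        return True
    host = urlparse(url).hostname
    return host in LOCAL_HOSTS

print(target_allowed("ws://127.0.0.1:18789"))      # True
print(target_allowed("ws://192.168.1.100:18789"))  # False
```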

Authorization for Remote Targets

To test remote targets, disable local_only and set an auth token:

safety:
  local_only: false
  require_authorization: true

export SUPERCLAW_AUTH_TOKEN="your-secret-token"
superclaw attack openclaw --target ws://remote-server.com:18789

Token Management

SuperClaw does not issue or manage SUPERCLAW_AUTH_TOKEN. This token is an authentication credential that must be generated by and obtained from the remote system you are testing. Ensure you handle this credential securely.

Remote Testing

Only test remote systems with explicit written authorization from the system owner.


Per-Command Configuration

Override config settings via CLI flags:

# Override target
superclaw attack openclaw --target ws://custom:18789

# Override behaviors
superclaw attack openclaw --behaviors prompt-injection-resistance,sandbox-isolation

# Override output format
superclaw audit openclaw --report-format sarif --output results

Scan Configuration

Check your configuration for security issues:

superclaw scan config

This checks for:

Issue               Severity  Description
Public targets      HIGH      Non-localhost default targets
Insecure WebSocket  MEDIUM    Using ws:// instead of wss://
Missing auth        HIGH      No authorization for remote targets
Weak logging        LOW       Debug logging in production
Missing LLM config  INFO      No LLM provider for Bloom
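Two of these checks can be sketched directly from a target URL. The `scan_target` helper below is hypothetical, shown only to make the "public target" and "insecure WebSocket" rules concrete:

```python
from urllib.parse import urlparse

LOCAL_HOSTS = ("127.0.0.1", "localhost", "::1")

def scan_target(url: str) -> list:
    """Return (severity, message) findings for a configured target."""
    findings = []
    parsed = urlparse(url)
    if parsed.hostname not in LOCAL_HOSTS:
        findings.append(("HIGH", "Public target: non-localhost default target"))
    if parsed.scheme == "ws":
        findings.append(("MEDIUM", "Insecure WebSocket: ws:// instead of wss://"))
    return findings

for severity, message in scan_target("ws://remote-server.com:18789"):
    print(severity, message)
```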

Configuration Precedence

Settings are applied in this order (later overrides earlier):

  1. Default values: built-in defaults
  2. Config file: ~/.superclaw/config.yaml
  3. Environment variables: SUPERCLAW_*
  4. CLI flags: --target, --behaviors, etc.
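The precedence chain can be sketched as a layered merge. The `resolve` helper and its signature are hypothetical, not SuperClaw's API; it only illustrates the ordering above:

```python
import os

def resolve(cli_flags: dict, config: dict, defaults: dict, env_map: dict) -> dict:
    """Merge settings so later layers override earlier ones:
    defaults -> config file -> environment variables -> CLI flags."""
    settings = dict(defaults)          # 1. built-in defaults
    settings.update(config)            # 2. config file
    for key, env_var in env_map.items():
        if env_var in os.environ:      # 3. environment variables
            settings[key] = os.environ[env_var]
    settings.update(cli_flags)         # 4. CLI flags win
    return settings

defaults = {"default_target": "ws://127.0.0.1:18789", "log_level": "INFO"}
config = {"log_level": "WARNING"}
os.environ["SUPERCLAW_LOG_LEVEL"] = "DEBUG"
env_map = {"log_level": "SUPERCLAW_LOG_LEVEL"}
cli_flags = {"default_target": "ws://custom:18789"}
print(resolve(cli_flags, config, defaults, env_map))
# {'default_target': 'ws://custom:18789', 'log_level': 'DEBUG'}
```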

Next Steps