Configuration Guide¶
Complete guide to configuring CodeOptiX.
Configuration Methods¶
CodeOptiX can be configured via:

1. Configuration files (YAML/JSON)
2. Environment variables
3. Python code
4. Command-line arguments
Configuration File Format¶
YAML Configuration¶
Create codeoptix.yaml:
adapter:
  llm_config:
    provider: openai
    model: gpt-5.2
    api_key: ${OPENAI_API_KEY}

evaluation:
  scenario_generator:
    num_scenarios: 3
    use_bloom: true
    use_full_bloom: true
  static_analysis:
    bandit: true
  test_runner:
    coverage: true

evolution:
  max_iterations: 3
  population_size: 3
  minibatch_size: 2
  proposer:
    use_gepa: true
    model: gpt-5.2
JSON Configuration¶
Create codeoptix.json:
{
  "adapter": {
    "llm_config": {
      "provider": "openai",
      "model": "gpt-5.2",
      "api_key": "${OPENAI_API_KEY}"
    }
  },
  "evaluation": {
    "scenario_generator": {
      "num_scenarios": 3,
      "use_bloom": true
    }
  }
}
Adapter Configuration¶
LLM Configuration¶
adapter:
  llm_config:
    provider: openai   # or "anthropic", "google"
    model: gpt-5.2
    api_key: ${OPENAI_API_KEY}
    prompt: "You are a helpful coding assistant."
Supported Providers¶
- openai: OpenAI GPT models
- anthropic: Anthropic Claude models
- google: Google Gemini models
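To target a different provider, change provider, model, and the API key reference. A minimal sketch for Anthropic; the model id and the ANTHROPIC_API_KEY variable name are illustrative assumptions:

adapter:
  llm_config:
    provider: anthropic
    model: claude-sonnet-4-5        # example model id, substitute your own
    api_key: ${ANTHROPIC_API_KEY}   # assumed variable name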
Evaluation Configuration¶
Scenario Generator¶
evaluation:
  scenario_generator:
    num_scenarios: 3      # Number of scenarios per behavior
    use_bloom: true       # Use Bloom-style generation
    use_full_bloom: true  # Full Bloom integration
    num_variations: 2     # Variations per scenario
    model: gpt-5.2        # LLM model for generation
Static Analysis¶
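Static analysis is configured under evaluation.static_analysis; the only key shown in the full example above is the Bandit toggle:

evaluation:
  static_analysis:
    bandit: true   # enable Bandit security linting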
Test Runner¶
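The test runner lives under evaluation.test_runner; the full example above shows only the coverage toggle:

evaluation:
  test_runner:
    coverage: true   # collect coverage while tests run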
LLM Evaluator¶
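This guide does not list the LLM evaluator's keys. The sketch below is purely hypothetical: both the llm_evaluator key and its model field are assumptions made by analogy with the other evaluation blocks; check the Python API Guide for the actual options.

evaluation:
  llm_evaluator:     # hypothetical key name
    model: gpt-5.2   # assumed: evaluator model, mirroring the other blocks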
Evolution Configuration¶
Evolution Parameters¶
evolution:
  max_iterations: 3            # Maximum iterations
  population_size: 3           # Candidates per iteration
  minibatch_size: 2            # Scenarios per evaluation
  improvement_threshold: 0.05  # Minimum improvement
Proposer Configuration¶
evolution:
  proposer:
    use_gepa: true     # Use GEPA for proposal
    model: gpt-5.2     # LLM model
    temperature: 0.7   # Generation temperature
Behavior Configuration¶
Behavior-Specific Config¶
behaviors:
  insecure-code:
    severity: high
    enabled: true
    strict_mode: true
  vacuous-tests:
    severity: medium
    enabled: true
Environment Variables¶
API Keys¶
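Set the key for your provider in the environment and reference it from the configuration file as shown above. OPENAI_API_KEY appears throughout this guide; the Anthropic and Google variable names below are assumptions:

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"   # assumed name
export GOOGLE_API_KEY="your-google-key"         # assumed name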
Configuration Override¶
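Individual settings can also be overridden from the environment. Per the precedence rules below, such overrides beat built-in defaults but lose to values set in a configuration file or on the command line. This guide does not list the individual override variable names.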
Python Configuration¶
Programmatic Configuration¶
# adapter and llm_client are constructed as described in the Python API Guide.
# Keys set here override the corresponding defaults; anything omitted keeps its
# default value (see Configuration Precedence below).
config = {
    "evaluation": {
        "scenario_generator": {
            "num_scenarios": 5
        }
    }
}

engine = EvaluationEngine(adapter, llm_client, config=config)
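To reuse a file-based configuration from Python, load it and pass the resulting dict the same way. A minimal sketch, assuming PyYAML is installed and that EvaluationEngine accepts the same nested structure shown above:

import yaml

# Load the structure defined in codeoptix.yaml.
with open("codeoptix.yaml") as f:
    config = yaml.safe_load(f)

# Note: placeholders such as ${OPENAI_API_KEY} are read as literal strings here;
# expand them yourself if CodeOptiX does not do so.
engine = EvaluationEngine(adapter, llm_client, config=config)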
Configuration Precedence¶
1. Command-line arguments (highest priority)
2. Configuration file
3. Environment variables
4. Default values (lowest priority)
Best Practices¶
1. Use Configuration Files¶
Store configuration in a version-controlled file such as codeoptix.yaml so runs are reproducible across machines and in CI.
2. Use Environment Variables for Secrets¶
Never commit API keys. Keep them in environment variables (see API Keys above) and reference them from the configuration file with the ${OPENAI_API_KEY} syntax.
3. Version Control Configurations¶
Commit configuration files, but keep secrets out of them by referencing environment variables.
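For instance, a committed codeoptix.yaml can reference the key shown earlier rather than embedding it:

# codeoptix.yaml (safe to commit; the key itself lives only in the environment)
adapter:
  llm_config:
    api_key: ${OPENAI_API_KEY}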
Next Steps¶
- Python API Guide - Use configuration in Python
- CLI Usage Guide - Command-line configuration
- GitHub Actions Guide - CI/CD configuration