
SuperOpt

BETA This documentation is actively evolving. Features and APIs may change.

Agentic Environment Optimization for Autonomous AI Agents


Built by Superagentic AI

🌟 What is Agent Environment Optimization?

❌ Traditional AI Optimization

  • Model retraining is expensive and slow
  • Limited by fixed training datasets
  • Prompt optimization alone is insufficient
  • No coordination between prompts, tools, and memory
  • Agents can't learn from their own failures

✅ Agent Environment Optimization

  • Optimize the entire agent environment as a unified system
  • Treat prompts, tools, retrieval, and memory as optimization targets
  • Automatic failure diagnosis and routing to appropriate optimizers
  • Continuous learning from execution traces
  • Stability guarantees prevent oscillation and ensure convergence

🏗️ Core Architecture

🎯

SuperController

Intelligent orchestrator that analyzes failures and routes optimization tasks

📝

SuperPrompt

Evolutionary prompt optimization using reflective mutation techniques

🔧

SuperReflexion

Tool schema repair and constraint generation for robust execution

🔍

SuperRAG

Retrieval system optimization and parameter tuning

🧠

SuperMem

Advanced memory management with conflict resolution
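The division of labor above can be sketched in plain Python. This is an illustrative sketch with hypothetical names (`classify_failure`, `OPTIMIZERS`, the trace dict keys), not the SuperOpt API: the controller inspects a trace, classifies the failure, and routes it to the component responsible for that part of the environment.

```python
def classify_failure(trace: dict) -> str:
    """Map an observed failure to the environment component that caused it."""
    if trace.get("tool_errors"):
        return "tools"        # schema/constraint problems -> SuperReflexion
    if trace.get("retrieved_docs") == []:
        return "retrieval"    # empty or irrelevant context -> SuperRAG
    if trace.get("contradicts_memory"):
        return "memory"       # stale or conflicting facts -> SuperMem
    return "prompt"           # otherwise, revise the instructions -> SuperPrompt

# Which optimizer handles each failure class
OPTIMIZERS = {
    "tools": "SuperReflexion",
    "retrieval": "SuperRAG",
    "memory": "SuperMem",
    "prompt": "SuperPrompt",
}

trace = {"tool_errors": [{"tool": "edit_file", "error": "line must be >= 1"}]}
print(OPTIMIZERS[classify_failure(trace)])  # routes to the tool optimizer
```

The point of the design is that each failure gets a targeted fix rather than a blanket prompt rewrite.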

⚡ Quick Start

SuperOpt automatically learns from your agent's failures and successes, continuously improving prompts, tools, and memory.

Try It Now: Copy and run the complete example below to see SuperOpt in action!

```bash
pip install superopt
```

```python
# Complete working example - copy this entire code block
from superopt import SuperOpt, AgenticEnvironment
from superopt.core.environment import PromptConfig, ToolSchema
from superopt.core.trace import ExecutionTrace, ToolCall

# Define your agent's environment
environment = AgenticEnvironment(
    prompts=PromptConfig(system_prompt="You are a helpful coding assistant."),
    tools={
        "edit_file": ToolSchema(
            name="edit_file",
            description="Edit a file at a specific line",
            arguments={"file": "str", "line": "int"},
        ),
    },
)

# Create the learning optimizer
optimizer = SuperOpt(environment)

# Simulate a failure (your agent tried to edit line 0)
trace = ExecutionTrace(
    task_description="Edit line 0 in test.py",
    success=False,
)
trace.tool_errors.append(ToolCall(
    tool_name="edit_file",
    arguments={"file": "test.py", "line": 0},
    error_message="Line numbers must be 1-indexed",
))

print("Before SuperOpt:")
print(optimizer.environment.tools["edit_file"].description)
print()

# SuperOpt learns and fixes the problem automatically!
optimizer.step(trace)

print("After SuperOpt learned from the failure:")
print(optimizer.environment.tools["edit_file"].description)
```

Save this as test_superopt.py and run python test_superopt.py to see SuperOpt automatically fix the tool schema!

Real-World Applications: This same approach scales to production agents handling customer support, code generation, data analysis, API integrations, and complex workflows. Every user interaction becomes a learning opportunity!

🎯 Key Benefits of Agent Environment Optimization

🏗️

Unified Environment Optimization

Optimize prompts, tools, retrieval, and memory as a coordinated system, not isolated components.

🎯

Intelligent Failure Diagnosis

Automatically classify failures and route them to the appropriate optimizer for precise fixes.

🔄

Continuous Self-Improvement

Agents learn and adapt from every interaction using execution traces as supervision signals.

⚡

No Model Retraining Required

All improvements happen at the environment level, enabling fast iteration without expensive training.

🛡️

Stability Guarantees

Hierarchy of mutability prevents destructive updates and ensures reliable convergence.

🔌

Framework Agnostic

Works with any AI agent framework through modular adapters and standardized interfaces.
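The "hierarchy of mutability" behind the stability guarantee can be illustrated with a small sketch. Everything here is hypothetical (the tier names, `allowed_tier`, and the `escalate_after` threshold are not the SuperOpt API); it only shows the principle: cheap, easily reversible updates are tried first, and more invasive ones are unlocked only after repeated failures at the lower tiers.

```python
# Tiers ordered from least to most invasive to mutate
MUTABILITY_TIERS = [
    "tool_descriptions",    # tier 0: cheapest, fully reversible
    "prompt_instructions",  # tier 1: revise wording, keep intent
    "retrieval_params",     # tier 2: re-tune k, chunking, filters
    "memory_contents",      # tier 3: most invasive, gated hardest
]

def allowed_tier(consecutive_failures: int, escalate_after: int = 3) -> int:
    """Unlock one deeper tier per `escalate_after` consecutive failures."""
    tier = consecutive_failures // escalate_after
    return min(tier, len(MUTABILITY_TIERS) - 1)

for failures in (0, 3, 7, 99):
    print(failures, MUTABILITY_TIERS[allowed_tier(failures)])
```

Gating destructive updates this way is what prevents the optimizer from oscillating: a single bad trace can tweak a tool description, but it cannot rewrite memory.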

📚 Learn More