📗 SuperSpec DSL Complete Reference
A universal agent specification language for all six supported frameworks
📋 Complete SuperSpec Schema
Top-Level Structure
apiVersion: agent/v1 # REQUIRED - Schema version
kind: AgentSpec # REQUIRED - Object type
metadata: # REQUIRED - Agent identity
spec: # REQUIRED - Agent specification
🏷️ Metadata Section
The metadata section defines the agent's identity and basic properties.
| Field | Required | Type | Description |
|---|---|---|---|
| name | ✅ Yes | string | Human-readable agent name |
| id | ✅ Yes | string | Unique identifier (a-z, 0-9, -, _) |
| version | ✅ Yes | string | Semantic versioning (e.g., "1.0.0") |
| namespace | Optional | string | Logical grouping namespace |
| level | Optional | oracles \| genies | Agent tier level (default: oracles) |
| description | Optional | string | Brief agent description |
| tags | Optional | list | Keyword tags (see example below) |
Example Metadata
metadata:
  name: sentiment_analyzer
  id: sentiment_analyzer
  namespace: testing
  version: 1.0.0
  level: oracles
  description: Analyzes sentiment of text input
  tags:
    - nlp
    - sentiment-analysis
    - text-processing
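SuperSpec ships its own validation, but the constraints in the table above are easy to check by hand. The sketch below uses PyYAML and is illustrative only; the helper name check_metadata and the exact error messages are assumptions, not part of the SuperSpec tooling.

```python
import re
import yaml  # PyYAML; illustrative only, SuperSpec's own validator is not shown here

ID_PATTERN = re.compile(r"^[a-z0-9_-]+$")        # allowed: a-z, 0-9, -, _
SEMVER_PATTERN = re.compile(r"^\d+\.\d+\.\d+$")  # e.g. "1.0.0"

def check_metadata(path: str) -> None:
    """Check the metadata constraints from the table above on a YAML spec file."""
    with open(path) as fh:
        doc = yaml.safe_load(fh)
    meta = doc.get("metadata", {})
    for field in ("name", "id", "version"):      # required fields
        if field not in meta:
            raise ValueError(f"metadata.{field} is required")
    if not ID_PATTERN.match(meta["id"]):
        raise ValueError("metadata.id may only use a-z, 0-9, '-' and '_'")
    if not SEMVER_PATTERN.match(str(meta["version"])):
        raise ValueError("metadata.version must be semantic, e.g. 1.0.0")
    if meta.get("level", "oracles") not in ("oracles", "genies"):
        raise ValueError("metadata.level must be 'oracles' or 'genies'")
```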
🎯 Spec Section
The spec section contains all agent configuration.
Required Fields
| Field | Description |
|---|---|
| target_framework | Framework choice: dspy, openai, crewai, google-adk, microsoft, deepagents |
| language_model | LLM configuration (provider, model, API settings) |
| input_fields | List of input field specifications |
| output_fields | List of output field specifications |
| feature_specifications | BDD scenarios for evaluation (universal across all frameworks) |
🤖 Language Model Configuration
Configure your LLM for any provider.
Ollama (Local, Recommended)
spec:
  language_model:
    provider: ollama
    model: llama3
    api_base: http://localhost:11434
    temperature: 0.7
    max_tokens: 1000
OpenAI
spec:
  language_model:
    provider: openai
    model: gpt-4o-mini
    api_key: ${OPENAI_API_KEY}
    temperature: 0.7
    max_tokens: 1000
Google Gemini
spec:
  language_model:
    provider: google
    model: gemini-2.0-flash
    api_key: ${GOOGLE_API_KEY}
    temperature: 0.7
Azure OpenAI
spec:
  language_model:
    provider: azure
    model: gpt-4
    api_key: ${AZURE_OPENAI_API_KEY}
    api_base: ${AZURE_OPENAI_ENDPOINT}
    api_version: "2024-02-15-preview"
MLX (Apple Silicon)
spec:
  language_model:
    provider: mlx
    model: mlx-community/Llama-3-8B-Instruct-4bit
    temperature: 0.7
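The provider examples above use ${OPENAI_API_KEY}-style placeholders. How SuperSpec resolves them is not spelled out here; a common approach is to expand them from the process environment when the YAML is loaded, as in this hedged sketch (the helper name expand_env is an assumption):

```python
import os
import re

_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value):
    """Recursively replace ${VAR} placeholders with values from the environment.

    Unset variables are left as-is so missing keys are easy to spot.
    """
    if isinstance(value, str):
        return _PLACEHOLDER.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)
    if isinstance(value, dict):
        return {k: expand_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_env(v) for v in value]
    return value
```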
👤 Persona Configuration
Define your agent's personality and approach.
| Field | Description |
|---|---|
| role | Agent's role (e.g., "AI Research Assistant") |
| goal | Agent's primary objective |
| backstory | Agent's background context (multi-line supported) |
| instructions | Direct instructions for the agent (OpenAI SDK, Google ADK, Microsoft style) |
| reasoning.steps | List of reasoning steps the agent should follow |
Example Persona (DSPy/CrewAI Style)
spec:
  persona:
    role: AI Research Assistant
    goal: Help researchers find and analyze academic papers
    backstory: |
      You are an experienced research assistant with expertise in
      literature review and academic analysis. You help researchers
      discover relevant papers and extract key insights.
    reasoning:
      steps:
        - Understand the research question
        - Search for relevant papers
        - Analyze key findings
        - Summarize insights
Example Persona (OpenAI SDK/Google ADK/Microsoft Style)
spec:
  persona:
    instructions: |
      You are a helpful AI assistant that provides accurate,
      well-researched responses to user queries.

      Your approach:
      1. Understand the user's question
      2. Gather relevant information
      3. Formulate a clear response
      4. Provide helpful insights
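Each target framework consumes the persona differently: CrewAI keeps role, goal, and backstory as separate fields, while the instruction-style frameworks take a single prompt. As a rough illustration only, and not SuperSpec's actual mapping, the sketch below collapses either style into one system prompt; the helper name persona_to_prompt is an assumption.

```python
def persona_to_prompt(persona: dict) -> str:
    """Collapse either persona style into a single system prompt (illustrative only;
    each target framework has its own native fields for role/goal/instructions)."""
    if "instructions" in persona:                    # OpenAI SDK / Google ADK / Microsoft style
        return persona["instructions"].strip()
    parts = [                                        # DSPy / CrewAI style
        f"Role: {persona.get('role', '')}",
        f"Goal: {persona.get('goal', '')}",
        persona.get("backstory", "").strip(),
    ]
    steps = persona.get("reasoning", {}).get("steps", [])
    if steps:
        numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
        parts.append("Follow these steps:\n" + numbered)
    return "\n\n".join(p for p in parts if p)
```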
📥 Input & Output Fields
Define the agent's interface with strongly typed fields.
Input Fields
spec:
  input_fields:
    - name: query
      type: str
      description: User's question or request
      required: true
    - name: context
      type: str
      description: Additional context for the query
      required: false
      default: ""
Output Fields
spec:
  output_fields:
    - name: response
      type: str
      description: Agent's response to the query
    - name: confidence
      type: float
      description: Confidence score (0.0-1.0)
      default: 0.0
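One way to picture what these field specs become at runtime is a typed model. The sketch below builds a Pydantic model from a field list; Pydantic and the helper name build_model are assumptions for illustration, not necessarily what the generated agents use.

```python
from pydantic import create_model  # illustrative; SuperSpec may use a different model layer

# Maps SuperSpec type names to Python types (see the Field Type Reference below).
TYPE_MAP = {"str": str, "int": int, "float": float, "bool": bool, "list": list}

def build_model(name: str, fields: list):
    """Turn a list of field specs into a typed Pydantic model."""
    kwargs = {}
    for f in fields:
        py_type = TYPE_MAP[f.get("type", "str")]
        # Required fields get no default (Ellipsis); optional ones fall back to None.
        default = f.get("default", ... if f.get("required", True) else None)
        kwargs[f["name"]] = (py_type, default)
    return create_model(name, **kwargs)

# e.g. Inputs = build_model("Inputs", spec["input_fields"]); Inputs(query="hi")
```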
🎯 Tasks Configuration (CrewAI/DeepAgents)
Define tasks for multi-agent or complex workflow frameworks.
spec:
  tasks:
    - name: research_task
      description: |
        Research the given topic and gather relevant information
        from multiple sources.
      expected_output: |
        A comprehensive summary with key findings, citations,
        and recommendations for further reading.
      agent: researcher # Optional: assign to specific agent
🔌 Tools Configuration
Add tools for your agent to use.
Built-in Tools
spec:
  tools:
    - name: web_search
      type: builtin
      enabled: true
      config:
        max_results: 5
    - name: calculator
      type: builtin
      enabled: true
Custom Tools
spec:
  tools:
    - name: custom_api
      type: custom
      function: my_module.my_function
      description: Calls custom API endpoint
      parameters:
        endpoint: https://api.example.com
        api_key: ${API_KEY}
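For custom tools, the function key names a dotted import path. Below is a hedged sketch of how such a path is typically resolved to a callable; the helper name resolve_tool is illustrative and not part of the SuperSpec API.

```python
from importlib import import_module

def resolve_tool(dotted_path: str):
    """Resolve a dotted path like 'my_module.my_function' to the callable it names."""
    module_name, _, attr = dotted_path.rpartition(".")
    return getattr(import_module(module_name), attr)

# e.g. fn = resolve_tool(tool_spec["function"]); fn(**tool_spec.get("parameters", {}))
```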
🧠 Memory Configuration
Configure short-term and long-term memory.
spec:
  memory:
    enabled: true
    backend: sqlite
    short_term:
      max_size: 100
      retention_policy: lru
    long_term:
      enabled: true
      semantic_search: true
      embedding_model: sentence-transformers/all-MiniLM-L6-v2
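For intuition, max_size plus retention_policy: lru amounts to evicting the least recently used entries once the store is full. The class below is a minimal in-memory sketch of that policy; the real short-term store sits on the configured backend (e.g. sqlite), and the class name and methods here are assumptions.

```python
from collections import OrderedDict

class ShortTermMemory:
    """Minimal sketch of max_size + lru retention; real backends (e.g. sqlite) differ."""

    def __init__(self, max_size: int = 100):
        self.max_size = max_size
        self._items = OrderedDict()

    def put(self, key: str, value: str) -> None:
        self._items[key] = value
        self._items.move_to_end(key)            # mark as most recently used
        while len(self._items) > self.max_size:
            self._items.popitem(last=False)     # evict least recently used

    def get(self, key: str):
        if key in self._items:
            self._items.move_to_end(key)
        return self._items.get(key)
```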
📚 RAG Configuration
Enable Retrieval-Augmented Generation.
Vector Database RAG
spec:
  rag:
    enabled: true
    vector_db:
      type: chromadb
      collection_name: my_knowledge
      persist_directory: ./data/chroma
    config:
      top_k: 5
      chunk_size: 512
      chunk_overlap: 50
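chunk_size and chunk_overlap control how documents are split before indexing. The sketch below shows character-based splitting under those two settings; the real pipeline may chunk by tokens, and the helper name chunk_text is an assumption.

```python
def chunk_text(text: str, chunk_size: int = 512, chunk_overlap: int = 50) -> list:
    """Split text into overlapping chunks (character-based sketch; real pipelines may use tokens)."""
    step = max(chunk_size - chunk_overlap, 1)   # advance by chunk_size minus the overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)] or [""]
```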
MCP Protocol RAG
spec:
  rag:
    enabled: true
    mcp:
      enabled: true
      servers:
        - name: filesystem
          command: npx
          args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/docs"]
        - name: git
          command: npx
          args: ["-y", "@modelcontextprotocol/server-git", "--repository", "/path/to/repo"]
    config:
      top_k: 5
✅ Feature Specifications (BDD Scenarios)
Define behavior-driven test scenarios (universal across all frameworks).
spec:
  feature_specifications:
    scenarios:
      - name: Simple greeting
        input:
          query: "Hello, how are you?"
        expected_output:
          response: "I am doing well"
        expected_keywords:
          - hello
          - well
          - assistant
      - name: Domain-specific query
        input:
          query: "What is quantum entanglement?"
        expected_output:
          response: "quantum entanglement explanation"
        expected_keywords:
          - quantum
          - entanglement
          - particles
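One simple way such scenarios can be scored is keyword coverage: the fraction of expected_keywords that appear in the agent's answer. The function below is a hedged sketch of that idea, not the evaluator SuperSpec actually uses.

```python
def score_scenario(agent_response: str, scenario: dict) -> float:
    """Return the fraction of expected_keywords found in the response (case-insensitive)."""
    keywords = scenario.get("expected_keywords", [])
    if not keywords:
        return 1.0
    text = agent_response.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)
```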
🧬 Optimization Configuration
Configure the GEPA optimizer (universal across all frameworks).
spec:
  optimization:
    optimizer:
      name: GEPA
      params:
        metric: answer_exact_match
        auto: medium # light, medium, intensive
        reflection_lm: qwen3:8b
        reflection_minibatch_size: 3
        skip_perfect_score: true
        add_format_failure_as_feedback: true
Optimization Modes
| Mode | Iterations | Time | Best For |
|---|---|---|---|
| light | 3-5 | 5-10 min | Quick testing, rapid prototyping |
| medium | 10-15 | 15-30 min | Production agents, balanced quality/speed |
| intensive | 20-30 | 30-60 min | Critical agents, maximum performance |
🔧 Framework-Specific Sections
DSPy-Specific
spec:
  target_framework: dspy
  dspy:
    signature_type: ChainOfThought # or Predict, ReAct
    max_retries: 3
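For orientation, signature_type: ChainOfThought with the query/response fields from earlier examples roughly corresponds to a DSPy module like the one below. This is an illustrative sketch of the target, not the code SuperSpec generates; the model string and signature text are assumptions.

```python
import dspy

# Illustrative DSPy target: configure the LM, then build a ChainOfThought module
# over a "query -> response" signature (field names taken from the examples above).
lm = dspy.LM("ollama_chat/llama3", api_base="http://localhost:11434")
dspy.configure(lm=lm)

predictor = dspy.ChainOfThought("query -> response")
result = predictor(query="What is machine learning?")
print(result.response)
```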
CrewAI-Specific
spec:
  target_framework: crewai
  tasks:
    - name: research
      description: Conduct research on the topic
      expected_output: Detailed research summary
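The CrewAI target maps persona fields onto an Agent and task entries onto Task objects. Below is a hedged sketch of that shape; the exact field-to-constructor mapping is an assumption, and this is not the generated code.

```python
from crewai import Agent, Task, Crew

# Persona fields roughly become an Agent; each tasks entry becomes a Task.
researcher = Agent(
    role="AI Research Assistant",
    goal="Help researchers find and analyze academic papers",
    backstory="Experienced research assistant with expertise in literature review.",
)
research = Task(
    description="Conduct research on the topic",
    expected_output="Detailed research summary",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[research])
result = crew.kickoff()
```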
OpenAI SDK-Specific
spec:
  target_framework: openai
  persona:
    instructions: |
      You are a helpful assistant that provides accurate responses.
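With the OpenAI Agents SDK, persona.instructions maps naturally onto an Agent's instructions. A minimal sketch of that target follows; it is illustrative only, and the agent name and sample query are assumptions.

```python
from agents import Agent, Runner  # OpenAI Agents SDK

# persona.instructions becomes the agent's instructions string.
agent = Agent(
    name="research_assistant",
    instructions="You are a helpful assistant that provides accurate responses.",
)
result = Runner.run_sync(agent, "What is machine learning?")
print(result.final_output)
```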
📊 Complete Example: Multi-Framework Agent
Here's a complete SuperSpec that can be compiled to ANY framework:
apiVersion: agent/v1
kind: AgentSpec
metadata:
  name: research_assistant
  id: research_assistant
  namespace: research
  version: 1.0.0
  level: genies
  description: AI research assistant that helps find and analyze information
spec:
  target_framework: dspy # Change to: openai, crewai, google-adk, microsoft, deepagents
  language_model:
    provider: ollama
    model: llama3
    api_base: http://localhost:11434
    temperature: 0.7
  persona:
    role: AI Research Assistant
    goal: Help users find and analyze relevant information
    backstory: |
      You are an experienced research assistant with expertise in
      information retrieval and analysis.
    reasoning:
      steps:
        - Understand the research question
        - Search for relevant information
        - Analyze and synthesize findings
        - Present clear, actionable insights
  input_fields:
    - name: query
      type: str
      description: Research question or topic
  output_fields:
    - name: response
      type: str
      description: Research findings and analysis
  tools:
    - name: web_search
      type: builtin
      enabled: true
  rag:
    enabled: true
    vector_db:
      type: chromadb
      collection_name: research_docs
    config:
      top_k: 5
  memory:
    enabled: true
    short_term:
      max_size: 50
  feature_specifications:
    scenarios:
      - name: Basic research query
        input:
          query: "What is machine learning?"
        expected_output:
          response: "machine learning explanation"
        expected_keywords:
          - machine learning
          - algorithms
          - data
  optimization:
    optimizer:
      name: GEPA
      params:
        metric: answer_exact_match
        auto: medium
🎯 Field Type Reference
| Type | Python Type | Description |
|---|---|---|
| str | str | Text string |
| int | int | Integer number |
| float | float | Floating point number |
| bool | bool | Boolean (true/false) |
| list | List | Array of items |