# LogFire Integration with Pydantic AI

LogFire is an observability platform built by the Pydantic team that provides comprehensive tracing, logging, and metrics for your Pydantic AI agents.
## Overview

SuperOptiX includes native LogFire integration for Pydantic AI agents, allowing you to:

- Trace agent executions with full visibility into LLM calls
- Monitor tool usage (MCP tools, regular tools)
- Track token usage and costs automatically
- View conversation history with rich formatting
- Analyze performance metrics (latency, success rates)
- Debug issues with detailed span information

The integration is opt-in and degrades gracefully when LogFire is not configured.
## Installation

LogFire is available as a separate optional dependency to avoid conflicts with other frameworks.

### Basic Installation

```bash
# Install Pydantic AI support
pip install "superoptix[frameworks-pydantic-ai]"

# Install LogFire observability (separate, optional)
pip install "superoptix[logfire]"
```

Or install both together:

```bash
pip install "superoptix[frameworks-pydantic-ai,logfire]"
```
### Installation with the `all` Extra

**Important:** LogFire is NOT included in `superoptix[all]` due to dependency conflicts:

```bash
# This installs everything EXCEPT LogFire
pip install "superoptix[all]"

# LogFire must be installed separately if needed
pip install "superoptix[logfire]"

# WARNING: Installing both [all,logfire] will FAIL,
# because 'all' includes google-adk, which conflicts with LogFire
```
**Why LogFire is separate:**

- LogFire requires `opentelemetry-sdk>=1.39.0,<1.40.0`
- google-adk (included in `[all]`) requires `opentelemetry-sdk==1.37.0` (exact version)
- These versions are incompatible

**Safe combinations:**

- `superoptix[frameworks-pydantic-ai,logfire]` - Works
- `superoptix[frameworks-openai,logfire]` - Works
- `superoptix[all]` - Works (LogFire not included)
- `superoptix[all,logfire]` - Fails (google-adk conflict)
**What gets installed:**

- `pydantic-ai==1.31.0` (from `frameworks-pydantic-ai`)
- `logfire==4.15.0` (from the `logfire` extra)
## Configuration

### Playbook Configuration

**Where to configure LogFire in your playbook:** the `logfire` configuration MUST be placed directly under the `spec:` section, at the same level as other top-level configurations such as `language_model`, `persona`, and `tasks`.

Playbook structure:

```yaml
apiVersion: agent/v1
kind: AgentSpec
metadata:
  name: ...
spec:                 # <- LogFire goes here
  logfire:            # CORRECT LOCATION
    enabled: true
  language_model:     # <- Same level
    ...
  persona:            # <- Same level
    ...
```

**Important:** do NOT place it under `optimization`, `evaluation`, or other nested sections!
Enable LogFire in your agent playbook:

```yaml
apiVersion: agent/v1
kind: AgentSpec
metadata:
  name: Developer Assistant
  version: "1.0.0"
spec:
  # LogFire configuration - place it here in the spec section
  logfire:
    enabled: true  # Default: true (auto-detects if LogFire is available)

  # Other spec configurations (same level)
  language_model:
    provider: ollama
    model: llama3.1:8b
    api_base: http://localhost:11434

  persona:
    role: Software Developer
    goal: Write clean code

  # ... rest of your spec configuration
```
**Configuration options:**

- `enabled: true` - Enable LogFire instrumentation (default)
- `enabled: false` - Disable LogFire even if installed

**Important notes:**

- Place the `logfire` configuration directly under `spec:` (not under `optimization` or other sections)
- If the `logfire` section is omitted, it defaults to `enabled: true` (auto-detect)
- The configuration is read when the agent is initialized
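Since misplacing the `logfire` section is the most common configuration error, a quick sanity check on the parsed playbook can help. The sketch below is illustrative only (not a SuperOptiX API); it assumes the playbook YAML has already been parsed into a plain dict, e.g. with `yaml.safe_load`.

```python
def check_logfire_placement(playbook: dict) -> str:
    """Return a short verdict on where 'logfire' sits in a parsed playbook."""
    spec = playbook.get("spec") or {}
    if "logfire" in spec:
        return "ok: logfire is directly under spec"
    # Common mistake 1: logfire placed under metadata instead of spec
    if "logfire" in (playbook.get("metadata") or {}):
        return "wrong: logfire belongs under spec, not metadata"
    # Common mistake 2: logfire nested inside another spec section
    for section, value in spec.items():
        if isinstance(value, dict) and "logfire" in value:
            return f"wrong: logfire is nested under spec.{section}"
    return "note: no logfire section (defaults to enabled: true)"


playbook = {"spec": {"optimization": {"logfire": {"enabled": True}}}}
print(check_logfire_placement(playbook))
# wrong: logfire is nested under spec.optimization
```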
### Code Configuration

LogFire must be configured before the agent is initialized.

**Option 1: Cloud Dashboard (recommended for production)**

```bash
# Authenticate with LogFire (one-time setup)
logfire auth

# LogFire auto-configures after auth - no additional code needed!
```

**Option 2: Local Backend (for development)**

Configure LogFire to send traces to a local OTLP-compatible observability backend:

```python
import os

import logfire

# Set the OTLP endpoint for your local backend
os.environ['OTEL_EXPORTER_OTLP_TRACES_ENDPOINT'] = 'http://localhost:4318/v1/traces'

logfire.configure(
    service_name='my-superoptix-agent',
    send_to_logfire=False,  # Don't send to the cloud
)
```

**Note:** make sure your local backend supports OTLP HTTP/Protobuf encoding.
## Usage

### Basic Usage

1. Configure LogFire (if using the cloud dashboard):

   ```bash
   logfire auth
   ```

2. Run your agent:

   ```bash
   super agent run developer --goal "Write a Python function to validate emails"
   ```

3. Traces are captured automatically!
### Compile and Run Example

```bash
# Initialize project
super init my_project
cd my_project

# Pull agent
super agent pull developer

# Enable LogFire in the playbook (edit the playbook YAML)
# Add to the spec section:
# spec:
#   logfire:
#     enabled: true

# Authenticate with LogFire (one-time setup)
logfire auth

# Compile with Pydantic AI
super agent compile developer --framework pydantic-ai

# Run the agent (LogFire traces are captured automatically)
super agent run developer --goal "Implement a REST API endpoint"

# View traces at https://logfire.pydantic.dev
```
**Step-by-step playbook configuration:**

1. Open your agent's playbook: `swe/agents/developer/playbook/developer_playbook.yaml`
2. Add a `logfire` section under `spec:`:

```yaml
spec:
  logfire:          # <- Add this section
    enabled: true   # <- Enable LogFire
  language_model:   # <- Other configs at the same level
    ...
```
## Viewing Traces

### Option 1: LogFire Cloud Dashboard (Recommended)

1. Authenticate (if not done already):

   ```bash
   logfire auth
   ```

2. Run your agent:

   ```bash
   super agent run developer --goal "your task"
   ```

3. View traces:
   - Open https://logfire.pydantic.dev
   - Navigate to your project
   - Click the "Traces" or "Live" section
   - Search for your agent executions

**What you'll see:**

- Agent execution spans
- LLM conversation history
- Tool invocations (MCP tools, etc.)
- Performance metrics
- Token usage and costs
- Errors and exceptions
### Option 2: Other OTLP-Compatible Backends

LogFire uses OpenTelemetry, so you can export to any OTLP-compatible backend:

```python
import os

import logfire

# Set the OTLP endpoint for your preferred backend
os.environ['OTEL_EXPORTER_OTLP_TRACES_ENDPOINT'] = 'http://your-backend:4318/v1/traces'

logfire.configure(
    service_name='my-agent',
    send_to_logfire=False,
)
```

**Note:** make sure your OTLP-compatible backend supports HTTP/Protobuf encoding (not gRPC).
## What Gets Traced

When LogFire is enabled, the following are automatically captured:

### Agent Execution

- Agent initialization
- Input processing
- Output generation
- Execution duration

### LLM Interactions

- Model calls (requests/responses)
- Full conversation history
- Token usage
- Cost calculations
- Latency metrics

### Tool Usage

- MCP tool invocations
- Tool parameters and results
- Tool execution time
- Success/failure status

### Structured Output

- Validation events
- Field extraction
- Output formatting

### Errors

- Exception traces
- Error messages
- Stack traces
- Context information
## Advanced Configuration

### Custom Service Name

```python
import logfire

logfire.configure(
    service_name='my-custom-service-name',
    service_version='1.0.0',
    environment='production',
)
```

### Filtering and Sampling

LogFire exposes sampling and console filtering through dedicated option classes:

```python
import logfire

logfire.configure(
    sampling=logfire.SamplingOptions(head=0.5),  # Sample 50% of traces
    console=logfire.ConsoleOptions(min_log_level='info'),  # Only log info level and above
)
```

### Scrubbing Sensitive Data

```python
import logfire

logfire.configure(
    scrubbing=logfire.ScrubbingOptions(
        extra_patterns=[
            r'password=\w+',
            r'api_key=\w+',
        ]
    )
)
```
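To see what pattern-based scrubbing actually does to a payload, here is the same idea in plain `re`, independent of LogFire's API (the patterns mirror the example above):

```python
import re

# Illustrative only: mimics pattern-based scrubbing of a trace payload.
PATTERNS = [r"password=\w+", r"api_key=\w+"]


def scrub(text: str) -> str:
    """Replace every match of a sensitive pattern with a redaction marker."""
    for pattern in PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text


print(scrub("login with password=hunter2 and api_key=abc123"))
# login with [REDACTED] and [REDACTED]
```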
## Troubleshooting

### Traces Not Appearing

**Issue:** Traces don't show up in the LogFire dashboard.

**Solutions:**

1. Verify LogFire is authenticated: `logfire auth`
2. Check that LogFire is configured: it must be configured before agent initialization
3. Verify `logfire.enabled: true` in the playbook (or omit the section; it defaults to true)
4. Check network connectivity (for the cloud dashboard)

### ImportError: No module named 'logfire'

**Issue:** LogFire is not installed.

**Solution:**

```bash
pip install "superoptix[logfire]"
# OR
pip install logfire==4.15.0
```

### Instrumentation Not Working

**Issue:** The agent runs but LogFire doesn't capture traces.

**Solutions:**

1. Ensure LogFire is configured before agent initialization
2. Check that `logfire.enabled: true` is set in the playbook
3. Verify the agent was compiled after LogFire was configured
4. Re-compile the agent: `super agent compile developer --framework pydantic-ai`
### Graceful Fallback

If LogFire is not installed or not configured, the integration silently skips instrumentation, and your agent runs normally without errors.

This is intentional: LogFire is optional and won't break your workflow.
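The fallback pattern is easy to reproduce in your own code. This sketch is illustrative (not the exact SuperOptiX implementation): it probes for the optional package before touching it, so the absence of LogFire is a no-op rather than an error.

```python
import importlib.util


def logfire_available() -> bool:
    """True if the optional logfire package can be imported."""
    return importlib.util.find_spec("logfire") is not None


def maybe_instrument(service_name: str) -> bool:
    """Configure LogFire when present; silently no-op otherwise."""
    if not logfire_available():
        return False  # agent keeps running without instrumentation

    import logfire  # safe: we just verified the package exists

    logfire.configure(service_name=service_name, send_to_logfire=False)
    return True
```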
## Example Playbook

Complete example with LogFire enabled, showing the exact placement in the playbook:

```yaml
apiVersion: agent/v1
kind: AgentSpec
metadata:
  name: Developer Assistant
  version: "1.0.0"
spec:
  # LogFire Configuration - MUST be under spec: (same level as other configs)
  logfire:
    enabled: true  # Auto-detects if LogFire is available and configured

  # Model Configuration
  language_model:
    provider: ollama
    model: llama3.1:8b
    api_base: http://localhost:11434

  # Input/Output Fields
  input_fields:
    - name: feature_requirement
      type: string
      description: Description of feature to implement

  output_fields:
    - name: implementation
      type: string
      description: Code implementation

  # Persona Configuration
  persona:
    role: Software Developer
    goal: Write clean, efficient code

  # Tasks, evaluation, optimization, etc.
  tasks:
    - name: implement_feature
      instruction: Implement the requested feature

  # ... rest of your configuration
```
**Common mistakes to avoid:**

Wrong - LogFire under the wrong section:

```yaml
spec:
  optimization:
    logfire:        # WRONG - don't put it here
      enabled: true
```

Wrong - LogFire outside spec:

```yaml
metadata:
  logfire:          # WRONG - must be under spec:
    enabled: true
spec:
  language_model:
    ...
```

Correct - LogFire directly under spec:

```yaml
spec:
  logfire:          # CORRECT - directly under spec:
    enabled: true
  language_model:
    ...
```
## Resources

- LogFire Documentation: https://logfire.pydantic.dev/docs/
- LogFire Dashboard: https://logfire.pydantic.dev
- Pydantic AI Documentation: https://ai.pydantic.dev/
- OpenTelemetry: https://opentelemetry.io/
## Best Practices

1. **Use the cloud dashboard for production:** Authenticate with `logfire auth` for production deployments
2. **Configure before initialization:** Always configure LogFire before creating agents
3. **Monitor costs:** LogFire tracks token usage and costs, which is useful for budgeting
4. **Use service names:** Set a meaningful `service_name` for better trace organization
## Summary

LogFire integration in SuperOptiX provides:

- **Zero configuration** - works out of the box when LogFire is installed
- **Graceful fallback** - no errors if LogFire is not available
- **Rich observability** - full visibility into agent execution
- **Production-ready** - works with LogFire cloud or any OTLP backend
- **Framework-native** - built specifically for Pydantic AI

Enable LogFire in your playbook and start getting insights into your agent's behavior!