# Ollama Demo Agent
The Ollama Demo Agent showcases easy local model management with Ollama in SuperOptiX. This demo focuses specifically on how to configure and use Ollama models for simple, reliable local inference.
## What This Demo Shows
This demo demonstrates:
- Ollama Model Integration: How to configure Ollama models in SuperOptiX
- Easy Model Management: Simple model installation and management
- Local Model Usage: Running models completely offline
- Playbook Configuration: How to set up Ollama in agent playbooks
## Setting Up Ollama
### 1. Install Ollama
```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Or download from https://ollama.ai/download for Windows
```
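To confirm the install worked before pulling any models, you can check that the `ollama` CLI is on your `PATH` (a quick sanity check, not part of the original walkthrough):

```bash
# Prints the installed Ollama version; any output means the CLI is available
ollama --version
```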
### 2. Install an Ollama Model
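A minimal sketch of this step, assuming a Llama 3.2 variant; adjust the tag so it matches whatever the playbook's `model` field specifies:

```bash
# Download a model so it is available for local, offline inference
# (the tag here is an example — keep it in sync with the playbook's `model` value)
ollama pull llama3.2
```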
### 3. Start the Ollama Server
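If Ollama is not already running as a background service (the desktop app typically starts one for you), launch the server manually:

```bash
# Serve models on the default endpoint http://localhost:11434
# Leave this running in its own terminal while you use the agent
ollama serve
```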
### 4. Pull and Run the Demo
```bash
# Pull the Ollama demo agent
super agent pull ollama_demo

# Compile the agent
super agent compile ollama_demo

# Run the agent
super agent run ollama_demo --goal "What are the key features of Ollama?"
```
## Ollama Configuration in the Playbook
The Ollama demo showcases how to configure Ollama models in the agent playbook:
### Language Model Configuration
```yaml
language_model:
  location: local
  provider: ollama
  model: llama3.2:8b
  api_base: http://localhost:11434
  temperature: 0.7
  max_tokens: 2048
```
Key Configuration Points:

- `provider: ollama`: Specifies Ollama as the model backend
- `model`: The Ollama model identifier
- `api_base`: Ollama server endpoint (default: `http://localhost:11434`)
- `temperature`: Controls response creativity (0.7 = balanced)
- `max_tokens`: Maximum response length
## The Ollama Advantage
Ollama makes local AI accessible to everyone. It's one of the simplest ways to run powerful language models on your own machine:

- One-Command Setup: Install any model with a single command
- Easy Updates: Pull a model again at any time to pick up its latest version
- Cross-Platform: Works on macOS, Linux, and Windows
- True Local: Runs completely offline once installed
- Lightning Fast: Optimized for your local hardware
- Beginner Friendly: Perfect for newcomers to local AI
## Customizing Ollama Configuration
### Change the Model

Edit `agents/ollama_demo/playbook/ollama_demo_playbook.yaml`:
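For example, to point the demo at a different local model (a sketch; the model tag is illustrative and must be something you have already pulled with `ollama pull`):

```yaml
language_model:
  location: local
  provider: ollama
  model: llama3.1:8b        # any installed Ollama model works here
  api_base: http://localhost:11434
  temperature: 0.7
  max_tokens: 2048
```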
### Adjust Performance Settings
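The settings you are most likely to tune are `temperature` and `max_tokens` from the configuration above. The values below are illustrative, not recommendations from the demo itself:

```yaml
language_model:
  # ...same provider/model settings as above...
  temperature: 0.2     # lower = more deterministic, higher = more creative
  max_tokens: 1024     # smaller cap = shorter, faster responses
```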
### Use a Different Port
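If your Ollama server listens somewhere other than the default `http://localhost:11434`, point `api_base` at it. The port below is an example; `OLLAMA_HOST` is the environment variable Ollama reads for its bind address:

```yaml
language_model:
  # ...
  api_base: http://localhost:11500   # must match the address the server is bound to
```

```bash
# Start the Ollama server on the custom address (example value)
OLLAMA_HOST=127.0.0.1:11500 ollama serve
```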
## Troubleshooting Ollama
### Common Issues
- Ollama Server Not Running (see the commands after this list)
- Model Not Installed (see the commands after this list)
- Performance Issues:
    - Ensure sufficient RAM (8GB+ recommended)
    - Close other resource-intensive applications
    - Consider using smaller models for faster responses
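A few quick checks for the first two issues, assuming the default setup described on this page:

```bash
# Is the server reachable? This lists installed models and fails if Ollama is down
ollama list

# Start the server if it is not running
ollama serve

# Pull the model named in the playbook's `model` field if it is missing (tag is an example)
ollama pull llama3.2
```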
### Getting Help
```bash
# Check agent status
super agent inspect ollama_demo

# View agent logs
super agent logs ollama_demo

# Get Ollama help
ollama --help
```
## Related Resources
- Ollama Setup Guide - Complete Ollama setup instructions
- Model Management - Managing Ollama models
- Agent Development - Building custom agents
## Next Steps
- Try Other Model Backends: Explore MLX, HuggingFace, or LM Studio demos