πŸ¦™ Ollama Demo Agent

The Ollama Demo Agent showcases easy local model management with Ollama in SuperOptiX. This demo focuses specifically on how to configure and use Ollama models for simple, reliable local inference.

🎯 What This Demo Shows

This demo demonstrates:

  • πŸ¦™ Ollama Model Integration: How to configure Ollama models in SuperOptiX
  • πŸš€ Easy Model Management: Simple model installation and management
  • 🏠 Local Model Usage: Running models completely offline
  • βš™οΈ Playbook Configuration: How to set up Ollama in agent playbooks

πŸš€ Set Up the Ollama Model

1. Install Ollama

Bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Or download from https://ollama.ai/download for Windows
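
To confirm the install, check that the Ollama CLI is available on your PATH:

Bash
# Print the installed Ollama version
ollama --version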

2. Install Ollama Model

Bash
# Install the Ollama model used in this demo
super model install llama3.2:8b

3. Start Ollama Server

Bash
# Start Ollama server (runs on port 11434 by default)
ollama serve
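
With the server running, you can verify from a separate terminal that it is reachable and that the demo model is installed:

Bash
# List models known to the local Ollama server
ollama list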

4. Pull and Run the Demo

Bash
# Pull the Ollama demo agent
super agent pull ollama_demo

# Compile the agent
super agent compile ollama_demo

# Run the agent
super agent run ollama_demo --goal "What are the key features of Ollama?"
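
If you want to sanity-check the model outside of SuperOptiX first, you can prompt it directly through the Ollama CLI; this helps separate model problems from agent configuration problems:

Bash
# One-off prompt straight to the local model (bypasses the agent entirely)
ollama run llama3.2:8b "What are the key features of Ollama?"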

πŸ”§ Ollama Configuration in Playbook

The Ollama demo shows how to configure Ollama models in the agent playbook:

Language Model Configuration

YAML
language_model:
  location: local
  provider: ollama
  model: llama3.2:8b
  api_base: http://localhost:11434
  temperature: 0.7
  max_tokens: 2048

Key Configuration Points:

  • 🎯 provider: ollama: Specifies Ollama as the model backend
  • πŸ€– model: The Ollama model identifier
  • 🌐 api_base: Ollama server endpoint (default: http://localhost:11434)
  • 🌑️ temperature: Controls response creativity (0.7 = balanced)
  • πŸ“ max_tokens: Maximum response length

πŸ¦™ The Ollama Advantage

Ollama makes local AI accessible to everyone. It's the simplest way to run powerful language models on your own machine:

  • 🎯 One Command Setup: Install any model with a single command
  • πŸ”„ Seamless Updates: Models update automatically in the background
  • 🌍 Cross-Platform: Works perfectly on macOS, Linux, and Windows
  • 🏠 True Local: Runs completely offline once installed
  • ⚑ Lightning Fast: Optimized for your local hardware
  • πŸ’‘ Beginner Friendly: Perfect for newcomers to local AI

πŸ”§ Customizing Ollama Configuration

Change Model

Edit agents/ollama_demo/playbook/ollama_demo_playbook.yaml:

YAML
language_model:
  model: llama3.2:3b  # Different Ollama model
  api_base: http://localhost:11434
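
Whichever model you choose must be installed before the agent can use it. Assuming the same install command shown earlier (or Ollama's own pull command):

Bash
# Install the new model before recompiling/running the agent
super model install llama3.2:3b

# Or pull it directly with Ollama
ollama pull llama3.2:3b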

Adjust Performance Settings

YAML
language_model:
  temperature: 0.9  # More creative responses
  max_tokens: 4096  # Longer responses

Use Different Port

YAML
language_model:
  api_base: http://localhost:8080  # Custom port
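
If you change the port in the playbook, the Ollama server must listen on that port as well. Ollama reads the OLLAMA_HOST environment variable, so one way to do this is:

Bash
# Start the Ollama server on a custom port (must match api_base in the playbook)
OLLAMA_HOST=127.0.0.1:8080 ollama serve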

🚨 Troubleshooting Ollama

Common Issues

  1. Ollama Server Not Running

    Bash
    # Check if Ollama server is running
    curl http://localhost:11434/api/tags
    
    # Start Ollama server
    ollama serve
    

  2. Model Not Installed

    Bash
    # Check installed Ollama models
    super model list --backend ollama
    
    # Install the required model
    super model install llama3.2:8b
    

  3. Performance Issues

    β€’ Ensure sufficient RAM (8GB+ recommended)
    β€’ Close other resource-intensive applications
    β€’ Consider using a smaller model for faster responses (see the sketch below)
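
A quick way to try a smaller model is to pull one and point the playbook at it; you can also check how much memory loaded models are using. The commands below are standard Ollama CLI; the playbook edit assumes the configuration shown earlier:

Bash
# Pull a smaller model, then set `model: llama3.2:3b` in the playbook
ollama pull llama3.2:3b

# Show currently loaded models and their memory footprint
ollama ps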

Getting Help

Bash
# Check agent status
super agent inspect ollama_demo

# View agent logs
super agent logs ollama_demo

# Get Ollama help
ollama --help

πŸ”— Next Steps

  1. Try Other Model Backends: Explore MLX, HuggingFace, or LM Studio demos