πŸ€— HuggingFace Demo Agent

The HuggingFace Demo Agent showcases advanced NLP capabilities with HuggingFace models in SuperOptiX. It focuses on how to configure and use a locally served HuggingFace model for sophisticated language understanding and generation.

🎯 What This Demo Shows

This demo covers:

  • πŸ€— HuggingFace Model Integration: How to configure HuggingFace models in SuperOptiX
  • 🧠 Advanced NLP Capabilities: Access to cutting-edge transformer models
  • 🏠 Local Model Usage: Running models completely offline
  • βš™οΈ Playbook Configuration: How to set up HuggingFace in agent playbooks

πŸš€ Setup HuggingFace Model

1. Install HuggingFace Dependencies

Bash
# Install HuggingFace dependencies
pip install "superoptix[huggingface]"
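
If you want to confirm the install worked, the quick check below should be enough. Treat the transformers import as an assumption about what the huggingface extra pulls in; adjust it for your environment.

Bash
# Confirm the SuperOptiX package is installed
pip show superoptix

# Confirm the transformers library is importable
# (assumes the huggingface extra installs it)
python -c "import transformers; print(transformers.__version__)"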

2. Install HuggingFace Model

Bash
# Install the HuggingFace model used in this demo
super model install -b huggingface microsoft/Phi-4

3. Start HuggingFace Server

Bash
# Start HuggingFace server on port 8001
super model server huggingface microsoft/Phi-4 --port 8001
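
Before moving on, it helps to confirm the server is actually reachable. The health endpoint below is the same one used in the troubleshooting section; if your setup exposes a different path, adjust accordingly.

Bash
# Confirm the HuggingFace server responds on the configured port
curl http://localhost:8001/health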

4. Pull and Run the Demo

Bash
# Pull the HuggingFace demo agent
super agent pull huggingface_demo

# Compile the agent
super agent compile huggingface_demo

# Run the agent
super agent run huggingface_demo --goal "What are the key features of HuggingFace?"
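
The --goal flag accepts any prompt, so you can point the same agent at other questions as well, for example:

Bash
# Run the same agent with a different goal
super agent run huggingface_demo --goal "Summarize what a transformer model is in two sentences."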

πŸ”§ HuggingFace Configuration in Playbook

The HuggingFace demo showcases how to configure HuggingFace models in the agent playbook:

Language Model Configuration

YAML
language_model:
  location: local
  provider: huggingface
  model: microsoft/Phi-4
  api_base: http://localhost:8001
  temperature: 0.7
  max_tokens: 2048

Key Configuration Points:

  • 🎯 provider: huggingface: Specifies HuggingFace as the model backend
  • πŸ€– model: The HuggingFace model identifier
  • 🌐 api_base: HuggingFace server endpoint (default: http://localhost:8001)
  • 🌑️ temperature: Controls response creativity (0.7 = balanced)
  • πŸ“ max_tokens: Maximum response length

πŸ€— HuggingFace: The NLP Powerhouse

HuggingFace is the go-to platform for state-of-the-art natural language processing. It offers unparalleled access to the latest AI research:

  • πŸ† State-of-the-Art: Access to cutting-edge transformer models and architectures
  • πŸ“š Model Library: Thousands of pre-trained models for every NLP task
  • πŸ”§ Custom Models: Support for your own fine-tuned models and research
  • πŸ§ͺ Research Ready: Perfect for academic research and experimentation
  • πŸ”“ Open Source Models: Most models are open source and freely available
  • 🌐 Open Source: Backed by the largest NLP community in the world

πŸ”§ Customizing HuggingFace Configuration

Change Model

Edit agents/huggingface_demo/playbook/huggingface_demo_playbook.yaml:

YAML
language_model:
  model: microsoft/DialoGPT-small  # Different HuggingFace model
  api_base: http://localhost:8001
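
Editing the playbook alone is not enough: the new model also needs to be installed and served, using the same CLI commands from the setup steps above.

Bash
# Install and serve the model referenced in the playbook
super model install -b huggingface microsoft/DialoGPT-small
super model server huggingface microsoft/DialoGPT-small --port 8001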

Adjust Performance Settings

YAML
language_model:
  temperature: 0.5  # More precise responses
  max_tokens: 4096  # Longer responses

Use Different Port

YAML
language_model:
  api_base: http://localhost:9001  # Custom port
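
Whatever port you choose, the server has to listen on it as well, so start it with a matching --port value:

Bash
# Start the HuggingFace server on the custom port from the playbook
super model server huggingface microsoft/Phi-4 --port 9001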

🚨 Troubleshooting HuggingFace

Common Issues

  1. HuggingFace Server Not Running

    Bash
    # Check if HuggingFace server is running
    curl http://localhost:8001/health
    
    # Start HuggingFace server
    super model server huggingface microsoft/Phi-4 --port 8001
    

  2. Model Not Installed

    Bash
    # Check installed HuggingFace models
    super model list --backend huggingface
    
    # Install the required model
    super model install -b huggingface microsoft/Phi-4
    

  3. Performance Issues

    • Ensure sufficient GPU memory for large models
    • Close other resource-intensive applications
    • Consider using smaller models for faster responses

Getting Help

Bash
# Check agent status
super agent inspect huggingface_demo

# View agent logs
super agent logs huggingface_demo

# Get HuggingFace help
super model server --help

πŸŽ‰ Next Steps

After exploring the HuggingFace demo:

  1. Try Other Model Backends: Explore MLX, Ollama, or LM Studio demos
  2. Customize: Modify the playbook for your specific HuggingFace needs
  3. Build Your Own: Use this as a template for your custom HuggingFace agent

Ready to explore advanced NLP? Start with the HuggingFace demo! πŸš€