# MLX Demo Agent
The MLX Demo Agent showcases Apple Silicon optimization with MLX models in SuperOptiX. This demo focuses specifically on how to configure and use MLX models for native Apple Silicon performance.
## What This Demo Shows
This demo demonstrates:
- **MLX Model Integration**: How to configure MLX models in SuperOptiX
- **Apple Silicon Optimization**: Native performance on Apple Silicon Macs
- **Local Model Usage**: Running models completely offline
- **Playbook Configuration**: How to set up MLX in agent playbooks
## Set Up the MLX Model
### 1. Install MLX Dependencies
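The demo assumes MLX's Python packages are available on your machine. One common way to get them is shown below; the exact package set SuperOptiX expects may differ, so treat these names as an assumption and check the SuperOptiX installation docs:

```shell
# Install Apple's MLX framework and the mlx-lm model runner
pip install mlx mlx-lm
```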
### 2. Install the MLX Model

```bash
# Install the MLX model used in this demo
super model install -b mlx mlx-community/Llama-3.2-3B-Instruct-4bit
```
### 3. Start the MLX Server

```bash
# Start the MLX server on port 8000
super model server mlx mlx-community/Llama-3.2-3B-Instruct-4bit --port 8000
```
### 4. Pull and Run the Demo

```bash
# Pull the MLX demo agent
super agent pull mlx_demo

# Compile the agent
super agent compile mlx_demo

# Run the agent
super agent run mlx_demo --goal "What are the key features of MLX?"
```
## MLX Configuration in the Playbook
The MLX demo showcases how to configure MLX models in the agent playbook:
### Language Model Configuration

```yaml
language_model:
  location: local
  provider: mlx
  model: mlx-community/Llama-3.2-3B-Instruct-4bit
  api_base: http://localhost:8000
  temperature: 0.7
  max_tokens: 2048
```
**Key configuration points:**

- `provider: mlx`: specifies MLX as the model backend
- `model`: the MLX model identifier
- `api_base`: the MLX server endpoint (default: `http://localhost:8000`)
- `temperature`: controls response creativity (0.7 = balanced)
- `max_tokens`: maximum response length
## Why Choose MLX?
MLX is Apple's native machine learning framework, designed specifically for Apple Silicon Macs. It offers:
- **Native Performance**: Leverages Apple's Metal Performance Shaders for blazing-fast inference
- **Battery Efficient**: Optimized power consumption, perfect for MacBook users
- **Memory Smart**: Efficient memory usage with 4-bit quantized models
- **Completely Local**: No internet required after the model is downloaded
- **Instant Start**: Quick model loading and inference times
## Customizing the MLX Configuration
### Change the Model

Edit `agents/mlx_demo/playbook/mlx_demo_playbook.yaml`:

```yaml
language_model:
  model: mlx-community/phi-2  # Different MLX model
  api_base: http://localhost:8000
```
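If you point the playbook at a different model, install and serve that model first, following the same pattern as the setup steps above (substitute your model's identifier):

```shell
# Install and serve the new model before recompiling and running the agent
super model install -b mlx mlx-community/phi-2
super model server mlx mlx-community/phi-2 --port 8000
```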
### Adjust Performance Settings
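The `temperature` and `max_tokens` fields from the configuration above are the main quality/speed knobs. For example, for shorter, more deterministic answers (values here are illustrative):

```yaml
language_model:
  temperature: 0.2   # lower = more deterministic output
  max_tokens: 512    # cap response length for faster turnaround
```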
### Use a Different Port
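If port 8000 is already in use, start the server on another port (8001 here is just an example):

```shell
# Serve the model on an alternate port
super model server mlx mlx-community/Llama-3.2-3B-Instruct-4bit --port 8001
```

Then set `api_base: http://localhost:8001` in the playbook's `language_model` block so the agent targets the new port.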
## Troubleshooting MLX
### Common Issues
- **MLX server not running**: restart it with `super model server mlx mlx-community/Llama-3.2-3B-Instruct-4bit --port 8000` (step 3 above)
- **Model not installed**: install it with `super model install -b mlx mlx-community/Llama-3.2-3B-Instruct-4bit` (step 2 above)
- **Apple Silicon required**: MLX only works on Apple Silicon Macs (M1, M2, M3); use the Ollama backend on Intel Macs
### Getting Help

```bash
# Check agent status
super agent inspect mlx_demo

# View agent logs
super agent logs mlx_demo

# Get MLX server help
super model server --help
```
## Related Documentation
- Model Management - Managing MLX models
- Agent Development - Building custom agents
## Next Steps
- Try Other Model Backends: Explore Ollama, HuggingFace, or LM Studio demos