# Getting Started
Welcome to RLM Code, the Research Playground and Evaluation OS for Recursive Language Model (RLM) agentic systems. RLM Code provides a unified TUI-based development environment for building, benchmarking, and optimizing agent workflows through slash commands and natural language.
## What is RLM Code?
RLM Code implements the Recursive Language Model paradigm from the 2025 "Recursive Language Models" paper. It extends the paper's concepts with:
- **Context-as-variable**: Context is stored as a REPL variable rather than in the token window, enabling unbounded output and token-efficient processing
- **Deep recursion**: Support for recursion depth > 1, exceeding the paper's original limitation
- **Multi-paradigm execution**: Pure RLM, CodeAct, and Traditional paradigms with side-by-side comparison
- **Pluggable observability**: MLflow, OpenTelemetry, LangSmith, LangFuse, and Logfire integrations
- **Sandbox runtimes**: Local, Docker, Apple Container, Modal, E2B, and Daytona execution environments
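To make the context-as-variable idea concrete, here is a minimal, self-contained sketch: the long context lives in a Python variable and is probed through small recursive calls, so no single model call ever sees the full text. The `query_model` stub and the halving scheme are illustrative assumptions for this sketch, not RLM Code's actual API.

```python
def query_model(prompt: str, snippet: str) -> str:
    """Stand-in for a sub-model call; here it just checks for a keyword."""
    return "FOUND" if "needle" in snippet else "MISS"

def recursive_search(context: str, prompt: str, chunk_size: int = 100,
                     depth: int = 0, max_depth: int = 3) -> str:
    # Base case: the snippet is small enough (or we hit the depth cap),
    # so hand it to a single sub-model call.
    if len(context) <= chunk_size or depth >= max_depth:
        return query_model(prompt, context)
    # Recursive case: split the variable-held context and descend into
    # each half, mirroring recursion depth > 1 over stored context.
    mid = len(context) // 2
    left = recursive_search(context[:mid], prompt, chunk_size, depth + 1, max_depth)
    if left == "FOUND":
        return left
    return recursive_search(context[mid:], prompt, chunk_size, depth + 1, max_depth)

# The "needle" sits deep inside a context far larger than any one call reads.
long_context = ("filler " * 500) + "needle " + ("filler " * 1000)
print(recursive_search(long_context, "find the needle"))  # prints FOUND
```

The point of the sketch is the shape of the computation: the root call coordinates, and each sub-call operates on a slice bound to a variable rather than on the full prompt.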
## Problem Focus
RLM (the method) addresses long-context reasoning. RLM Code is the tooling layer for researchers who want to implement, evaluate, and operate that workflow.
RLM Code is optimized for workflows where:
- Context is too large to fit comfortably in one prompt.
- You need programmatic inspection and decomposition instead of full-context prompt injection.
- You want to compare recursive symbolic execution against harness-style and direct baselines under the same benchmark suite.
For detailed mode behavior and neutral tradeoff guidance, see Execution Patterns.
## Where to Go Next
| Guide | Description |
|---|---|
| Start Here (Simple) | Plain-language onboarding: what this is, what to install, and safe first run |
| Installation | System requirements, package installation, optional dependencies, and verification |
| Quick Start | Launch the TUI, connect a model, run your first benchmark, explore the Research tab |
| Researcher Onboarding | Researcher-first workflows and complete command handbook |
| CLI Reference | Complete reference for the entry point and all 50+ slash commands |
| Configuration | Full rlm_config.yaml schema, environment variables, and ConfigManager API |
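As a hedged illustration only (the Configuration guide is authoritative; every key below is an assumption, not the confirmed schema), a minimal `rlm_config.yaml` might combine the model, sandbox, and observability choices listed above:

```yaml
# Hypothetical rlm_config.yaml sketch; consult the Configuration guide
# for the real schema and key names.
model:
  provider: anthropic        # any connected provider
  name: claude-opus-4-6
sandbox:
  runtime: local             # e.g. local, docker, modal
observability:
  backend: mlflow            # e.g. mlflow, langfuse, logfire
```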
## Quick Overview
```shell
# Install
pip install rlm-code

# Launch the unified TUI
rlm-code

# Then, inside the TUI: connect to a model and run a benchmark
/connect anthropic claude-opus-4-6
/rlm bench preset=dspy_quick
/rlm bench compare candidate=latest baseline=previous
```
> **First Time?** Start with the Installation guide to set up your environment, then follow the Quick Start for a hands-on walkthrough.
> **Unified TUI**: RLM Code ships a single TUI with five tabs: RLM, Files, Details, Shell, and Research. Use `rlm-code` to launch, and press Ctrl+5 to open the Research tab for experiment tracking, benchmarks, and session replay.