🚀 Getting Started

Welcome to RLM Code, the Research Playground and Evaluation OS for Recursive Language Model (RLM) agentic systems. RLM Code provides a unified TUI-based development environment for building, benchmarking, and optimizing agent workflows through slash commands and natural language.


🧪 What is RLM Code?

RLM Code implements the Recursive Language Model paradigm from the 2025 "Recursive Language Models" paper. It extends the paper's concepts with:

  • 🧠 Context-as-variable: Context is stored as a REPL variable rather than in the token window, enabling unbounded output and token-efficient processing (see the sketch after this list)
  • 🔁 Deep recursion: Support for recursion depth > 1, exceeding the paper's original limitation
  • 🔀 Multi-paradigm execution: Pure RLM, CodeAct, and Traditional paradigms with side-by-side comparison
  • 📊 Pluggable observability: MLflow, OpenTelemetry, LangSmith, LangFuse, and Logfire integrations
  • 📦 Sandbox runtimes: Local, Docker, Apple Container, Modal, E2B, and Daytona execution environments
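To make the context-as-variable idea concrete, here is a minimal, self-contained Python sketch of the pattern. It is illustrative only and does not use RLM Code's actual API; the variable names and chunking scheme are hypothetical:

# Conceptual sketch of context-as-variable (hypothetical names, not
# RLM Code's API): the long context lives in an ordinary Python
# variable inside a REPL, and the model emits small snippets like
# these to probe it, so the full text never enters the token window.

context = "needle in a haystack. " * 100_000  # stand-in for a huge document

# Cheap probes: each costs a few tokens of output, not the whole document.
print(len(context))      # total size
print(context[:200])     # peek at the head

# Decompose into chunks; each chunk can then be handed to a recursive
# sub-call (an RLM at depth + 1) instead of being pasted into one giant prompt.
chunk_size = 10_000
chunks = [context[i:i + chunk_size]
          for i in range(0, len(context), chunk_size)]
print(f"{len(chunks)} chunks of at most {chunk_size} chars")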

🎯 Problem Focus

RLM (the method) addresses long-context reasoning. RLM Code is the tooling layer for researchers who want to implement, evaluate, and operate that workflow.

RLM Code is optimized for workflows where:

  • Context is too large to fit comfortably in one prompt.
  • You need programmatic inspection and decomposition instead of full-context prompt injection.
  • You want to compare recursive symbolic execution against harness-style and direct baselines under the same benchmark suite.

For detailed mode behavior and neutral tradeoff guidance, see Execution Patterns.


📚 Where to Go Next

  • 🧭 Start Here (Simple): Plain-language onboarding covering what this is, what to install, and a safe first run
  • 📦 Installation: System requirements, package installation, optional dependencies, and verification
  • ⚡ Quick Start: Launch the TUI, connect a model, run your first benchmark, and explore the Research tab
  • 🧑‍🔬 Researcher Onboarding: Researcher-first workflows and the complete command handbook
  • 💻 CLI Reference: Complete reference for the entry point and all 50+ slash commands
  • ⚙️ Configuration: The full rlm_config.yaml schema, environment variables, and the ConfigManager API

⚡ Quick Overview

# Install
pip install rlm-code

# Launch the unified TUI
rlm-code

# Connect to a model and run a benchmark
/connect anthropic claude-opus-4-6
/rlm bench preset=dspy_quick
/rlm bench compare candidate=latest baseline=previous

🆕 First Time?

Start with the 📦 Installation guide to set up your environment, then follow the ⚡ Quick Start for a hands-on walkthrough.

๐Ÿ–ฅ๏ธ Unified TUI

RLM Code ships a single TUI with five tabs: 🔁 RLM, 📁 Files, 📋 Details, ⚡ Shell, and 🔬 Research. Launch it with rlm-code, and press Ctrl+5 to open the Research tab for experiment tracking, benchmarks, and session replay.