This guide takes you from zero to productive in 10 minutes. By the end, you'll have lean-ctx installed and configured in your editor, and you'll see real token savings.
## Prerequisites
- One of: Cursor, Claude Code, Codex CLI, Gemini CLI, Windsurf, Pi, or any MCP-compatible editor
- A project directory to work with (any language)
- macOS, Linux, or Windows (WSL recommended)
## Step 1: Install lean-ctx (2 min)
Pick your preferred method:
### Option A: Cargo (recommended)

```bash
cargo install lean-ctx
lean-ctx --version
→ lean-ctx 3.3.4
```
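If the shell can't find `lean-ctx` after the install, Cargo's bin directory is probably missing from your `PATH`. A quick fix for bash/zsh:

```bash
# Cargo installs binaries to ~/.cargo/bin; add it to PATH if needed.
export PATH="$HOME/.cargo/bin:$PATH"
command -v lean-ctx   # should print the binary's location
```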
### Option B: Pre-built Binary

```bash
# macOS (Apple Silicon)
curl -L https://github.com/yvgude/lean-ctx/releases/latest/download/lean-ctx-aarch64-apple-darwin.tar.gz | tar xz
sudo mv lean-ctx /usr/local/bin/
# Linux (x64)
curl -L https://github.com/yvgude/lean-ctx/releases/latest/download/lean-ctx-x86_64-unknown-linux-gnu.tar.gz | tar xz
sudo mv lean-ctx /usr/local/bin/
```

### Option C: Docker

```bash
docker pull ghcr.io/yvgude/lean-ctx:latest
```
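To serve from the container, run it interactively with your project mounted. This sketch assumes the image's entrypoint accepts the same `serve --stdio` arguments as the binary; adjust the mount path to your project:

```bash
# -i keeps stdin open for the stdio transport; mount the project read-only.
docker run -i --rm -v "$PWD:/workspace:ro" \
  ghcr.io/yvgude/lean-ctx:latest serve --stdio
```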
### Verify Installation

```bash
lean-ctx doctor
→ ✓ Binary: v3.3.4
✓ Config: ~/.lean-ctx/config.toml (created)
✓ Cache directory: OK
✓ Tree-sitter: 18 languages loaded
...
All 11 checks passed
```

## Step 2: Configure Your Editor (3 min)
### Cursor

Add to your `.cursor/mcp.json` (project-level) or global settings:

```json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "lean-ctx",
      "args": ["serve", "--stdio"]
    }
  }
}
```

### Claude Code

```bash
claude mcp add lean-ctx -- lean-ctx serve --stdio
```

### Gemini CLI
Add to `~/.gemini/settings.json`:

```json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "lean-ctx",
      "args": ["serve", "--stdio"]
    }
  }
}
```

### Codex CLI
Add to `~/.codex/config.toml`:

```toml
[mcp_servers.lean-ctx]
command = "lean-ctx"
args = ["serve", "--stdio"]
```

Full IDE setup guide for all 21 supported editors →
## Step 3: Your First Compressed Read (2 min)
Open your project in the editor and ask the AI to read a file. lean-ctx intercepts the read instead of letting the editor's native file tool handle it:
What you'll see:

```text
# Agent calls: ctx_read path="src/server.ts"
→ F1=server.ts 262L
deps: express, cors, helmet, morgan
exports: app, startServer
[full file content...]
[0 tok saved - first read]
```

Now read it again:

```text
# Agent calls: ctx_read path="src/server.ts"
→ F1=server.ts cached 2t 262L
[5,842 tok saved (99%)]
```

99% savings on the second read! The file content is cached with a BLAKE3 hash. The agent gets a 13-token stub that confirms the file is unchanged.
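Conceptually the cache is a content-addressed lookup: hash the bytes, and if the hash matches the last read, return the stub instead of the body. A minimal shell sketch of that idea using the `b3sum` CLI (the cache path and stub text here are illustrative, not lean-ctx internals):

```bash
cache=".demo-cache/server.ts.b3"
new_hash=$(b3sum --no-names src/server.ts)
if [ "$new_hash" = "$(cat "$cache" 2>/dev/null)" ]; then
  echo "F1=server.ts cached"        # unchanged: emit the short stub
else
  mkdir -p "$(dirname "$cache")"
  printf '%s\n' "$new_hash" > "$cache"
  cat src/server.ts                 # first read or changed: full content
fi
```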
Try map mode:

```text
# Agent calls: ctx_read path="src/server.ts" mode="map"
→ F1=server.ts [262L]
deps: express, cors, helmet, morgan
exports: app, startServer
API:
fn startServer(port:n):void
fn setupMiddleware(app:Express):void
fn setupRoutes(app:Express):void
[4,973 tok saved (85%)]
```
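Map mode keeps only the file's skeleton: dependencies, exports, and signatures pulled from the tree-sitter parse. You can approximate the spirit of it with a plain grep to see which lines survive (illustrative only; the real extraction is syntax-aware, not regex-based):

```bash
# Surface import/export lines and function signatures from a TS file.
grep -nE '^import |^(export )?(async )?function |^export (const|default) ' src/server.ts
```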
## Step 4: Your First Compressed Shell Command (2 min)

Ask the AI to run a shell command. lean-ctx automatically compresses the output:

```text
# Agent calls: ctx_shell command="git status"
→ main ↑0
staged: +auth.ts ~server.ts
unstaged: ~config.ts
untracked: test.ts
[534 tok saved (87%)]
```

```text
# Agent calls: ctx_shell command="npm test"
→ 42 passed, 2 failed (3.2s)
FAIL auth.test.ts:23 "should validate token"
Expected: true, Received: false
[1,800 tok saved (90%)]
```
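The compact `git status` summary above is the flavor of transform lean-ctx applies. As a rough illustration of the idea (not its actual parser), here is a similar summarization built from porcelain output and awk:

```bash
git status --porcelain | awk '
  /^\?\?/ { unt = unt " "  $2; next }  # untracked files
  /^A /   { stg = stg " +" $2; next }  # staged additions
  /^M /   { stg = stg " ~" $2; next }  # staged modifications
  /^.M/   { uns = uns " ~" $2 }        # unstaged modifications
  END {
    if (stg) print "staged:"    stg
    if (uns) print "unstaged:"  uns
    if (unt) print "untracked:" unt
  }'
```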
## Step 5: Check Your Token Savings (1 min)

Ask the AI for a savings report at any point:

```text
# Agent calls: ctx_gain
→ Session gain: 88.4% (11,149 tokens saved)
File reads: 91.2% (8,815 tok saved, 3 reads, 1 cache hit)
Shell output: 78.3% (2,334 tok saved, 2 commands)
Total cost saved: ~$0.11 (at Claude Sonnet rates)
```
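The dollar figure is just tokens saved times the per-token price. A back-of-envelope check (the ~$10 per million tokens below is an assumed blended input/output rate, not a published price):

```bash
# 11,149 saved tokens × $10 per 1M tokens ≈ $0.11
awk 'BEGIN { printf "$%.2f\n", 11149 * 10 / 1e6 }'
```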
## Next Steps

- Learn all 10 read modes - the decision tree helps you pick the right one
- 5-session workflow blueprint - complete project workflow patterns
- Understand caching & compression - how the layers work together
- Explore all 58 tools - each tool has its own reference page
- Customize configuration - config.toml, environment variables, filters