58 Compression Tools.
Zero Configuration.
lean-ctx implements the Model Context Protocol (MCP) - the open standard for AI tool integrations. Built-in tools get compression-aware replacements that strip noise before it reaches the LLM.
How MCP Works.
The Model Context Protocol lets AI tools call external servers for data. lean-ctx intercepts these calls and compresses responses automatically.
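The interception idea can be sketched in a few lines. This is an illustrative toy only, not the lean-ctx implementation: `compress` and `intercept` are hypothetical names, and the heuristic shown (drop blank and duplicate lines) stands in for lean-ctx's real compression strategies.

```python
def compress(text: str) -> str:
    """Toy compressor: drop blank lines and exact duplicates,
    keeping the first occurrence of each line."""
    seen = set()
    out = []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped in seen:
            continue  # noise: blank or repeated line
        seen.add(stripped)
        out.append(line)
    return "\n".join(out)

def intercept(tool_call) -> str:
    """Wrap a tool call and compress its output before it reaches the LLM.
    Hypothetical helper - real interception happens inside the MCP server."""
    return compress(tool_call())

noisy = "INFO: start\n\nINFO: start\nresult: 42\n"
print(intercept(lambda: noisy))
```

The LLM never sees the raw tool output; only the compressed form enters the context window.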
AI Tool (Cursor, Claude Code, Crush, Copilot…) → lean-ctx MCP (compresses data automatically) → LLM (sees only signal, no noise)
What your AI needs.
File & Code
Up to 99% savings. Core replacements for file reads, directory exploration, shell commands, and code search. Tree-sitter-powered AST compression preserves structure while eliminating noise.
Autonomous Intelligence
Self-configuring, zero setup. Runs autonomously: auto-preloads context, deduplicates files, provides related-file hints, and picks the optimal compression - all without explicit commands. Enabled by default.
Claude Code Integration
lean-ctx detects Claude Code and automatically adapts its behavior to work within Claude's constraints:
- Auto-condensed instructions - MCP instructions are compressed to under 2,048 characters to fit Claude Code's truncation limit
- Full rules file - Complete instruction set installed to ~/.claude/rules/lean-ctx.md (no character cap)
- Agent Skills - Auto-installed to ~/.claude/skills/lean-ctx/ with a setup script for zero-config onboarding
- Self-healing env.sh - Shell environment is re-injected if Docker or container rebuilds remove it
Session & Monitoring
Memory across chats. Persistent session state, context checkpoints, and real-time analytics. Track token savings, manage cache, and generate compression reports.
ctx_gain - Query token savings, cost breakdowns, GainScore, task classifications, and per-agent statistics programmatically during a session
Memory & Multi-Agent
Permanent project knowledge. Build persistent knowledge bases that survive across sessions and agents. Project-level memory, agent coordination, and codebase overviews.
10 Read Modes for every situation.
Not every file read needs full content. Choose the mode that matches your intent - or let ctx_smart_read pick automatically.
| Mode | What it returns | When to use |
|---|---|---|
auto | Best mode for context | Default - lean-ctx picks optimal strategy based on file type, size, and task |
full | Complete file, cached for re-reads (~13 tokens) | Files you will edit |
map | Dependency graph + exports + key signatures | Context-only files you need to understand |
signatures | API surface only - function signatures, types | Understanding interfaces and contracts |
diff | Changed lines only vs. cached version | After editing - verify your changes |
aggressive | Syntax stripped, maximum compression | Large files where you need the gist |
entropy | Shannon + Jaccard filtering for unique content | Finding non-repetitive, high-information lines |
task | Knowledge-graph aware, task-filtered content with dependency context | Reading files relevant to a specific task - uses project graph + IB filter |
reference | Cross-reference context | Related types, callers, and dependencies for the target symbol |
lines:N-M | Read only lines N through M (1-based, inclusive) | Large files - read a specific range |
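The entropy mode's "Shannon + Jaccard filtering" can be sketched concretely. The thresholds and function names below are illustrative assumptions, not lean-ctx's actual parameters: lines with low per-character Shannon entropy (separators, padding) are dropped, and lines too Jaccard-similar to an already-kept line are treated as near-duplicates.

```python
import math

def shannon_entropy(line: str) -> float:
    """Per-character Shannon entropy in bits: higher = more varied content."""
    if not line:
        return 0.0
    counts = {}
    for ch in line:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(line)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def jaccard(a: set, b: set) -> float:
    """Token-set Jaccard similarity between two lines."""
    return len(a & b) / len(a | b) if a | b else 1.0

def entropy_filter(text: str, min_entropy: float = 2.5, max_sim: float = 0.6):
    """Keep lines that are information-dense and not near-duplicates
    of lines already kept. Thresholds are illustrative guesses."""
    kept, kept_tokens = [], []
    for line in text.splitlines():
        if shannon_entropy(line) < min_entropy:
            continue  # low-information line (e.g. "====" separators)
        toks = set(line.lower().split())
        if any(jaccard(toks, prev) > max_sim for prev in kept_tokens):
            continue  # near-duplicate of a line already kept
        kept.append(line)
        kept_tokens.append(toks)
    return kept
```

Run against a file with decorative separators and repeated signatures, only the first unique, high-information line survives - which is exactly the kind of output the entropy mode targets.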
Example map-mode output:

```
F1=server.rs [342L]
deps: tokio, serde, tower, axum
exports: start_server, AppState, Config
API:
§ AppState { db: Pool, cache: Cache, config: Config }
§ Config { port: u16, host: String, max_conn: usize }
fn async start_server(config: Config) → Result<()>
fn async handle_request(state: AppState, req: Request) → Response
fn configure_routes(state: AppState) → Router
[2,847 tok saved (93%)]
```
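A crude approximation of how a map-style summary could be produced - note that lean-ctx uses Tree-sitter ASTs, while this sketch uses a regex and a hypothetical `map_file` helper, so it only captures `fn` signatures, not structs, deps, or exports:

```python
import re

# Matches a Rust fn signature up to the body brace or trailing semicolon.
RUST_SIG = re.compile(r"^\s*(?:pub\s+)?(?:async\s+)?fn\s+\w+[^{;]*")

def map_file(name: str, source: str) -> str:
    """Rough map-style summary: file name, line count, fn signatures only.
    Illustrative stand-in for AST-based extraction."""
    sigs = []
    for line in source.splitlines():
        m = RUST_SIG.match(line)
        if m:
            sigs.append(m.group(0).strip())
    return f"F1={name} [{len(source.splitlines())}L] API: " + " ".join(sigs)
```

Function bodies never appear in the summary - the token savings come from returning only the API surface the model needs for context.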
Explore every tool in detail.
Full API reference with parameters, examples, and advanced usage for all 58 MCP tools.