MCP Server

58 Compression Tools.
Zero Configuration.

lean-ctx implements the Model Context Protocol (MCP) - the open standard for AI tool integrations. Built-in tools get compression-aware replacements that strip noise before it reaches the LLM.

58 tools · 10 read modes · 99% max savings (cached re-reads)
Protocol

How MCP Works.

The Model Context Protocol lets AI tools call external servers for data. lean-ctx intercepts these calls and compresses responses automatically.

AI Tool (Cursor, Claude Code, Crush, Copilot…) → lean-ctx MCP (compresses data automatically) → LLM (sees only signal, no noise)
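Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. A minimal sketch of what the AI tool sends to lean-ctx - the `tools/call` envelope is the MCP standard, but the `path`/`mode` argument names here are illustrative, not lean-ctx's documented schema:

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the MCP tool-invocation envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The AI tool asks lean-ctx (not the raw filesystem) for the file;
# lean-ctx compresses the result before it ever reaches the LLM.
req = tool_call(1, "ctx_read", {"path": "server.rs", "mode": "map"})
```

Because compression happens inside the server's response, no client-side configuration is needed - any MCP-capable tool gets it for free.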

58 Tools, 4 Categories

What your AI needs.

File & Code

up to 99% savings

Core replacements for file reads, directory exploration, shell commands, and code search. Tree-sitter-powered AST compression preserves structure while eliminating noise.

ctx_read ctx_multi_read ctx_tree ctx_shell ctx_search

Autonomous Intelligence

self-configuring, zero setup

Runs autonomously: auto-preloads context, deduplicates files, provides related-file hints, and picks the optimal compression - all without explicit commands. Enabled by default.

ctx_smart_read ctx_delta ctx_fill ctx_intent ctx_context ctx_graph ctx_dedup ctx_response ctx_discover ctx_impact ctx_architecture

Claude Code Integration

lean-ctx detects Claude Code and automatically adapts its behavior to work within Claude's constraints:

  • Auto-condensed instructions - MCP instructions are compressed to <2048 characters for Claude Code's truncation limit
  • Full rules file - Complete instruction set installed to ~/.claude/rules/lean-ctx.md (no character cap)
  • Agent Skills - Auto-installed to ~/.claude/skills/lean-ctx/ with setup script for zero-config onboarding
  • Self-healing env.sh - Shell environment is re-injected if Docker or container rebuilds remove it
zero-config, self-healing
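As a sketch of the condensing constraint above - not lean-ctx's actual condenser - fitting instructions under a hard character cap means breaking on line boundaries rather than truncating mid-sentence:

```python
def condense(text: str, limit: int = 2048) -> str:
    """Fit instruction text under a hard character cap (e.g. Claude Code's
    2048-char truncation limit) by dropping whole trailing lines,
    never cutting a line in half."""
    if len(text) <= limit:
        return text
    kept, total = [], 0
    for line in text.splitlines(keepends=True):
        if total + len(line) > limit:
            break
        kept.append(line)
        total += len(line)
    return "".join(kept)
```

The full, uncapped rule set can then live in the rules file (~/.claude/rules/lean-ctx.md), with only the condensed version passed through MCP.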

Session & Monitoring

Memory across chats

Persistent session state, context checkpoints, and real-time analytics. Track token savings, manage cache, and generate compression reports.

ctx_session ctx_compress ctx_analyze ctx_gain ctx_benchmark ctx_metrics ctx_wrapped ctx_cache ctx_heatmap ctx_cost
  • ctx_gain - Query token savings, cost breakdowns, GainScore, task classifications, and per-agent statistics programmatically during a session
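The arithmetic behind such a report is simple percentage-and-cost math. A sketch with hypothetical numbers (the raw/compressed token counts and per-token price are illustrative, and lean-ctx's actual GainScore formula is not documented here):

```python
def savings(raw_tokens: int, compressed_tokens: int,
            usd_per_mtok: float = 3.0) -> dict:
    """Hypothetical report math: tokens saved, percent saved, and
    input cost avoided at a given $/million-tokens price."""
    saved = raw_tokens - compressed_tokens
    return {
        "saved_tokens": saved,
        "saved_pct": round(100 * saved / raw_tokens, 1),
        "saved_usd": round(saved * usd_per_mtok / 1_000_000, 4),
    }

print(savings(3_062, 215))
# {'saved_tokens': 2847, 'saved_pct': 93.0, 'saved_usd': 0.0085}
```

Aggregated per file, per tool, and per agent, these are the figures a tool like ctx_gain can surface during a session.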

Memory & Multi-Agent

Permanent project knowledge

Build persistent knowledge bases that survive across sessions and agents. Project-level memory, agent coordination, and codebase overviews.

ctx_knowledge ctx_agent ctx_overview ctx_preload ctx_task ctx_share
ctx_read

10 Read Modes for every situation.

Not every file read needs full content. Choose the mode that matches your intent - or let ctx_smart_read pick automatically.

| Mode | What it returns | When to use |
| --- | --- | --- |
| auto | Best mode for context | Default - lean-ctx picks the optimal strategy based on file type, size, and task |
| full | Complete file, cached for re-reads (~13 tokens) | Files you will edit |
| map | Dependency graph + exports + key signatures | Context-only files you need to understand |
| signatures | API surface only - function signatures, types | Understanding interfaces and contracts |
| diff | Changed lines only vs. the cached version | After editing - verify your changes |
| aggressive | Syntax stripped, maximum compression | Large files where you need the gist |
| entropy | Shannon + Jaccard filtering for unique content | Finding non-repetitive, high-information lines |
| task | Knowledge-graph-aware, task-filtered content with dependency context | Reading files relevant to a specific task - uses the project graph + IB filter |
| reference | Cross-reference context | Related types, callers, and dependencies for the target symbol |
| lines:N-M | Only lines N through M (1-based, inclusive) | Large files - read a specific range |
ctx_read server.rs --mode map
F1=server.rs [342L]
  deps: tokio, serde, tower, axum
  exports: start_server, AppState, Config
  API:
    §  AppState { db: Pool, cache: Cache, config: Config }
    §  Config { port: u16, host: String, max_conn: usize }
    fn async start_server(config: Config) → Result<()>
    fn async handle_request(state: AppState, req: Request) → Response
    fn configure_routes(state: AppState) → Router
  [2,847 tok saved (93%)]
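The entropy mode combines two standard measures. A minimal sketch, assuming per-line Shannon entropy as the information filter and token-set Jaccard similarity as the near-duplicate filter - the thresholds here are illustrative, not lean-ctx's tuned values:

```python
import math

def shannon_entropy(line: str) -> float:
    """Bits per character - near zero for repetitive filler (dashes, padding)."""
    if not line:
        return 0.0
    freq = {c: line.count(c) / len(line) for c in set(line)}
    return -sum(p * math.log2(p) for p in freq.values())

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two lines, in [0, 1]."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def entropy_filter(lines, min_bits=2.0, max_sim=0.8):
    """Keep lines that carry information (high entropy) and are not
    near-duplicates of a line already kept."""
    kept = []
    for line in lines:
        if shannon_entropy(line) < min_bits:
            continue
        if any(jaccard(line, k) > max_sim for k in kept):
            continue
        kept.append(line)
    return kept
```

Run over a log file or a generated source file, a filter like this drops separator lines and boilerplate repeats while the unique, high-information lines survive.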
Dive deeper

Explore every tool in detail.

Full API reference with parameters, examples, and advanced usage for all 58 MCP tools.