Full Transparency

Known Limitations

We believe in honest documentation. Here's what lean-ctx does well - and where its boundaries are.


Compression Limits

Token savings vary by file type, content complexity, and read mode. Here's what to expect in practice.

Expected savings by scenario

  • First read (code files): 60–95%
  • Cached re-read: up to 99%
  • Small files (<10 lines): minimal savings
  • Binary files: skipped (no compression attempted)
  • Novel shell output: passed through unchanged

Content-Dependent Savings

Actual savings depend on code density, comment ratio, and repetition. Well-structured code with clear signatures compresses better than dense, uncommented one-liners.

Cached Re-reads

When lean-ctx has already seen a file and it hasn't changed, re-reads cost roughly 13 tokens regardless of file size. This is the source of the 99% figure.
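The arithmetic behind that figure is straightforward. The sketch below is an illustrative model only (the fixed 13-token hit cost comes from the paragraph above; nothing else here reflects lean-ctx internals):

```python
# Illustrative model of cached re-read savings (not lean-ctx internals).
# A cache hit costs a fixed ~13 tokens regardless of original file size.
CACHE_HIT_COST = 13

def reread_savings(original_tokens: int) -> float:
    """Fraction of tokens saved when a re-read hits the cache."""
    return 1 - CACHE_HIT_COST / original_tokens

# A 2,000-token file re-read drops to ~13 tokens: 99.35% saved.
print(f"{reread_savings(2000):.2%}")
```

The larger the file, the closer the savings approach 100%, which is why re-reads of big files are where caching pays off most.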


Language Support

lean-ctx uses tree-sitter for AST-aware compression. Coverage varies by language - here's the full picture.

18 Tree-sitter languages with full AST support
95+ Shell tool patterns recognized
10 Read modes per file type

Language support tiers

  • Full AST (Rust, TypeScript, Python, Go, Java, C, C++, C#, Ruby, PHP, Swift, Kotlin, Scala, Lua, Zig, Elixir, Haskell, OCaml): signature-aware pruning
  • Basic (all other languages): line-based compression

The Basic tier still delivers meaningful compression - it just can't extract function signatures or prune AST nodes. Most files still see 40–70% savings through deduplication and entropy filtering.


Architecture Constraints

lean-ctx is designed with specific architectural trade-offs. Understanding them helps set the right expectations.

MCP Requirement

lean-ctx runs as an MCP server. Your AI agent must support the Model Context Protocol to use it.

Works with: Claude Code, Cursor, Codex, Gemini CLI, and more.

Single Project Scope

Each lean-ctx instance is scoped to one project root. Multi-repo workflows require separate instances.

Workaround: run one instance per repo in your workspace.

Memory Scaling

The in-memory cache grows with your project. Very large monorepos (100k+ files) may benefit from tuning cache limits.

Configurable via lean-ctx settings.


Not a Replacement

lean-ctx optimizes how context is delivered to the LLM. It does not replace the fundamentals of good engineering.

Important to understand
  • Good prompting practices - clear, specific instructions still matter
  • Proper code organization - well-structured code compresses better
  • Version control - lean-ctx doesn't manage code changes or history

lean-ctx optimizes delivery; it doesn't replace the fundamentals.


See what lean-ctx can do

Now that you know the boundaries, explore what lean-ctx delivers within them - security guarantees, performance benchmarks, and competitive comparisons.