The Context OS for
AI Development
The intelligent layer between your AI and your code.
Nine pillars. One runtime. LeanCTX manages the complete lifecycle of AI context, from file reads to verified outputs.
AI agents lose context constantly
Every AI coding agent faces the same fundamental problems: it re-reads entire files when it only needs a function signature, it parses raw shell output that could be compressed by 95%, and it forgets everything the moment a session ends. The result is wasted tokens, slow responses, and unreliable outputs.
What is a Context OS?
A Context OS is the infrastructure layer between your AI tools and your codebase. It controls what files are read, how shell output is compressed, what knowledge is remembered across sessions, and whether the final output meets quality standards. Think of it like an operating system, but for AI context instead of hardware resources.
How It Works
Every context request flows through LeanCTX's graph-powered deterministic pipeline. The system classifies intent, scores relevance with Multi-Edge BFS and RRF Fusion, compresses with mode-specific algorithms, and verifies outputs before delivery. Every step is reproducible and auditable.
Input
Receives file reads, shell commands, and search queries from any AI tool via MCP or HTTP.
Intent
Classifies the task type and selects the optimal processing strategy for each request.
Relevance
Filters content to only task-relevant information using AST analysis, entropy scoring, and Multi-Edge Graph traversal across imports, calls, type references, and test links.
Compress
Applies intelligent compression with mode-specific algorithms, caching, and delta encoding.
Verify
Checks outputs for hallucinated paths, broken imports, secret leaks, and policy violations.
Deliver
Returns compressed, verified context to the AI tool via MCP, HTTP API, or SDK.
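The relevance stage names RRF Fusion. Reciprocal Rank Fusion is a standard way to merge rankings produced by independent signals — here, a minimal sketch assuming each edge type (imports, calls, type references, test links) yields its own ranked list. The formula is the textbook one; how LeanCTX parameterizes it is an assumption.

```typescript
// Reciprocal Rank Fusion: an item's fused score is the sum of
// 1/(k + rank) over every ranking it appears in (k = 60 is the
// conventional default).
function rrf(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      // rank is 1-based: the top item of a ranking contributes 1/(k+1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + index + 1));
    });
  }
  return scores;
}

// auth.ts tops the imports ranking and is second in the calls ranking,
// so it outscores files that appear high in only one ranking.
const fused = rrf([
  ["auth.ts", "db.ts", "util.ts"], // ranked by import edges
  ["session.ts", "auth.ts"],       // ranked by call edges
]);
```

Fusing ranks instead of raw scores means the edge types never need comparable scoring scales.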
One Tool, Three Ways to Connect
lean-ctx automatically selects the optimal integration mode for each agent. CLI-Redirect eliminates MCP schema overhead entirely, Hybrid combines the best of both, and full MCP provides maximum tool access.
CLI-Redirect: lean-ctx read src/auth.ts -m map
Hybrid: MCP cache + CLI shell/search
Full MCP: 58 tools via MCP + lazy tool set
Always-On Context Runtime
The lean-ctx daemon runs as a background service via Unix Domain Socket. It provides persistent session state, instant cache hits, and automatic startup during setup. On update, the daemon restarts automatically with the new binary. Stale PID/socket files are cleaned up proactively, and all connections have built-in timeouts — no manual management needed.
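The stale-PID cleanup described above can be sketched in Node.js TypeScript. `process.kill(pid, 0)` is the standard liveness probe (signal 0 checks existence without sending anything); the file layout and function names are illustrative assumptions, not LeanCTX internals.

```typescript
import * as fs from "node:fs";

// A PID file is stale if no process with that PID is alive.
function isAlive(pid: number): boolean {
  try {
    process.kill(pid, 0); // signal 0 = existence check only
    return true;
  } catch {
    return false;
  }
}

// Returns true if a stale PID file was found and removed.
function cleanStalePidFile(path: string): boolean {
  if (!fs.existsSync(path)) return false;
  const pid = Number(fs.readFileSync(path, "utf8").trim());
  if (!Number.isInteger(pid) || isAlive(pid)) return false;
  fs.unlinkSync(path); // proactive cleanup of the stale file
  return true;
}
```

A daemon that runs this on startup never needs a human to delete leftover socket or PID files by hand.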
29+ Agents, Automatically Configured
lean-ctx detects installed agents and configures the optimal integration mode for each. CLI-Redirect for agents with shell access, Hybrid for mixed environments, Full MCP for protocol-only agents.
| Agent | CLI-Redirect | Hybrid | MCP | Setup |
|---|---|---|---|---|
| Cursor | ● | – | – | lean-ctx init --agent cursor |
| Claude Code | – | ● | – | lean-ctx init --agent claude |
| Codex | – | ● | – | lean-ctx init --agent codex |
| OpenCode | – | ● | – | lean-ctx init --agent opencode |
| Gemini CLI | ● | – | – | lean-ctx init --agent gemini |
| CRUSH | – | ● | – | lean-ctx init --agent crush |
| Hermes | – | ● | – | lean-ctx init --agent hermes |
| Pi | – | ● | – | lean-ctx init --agent pi |
| Qoder | – | ● | – | lean-ctx init --agent qoder |
| Windsurf | – | ● | – | lean-ctx init --agent windsurf |
| Copilot | – | ● | – | lean-ctx init --agent copilot |
| Amp | – | ● | – | lean-ctx init --agent amp |
| Cline | – | ● | – | lean-ctx init --agent cline |
| Roo Code | – | ● | – | lean-ctx init --agent roo |
| Kiro | – | ● | – | lean-ctx init --agent kiro |
| Antigravity | – | ● | – | lean-ctx init --agent antigravity |
| Amazon Q | – | ● | – | lean-ctx init --agent amazonq |
| Qwen | – | ● | – | lean-ctx init --agent qwen |
| Trae | – | ● | – | lean-ctx init --agent trae |
| Verdent | – | ● | – | lean-ctx init --agent verdent |
| JetBrains | – | – | ● | lean-ctx init --agent jetbrains |
| QoderWork | – | – | ● | lean-ctx init --agent qoderwork |
| VS Code | – | – | ● | lean-ctx init --agent vscode |
| Zed | – | – | ● | lean-ctx init --agent zed |
| Neovim | – | – | ● | lean-ctx init --agent neovim |
| Emacs | – | – | ● | lean-ctx init --agent emacs |
| Sublime Text | – | – | ● | lean-ctx init --agent sublime |
| Aider | – | – | ● | lean-ctx init --agent aider |
| Continue | – | – | ● | lean-ctx init --agent continue |
Mathematically Founded Context Selection
Every context item has a measurable potential value. LeanCTX uses Context Field Theory (CFT) to compute which files, functions, and knowledge facts belong in your AI's attention window — and which don't.
Context Potential Φ
The Phi function scores every context item in real time. Relevance, staleness, graph centrality, history, cost, and redundancy are combined into a single ranking score.
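The six signals above can be combined as a single score. This is an illustrative sketch with assumed weights and field names; the actual CFT formula is not given in this document.

```typescript
interface ContextItem {
  relevance: number;  // 0..1, match against the current task
  staleness: number;  // 0..1, higher = more outdated
  centrality: number; // 0..1, position in the code graph
  history: number;    // 0..1, prior usefulness across sessions
  cost: number;       // tokens needed to include the item
  redundancy: number; // 0..1, overlap with already-selected context
}

// Positive signals add, penalties subtract, and the result is
// normalized by token cost so expensive items must earn their place.
function phi(item: ContextItem): number {
  const value =
    0.4 * item.relevance + 0.2 * item.centrality + 0.15 * item.history;
  const penalty = 0.15 * item.staleness + 0.1 * item.redundancy;
  return (value - penalty) / Math.max(1, item.cost / 100);
}
```

Whatever the real weights are, the shape is the point: one comparable number per item makes ranking, budgeting, and auditing straightforward.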
Context Handles
Sparse, lazy references to context items. Instead of loading full files, agents work with lightweight handles like @F1 or @K3 that expand on demand — saving tokens until content is actually needed.
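Lazy expansion can be shown in a few lines. The handle IDs mirror the `@F1`/`@K3` style named above; the class shape is an assumption for illustration.

```typescript
class Handle {
  private cached?: string;
  constructor(
    public readonly id: string,         // e.g. "@F1"
    private readonly load: () => string // deferred loader for full content
  ) {}

  // Expanding pays the token cost only once, on first use.
  expand(): string {
    if (this.cached === undefined) this.cached = this.load();
    return this.cached;
  }
}

let reads = 0;
const f1 = new Handle("@F1", () => {
  reads++; // count how often the underlying content is actually loaded
  return "export function auth() {}";
});

f1.expand();
f1.expand(); // second expand hits the cache; the loader ran once
```

The agent passes `@F1` around in prompts; the file body enters the window only when `expand()` is called.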
Context Overlays
Reversible mutations on context state. Pin critical files, suppress noise, boost priority, or mark items as stale — all without modifying the source. Overlays stack and can be undone.
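Reversibility falls out naturally if each overlay records its own undo. A minimal sketch, assuming overlays form a stack of apply/revert pairs (the real overlay types are not specified here).

```typescript
type Overlay = {
  apply: (s: Set<string>) => void;
  revert: (s: Set<string>) => void;
};

class OverlayStack {
  private stack: Overlay[] = [];
  constructor(public pinned = new Set<string>()) {}

  pin(file: string): void {
    const o: Overlay = {
      apply: (s) => s.add(file),
      revert: (s) => s.delete(file),
    };
    o.apply(this.pinned);
    this.stack.push(o); // remember how to undo this mutation
  }

  undo(): void {
    this.stack.pop()?.revert(this.pinned); // reverse the latest overlay
  }
}

const ov = new OverlayStack();
ov.pin("src/auth.ts");
ov.pin("src/db.ts");
ov.undo(); // db.ts un-pinned; auth.ts still pinned
```

Because every mutation lives in the overlay stack rather than in the source, the underlying files are never touched.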
Context Compiler
Given a token budget and a task description, the compiler selects the optimal subset of context items using Φ-ranked greedy selection with redundancy penalties. The result is a minimal, verified context package.
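Φ-ranked greedy selection with a redundancy penalty looks roughly like this. The penalty form (a flat deduction for topic overlap) and the weights are assumptions; only the greedy-under-budget structure comes from the description above.

```typescript
interface Item { id: string; phi: number; tokens: number; topic: string }

function compile(items: Item[], budget: number): Item[] {
  const chosen: Item[] = [];
  const remaining = [...items];
  let spent = 0;
  while (true) {
    // Re-score each round: items overlapping an already-chosen topic
    // are penalized so the package stays non-redundant.
    const scored = remaining
      .filter((i) => spent + i.tokens <= budget)
      .map((i) => ({
        item: i,
        score: i.phi - (chosen.some((c) => c.topic === i.topic) ? 0.5 : 0),
      }))
      .sort((a, b) => b.score - a.score);
    if (scored.length === 0 || scored[0].score <= 0) break;
    const best = scored[0].item;
    chosen.push(best);
    spent += best.tokens;
    remaining.splice(remaining.indexOf(best), 1);
  }
  return chosen;
}

const items: Item[] = [
  { id: "a", phi: 0.9, tokens: 400, topic: "auth" },
  { id: "b", phi: 0.8, tokens: 400, topic: "auth" }, // redundant with "a"
  { id: "c", phi: 0.4, tokens: 300, topic: "db" },
];
// "a" wins round one; "b" then loses to the redundancy penalty,
// so the lower-Φ but novel "c" is picked instead.
const pkg = compile(items, 1000);
```

Greedy selection is not globally optimal, but re-scoring after every pick is what lets the penalty steer the package toward coverage instead of duplication.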
Context Policy Engine
Declarative rules that govern context behavior. Auto-pin test files during TDD, suppress vendor directories, enforce token limits per file type, or mark outdated items — all configurable per project.
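The rules listed above might be declared like this. The rule schema and actions are illustrative assumptions, not LeanCTX's actual configuration format.

```typescript
interface Policy {
  match: (path: string) => boolean;
  action: "pin" | "suppress" | "limit";
  maxTokens?: number;
}

// First matching rule wins; anything unmatched is allowed through.
const policies: Policy[] = [
  { match: (p) => p.includes("vendor/"), action: "suppress" },          // drop vendored code
  { match: (p) => p.endsWith(".test.ts"), action: "pin" },              // auto-pin tests during TDD
  { match: (p) => p.endsWith(".md"), action: "limit", maxTokens: 500 }, // cap docs per file
];

function decide(path: string): Policy["action"] | "allow" {
  const rule = policies.find((r) => r.match(path));
  return rule ? rule.action : "allow";
}
```

Keeping the rules as data (rather than ad-hoc code) is what makes them configurable per project and auditable after the fact.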
Full CLI & MCP Access
Every CFT operation is available via CLI commands and MCP tools. Use lean-ctx control, lean-ctx plan, lean-ctx compile from the terminal, or ctx_control, ctx_plan, ctx_compile via MCP.
Context Packages
Package, share, and reuse accumulated project context. Export knowledge, graph data, gotchas, and session findings as portable bundles. Auto-load packages on session start for instant domain expertise.
Nine Pillars. One Runtime.
Everything between your code and the AI, handled.
Smart I/O
Deterministic reads, shell compression, search: 99% fewer tokens
Intelligence
Intent routing, mode selection, adaptive pipeline
Memory
Sessions, project knowledge, graphs, handoffs
Governance
Roles, budgets, SLOs, workflow gates, policies
Verification
Lean4 formal proofs, claim-based verification, Quality Levels 0-4
Integrations
MCP, HTTP, SDK, 29+ IDEs, Cloud, Team Server
Shared Sessions
Workspace & channel-based session sharing across agents
Context Bus
Real-time event stream for context changes via SSE
SDK & API
TypeScript SDK and REST API for external integrations
See it in action
LeanCTX sits between your AI tool and your codebase. Every file read, shell command, and search query flows through the Context Kernel: compressed, cached, and verified before reaching the model.
Every output carries proof
LeanCTX generates proof artifacts for every session: which files were read, what was compressed, which checks passed, and how tokens were spent. This makes AI work auditable, replayable, and trustworthy.
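A proof artifact of this kind might look like the following. The field names and shape are assumptions for illustration; the point is that every claim in the ledger is mechanically checkable.

```typescript
interface ProofArtifact {
  sessionId: string;
  filesRead: string[];
  compressed: { path: string; before: number; after: number }[];
  checksPassed: string[];
  tokensSpent: number;
}

const artifact: ProofArtifact = {
  sessionId: "s-001",
  filesRead: ["src/auth.ts"],
  compressed: [{ path: "src/auth.ts", before: 2400, after: 180 }],
  checksPassed: ["no-hallucinated-paths", "no-secret-leaks"],
  tokensSpent: 180,
};

// Auditing a session means re-checking the ledger, e.g. that the
// token spend matches the sum of delivered (post-compression) sizes.
const delivered = artifact.compressed.reduce((n, c) => n + c.after, 0);
```

Because the artifact records inputs, transformations, and checks together, a session can be replayed and its claims re-verified without trusting the original run.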
Ready to get started?
Install lean-ctx in 60 seconds, auto-configure your editor, and start saving tokens immediately. No cloud, no config files to write manually.
See how to get started. Give your AI the context it deserves.