Context OS

The Context OS for AI Development

The intelligent layer between your AI and your code.
Nine pillars. One runtime. LeanCTX manages the complete lifecycle of AI context, from file reads to verified outputs.

The Problem

AI agents lose context constantly

Every AI coding agent faces the same fundamental challenges: re-reading entire files when only a function signature is needed, parsing raw shell output that could be compressed by 95%, and forgetting everything the moment a session ends. The result is wasted tokens, slow responses, and unreliable outputs.

Without LeanCTX → With LeanCTX
Read entire file (4,200 tokens) → Map mode: 180 tokens (96% saved)
Raw shell output (1,800 tokens) → Compressed: 42 tokens (98% saved)
Context lost between sessions → Sessions + knowledge persist forever
Unverified AI output shipped → Every output verified before delivery
Context OS

What is a Context OS?

A Context OS is the infrastructure layer between your AI tools and your codebase. It controls what files are read, how shell output is compressed, what knowledge is remembered across sessions, and whether the final output meets quality standards. Think of it like an operating system, but for AI context instead of hardware resources.

[Diagram: AI Agent → LeanCTX Context OS (I/O · Intelligence · Memory · Verify) → Your Code & Tools]
Pipeline

How It Works

Every context request flows through LeanCTX's graph-powered deterministic pipeline. The system classifies intent, scores relevance with Multi-Edge BFS and RRF Fusion, compresses with mode-specific algorithms, and verifies outputs before delivery. Every step is reproducible and auditable. A code sketch of the full flow follows the six steps below.

01

Input

Receives file reads, shell commands, and search queries from any AI tool via MCP or HTTP.

02

Intent

Classifies the task type and selects the optimal processing strategy for each request.

03

Relevance

Filters content to only task-relevant information using AST analysis, entropy scoring, and Multi-Edge Graph traversal across imports, calls, type references, and test links.

04

Compress

Applies intelligent compression with mode-specific algorithms, caching, and delta encoding.

05

Verify

Checks outputs for hallucinated paths, broken imports, secret leaks, and policy violations.

06

Deliver

Returns compressed, verified context to the AI tool via MCP, HTTP API, or SDK.
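To make the six stages concrete, here is a minimal TypeScript sketch of the pipeline as pure functions chained together. Every type, function, and heuristic below is an illustrative assumption, not LeanCTX's actual internals.

// Illustrative pipeline sketch; names and heuristics are assumptions.
type CtxRequest = { kind: "read" | "shell" | "search"; payload: string };
type Scored = { item: string; relevance: number };

function classifyIntent(req: CtxRequest): "map" | "full" {
  // 02 Intent: a real classifier inspects the task; here we branch on kind.
  return req.kind === "read" ? "map" : "full";
}

function scoreRelevance(req: CtxRequest): Scored[] {
  // 03 Relevance: stand-in for Multi-Edge BFS + RRF Fusion over the code graph.
  return [{ item: req.payload, relevance: 1.0 }];
}

function compress(items: Scored[], mode: string): string {
  // 04 Compress: mode-specific compression, reduced here to a labeled join.
  return items.map((s) => `[${mode}] ${s.item}`).join("\n");
}

function verify(output: string): string {
  // 05 Verify: a real verifier checks paths, imports, secrets, and policy.
  if (output.includes("-----BEGIN")) throw new Error("secret leak blocked");
  return output;
}

function deliver(req: CtxRequest): string {
  // 01 Input through 06 Deliver: deterministic, so the same request replays identically.
  return verify(compress(scoreRelevance(req), classifyIntent(req)));
}

console.log(deliver({ kind: "read", payload: "src/auth.ts" }));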

Integration Modes

One Tool, Three Ways to Connect

lean-ctx automatically selects the optimal integration mode for each agent. CLI-Redirect eliminates MCP schema overhead entirely, Hybrid combines the best of both, and Full MCP provides maximum tool access. A rough sketch of the Hybrid split follows the three modes below.

CLI-Redirect
Default for Cursor, Gemini CLI
Zero MCP overhead. The agent calls lean-ctx directly via shell — fastest mode with full compression.
lean-ctx read src/auth.ts -m map
Hybrid
For Codex, Windsurf, Amp, Antigravity
MCP for cached reads (13 tokens), CLI for shell commands and searches — best of both worlds.
MCP cache + CLI shell/search
Full MCP
For JetBrains, Copilot, Cline
All 58 tools via MCP protocol with lazy tool set — ideal for agents that require MCP.
58 tools via MCP + lazy tool set
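As a rough illustration of the Hybrid split, the sketch below routes cached reads through an MCP call and everything else through the CLI binary. The routing rule, the callMcp helper, and the CLI subcommand names are all assumptions made for illustration.

// Hypothetical Hybrid-mode router; subcommand names are assumptions.
import { execFileSync } from "node:child_process";

type Op = { kind: "read" | "shell" | "search"; arg: string };

function callMcp(tool: string, arg: string): string {
  // Placeholder for an MCP tool call; cached reads cost ~13 tokens per the docs.
  return `mcp:${tool}("${arg}")`;
}

function route(op: Op): string {
  if (op.kind === "read") return callMcp("ctx_read", op.arg); // cached reads stay on MCP
  // Shell commands and searches bypass MCP and hit the CLI directly.
  return execFileSync("lean-ctx", [op.kind, op.arg], { encoding: "utf8" });
}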
Background Daemon

Always-On Context Runtime

The lean-ctx daemon runs as a background service reachable over a Unix Domain Socket. It provides persistent session state, instant cache hits, and automatic startup during setup. On update, the daemon restarts automatically with the new binary. Stale PID and socket files are cleaned up proactively, and all connections have built-in timeouts, so no manual management is needed. A client sketch follows the status output below.

$ lean-ctx serve --status
daemon running (PID 4139)
socket /tmp/lean-ctx.sock
uptime 2h 14m
sessions 3 active
cache 247 entries (hit rate 94.2%)
memory 12.4 MB
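For a sense of what talking to the daemon looks like, here is a minimal Node/TypeScript client that connects to the Unix Domain Socket with its own timeout. The socket path matches the status output above; the JSON request format is an assumption, not the daemon's real wire protocol.

// Minimal UDS client sketch; the request format is hypothetical.
import { createConnection } from "node:net";

const sock = createConnection({ path: "/tmp/lean-ctx.sock" });
sock.setTimeout(2000); // the daemon enforces timeouts; clients should too

sock.on("connect", () => {
  sock.write(JSON.stringify({ op: "status" }) + "\n"); // hypothetical request
});
sock.on("data", (buf) => {
  console.log("daemon replied:", buf.toString());
  sock.end();
});
sock.on("timeout", () => {
  console.error("no reply within 2s; is the daemon running?");
  sock.destroy();
});
sock.on("error", (err) => console.error("connection failed:", err.message));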
Agent Support

29+ Agents, Automatically Configured

lean-ctx detects installed agents and configures the optimal integration mode for each. CLI-Redirect for agents with shell access, Hybrid for mixed environments, Full MCP for protocol-only agents.

Agent Setup
Cursor lean-ctx init --agent cursor
Claude Code lean-ctx init --agent claude
Codex lean-ctx init --agent codex
OpenCode lean-ctx init --agent opencode
Gemini CLI lean-ctx init --agent gemini
CRUSH lean-ctx init --agent crush
Hermes lean-ctx init --agent hermes
Pi lean-ctx init --agent pi
Qoder lean-ctx init --agent qoder
Windsurf lean-ctx init --agent windsurf
Copilot lean-ctx init --agent copilot
Amp lean-ctx init --agent amp
Cline lean-ctx init --agent cline
Roo Code lean-ctx init --agent roo
Kiro lean-ctx init --agent kiro
Antigravity lean-ctx init --agent antigravity
Amazon Q lean-ctx init --agent amazonq
Qwen lean-ctx init --agent qwen
Trae lean-ctx init --agent trae
Verdent lean-ctx init --agent verdent
JetBrains lean-ctx init --agent jetbrains
QoderWork lean-ctx init --agent qoderwork
VS Code lean-ctx init --agent vscode
Zed lean-ctx init --agent zed
Neovim lean-ctx init --agent neovim
Emacs lean-ctx init --agent emacs
Sublime Text lean-ctx init --agent sublime
Aider lean-ctx init --agent aider
Continue lean-ctx init --agent continue
Context Field Theory

Mathematically Founded Context Selection

Every context item has a measurable potential value. LeanCTX uses Context Field Theory (CFT) to compute which files, functions, and knowledge facts belong in your AI's attention window — and which don't.

Context Potential Φ

Φ(i,t) = w_R · R(i,t) + w_S · S(i) + w_G · G(i,t) + w_H · H(i) − w_C · C(i,v) − w_D · D(i, selected)

The Φ function scores every context item in real time. Relevance, staleness, graph centrality, history, cost, and redundancy are combined into a single ranking score; the terms are listed below, followed by a short sketch.

R — Task relevance score
S — Structural importance (graph centrality)
G — Recency gradient (time decay)
H — Historical access frequency
C — Token cost for the current model
D — Redundancy with already selected items
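Read as code, Φ is a plain weighted sum. The sketch below transcribes the formula directly; the weights and example term values are illustrative assumptions, and in the real system R, G, C, and D also depend on the task, time, model, and current selection.

// Direct transcription of Φ(i,t); weights and inputs are illustrative.
type Terms = { R: number; S: number; G: number; H: number; C: number; D: number };
const w = { R: 0.35, S: 0.15, G: 0.1, H: 0.1, C: 0.2, D: 0.1 }; // assumed weights

function phi(t: Terms): number {
  // Positive terms pull an item into context; cost C and redundancy D push it out.
  return w.R * t.R + w.S * t.S + w.G * t.G + w.H * t.H - w.C * t.C - w.D * t.D;
}

// A fresh, relevant, central item with low token cost ranks high:
console.log(phi({ R: 0.9, S: 0.7, G: 0.8, H: 0.4, C: 0.2, D: 0.1 })); // ≈ 0.49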

Context Handles

Sparse, lazy references to context items. Instead of loading full files, agents work with lightweight handles like @F1 or @K3 that expand on demand — saving tokens until content is actually needed.
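A handle can be as small as an ID plus a loader that runs on first access. The class below is a generic lazy-handle pattern sketched for illustration, not LeanCTX's actual implementation.

// Generic lazy handle: cheap to pass around, loads content only on expand().
class Handle {
  private cached?: string;
  constructor(readonly id: string, private load: () => string) {}
  expand(): string {
    this.cached ??= this.load(); // pay the token cost once, on demand
    return this.cached;
  }
}

const f1 = new Handle("@F1", () => "contents of src/auth.ts"); // stand-in loader
console.log(f1.id);       // "@F1" costs almost nothing to mention
console.log(f1.expand()); // full content, loaded only now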

Context Overlays

Reversible mutations on context state. Pin critical files, suppress noise, boost priority, or mark items as stale — all without modifying the source. Overlays stack and can be undone.
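Because overlays are reversible and stack, a stack of inverse operations is a natural way to model them. This minimal sketch assumes a pin overlay; the state shape and operation names are illustrative.

// Overlay-stack sketch: each mutation records how to undo itself.
type OverlayState = Map<string, { pinned: boolean; boost: number }>;

const undoStack: Array<(s: OverlayState) => void> = [];

function pin(s: OverlayState, id: string): void {
  const item = s.get(id) ?? { pinned: false, boost: 0 };
  const wasPinned = item.pinned;
  s.set(id, { ...item, pinned: true });
  // Push the inverse: restore the previous pinned flag. Source files never change.
  undoStack.push((st) => st.set(id, { ...st.get(id)!, pinned: wasPinned }));
}

function undo(s: OverlayState): void {
  undoStack.pop()?.(s); // pop and apply the most recent inverse
}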

Context Compiler

Given a token budget and a task description, the compiler selects the optimal subset of context items using Φ-ranked greedy selection with redundancy penalties. The result is a minimal, verified context package.
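Φ-ranked greedy selection under a budget is easy to sketch: repeatedly take the highest-scoring item that still fits, discounting candidates for overlap with what was already taken. The scoring, costs, and overlap function below are illustrative assumptions.

// Greedy budgeted selection sketch with a redundancy penalty.
type Candidate = { id: string; phi: number; tokens: number };

function compileContext(
  items: Candidate[],
  budget: number,
  overlap: (a: Candidate, b: Candidate) => number,
): Candidate[] {
  const selected: Candidate[] = [];
  const pool = [...items];
  let spent = 0;

  // Φ discounted by the worst overlap with anything already selected.
  const effective = (i: Candidate): number =>
    i.phi - selected.reduce((m, s) => Math.max(m, overlap(i, s)), 0);

  while (pool.length > 0) {
    pool.sort((a, b) => effective(b) - effective(a));
    const best = pool.shift()!;
    if (effective(best) <= 0 || spent + best.tokens > budget) continue; // skip, don't stop
    selected.push(best);
    spent += best.tokens;
  }
  return selected;
}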

Context Policy Engine

Declarative rules that govern context behavior. Auto-pin test files during TDD, suppress vendor directories, enforce token limits per file type, or mark outdated items — all configurable per project.
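Declarative here means the rules are data, not code: a matcher plus an action. The rule shape and field names below are hypothetical, meant only to show the flavor of such a policy.

// Hypothetical policy rules: match items by glob, apply an overlay-style action.
type PolicyRule =
  | { match: string; action: "pin" | "suppress" }
  | { match: string; action: "limit"; maxTokens: number };

const policy: PolicyRule[] = [
  { match: "**/*.test.ts", action: "pin" },              // auto-pin tests during TDD
  { match: "vendor/**", action: "suppress" },            // never surface vendor code
  { match: "**/*.md", action: "limit", maxTokens: 500 }, // cap docs per file type
];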

Full CLI & MCP Access

Every CFT operation is available via CLI commands and MCP tools. Use lean-ctx control, lean-ctx plan, lean-ctx compile from the terminal, or ctx_control, ctx_plan, ctx_compile via MCP.

Context Packages

Package, share, and reuse accumulated project context. Export knowledge, graph data, gotchas, and session findings as portable bundles. Auto-load packages on session start for instant domain expertise.
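A portable bundle needs little more than a manifest tying its pieces together. The shape below is an assumption for illustration; the real package format may differ.

// Hypothetical manifest for a portable context package.
interface ContextPackage {
  name: string;
  version: string;
  knowledge: string[]; // distilled facts and gotchas
  graph: string;       // path to serialized code-graph data
  sessions: string[];  // exported session findings
  autoload: boolean;   // load on session start for instant domain expertise
}

const pkg: ContextPackage = {
  name: "payments-domain",
  version: "1.0.0",
  knowledge: ["refund webhooks must be idempotent"],
  graph: "graph/payments.bin",
  sessions: ["refund-bug-findings.json"],
  autoload: true,
};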

Live Demo

See it in action

LeanCTX sits between your AI tool and your codebase. Every file read, shell command, and search query flows through the Context Kernel: compressed, cached, and verified before reaching the model.

ctx_read - map
ctx_read({ path: "src/lib/auth.ts", mode: "map" })
exports authenticate(), validateToken(), refreshSession()
deps jsonwebtoken, bcrypt, redis
lines 247
original 4,200 tokens
compressed 180 tokens (96% saved)
cached 13 tokens on re-read
58 MCP Tools
95+ Shell Patterns
29+ Integrations
99% Token Savings
Verification

Every output carries proof

LeanCTX generates proof artifacts for every session: which files were read, what was compressed, which checks passed, and how tokens were spent. This makes AI work auditable, replayable, and trustworthy.
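A proof artifact is essentially a structured receipt for the session. The record below sketches what such an artifact might contain; all field names are assumed for illustration.

// Hypothetical proof-artifact shape: a per-session audit record.
interface ProofArtifact {
  sessionId: string;
  filesRead: { path: string; mode: string; tokens: number }[];
  compression: { originalTokens: number; deliveredTokens: number };
  checks: { name: string; passed: boolean }[]; // path, import, secret, policy checks
  replayable: boolean; // deterministic pipeline: same inputs, same outputs
}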

Ready to get started?

Install lean-ctx in 60 seconds, auto-configure your editor, and start saving tokens immediately. No cloud, no config files to write manually.

See how to get started

Give your AI the context it deserves.

Nine pillars. One runtime. LeanCTX manages the complete lifecycle of AI context, from file reads to verified outputs.