Replayability
How to make agent runs reproducible: stable inputs, explicit policies, session persistence, and verification gates.
What must stay stable
Replayability starts with stable inputs: the tool graph is only as deterministic as the environment it runs in.
- Same project root + allowed paths
- Same config & memory policy
- Same code + dependencies (lockfiles)
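The "same inputs" requirement becomes checkable if each run records a fingerprint of those inputs. A minimal sketch using stand-in files and `sha256sum` (a generic technique, not a helper that lean-ctx ships):

```shell
# Minimal sketch: fingerprint the inputs that must stay stable before a run.
# File names and contents are illustrative placeholders.
set -eu
workdir=$(mktemp -d)
printf 'policy = "balanced"\n' > "$workdir/config.toml"   # stand-in config
printf 'lockfile-v1\n' > "$workdir/Cargo.lock"            # stand-in lockfile
# Concatenate in a fixed order so the fingerprint is deterministic.
fingerprint=$(cat "$workdir/Cargo.lock" "$workdir/config.toml" | sha256sum | cut -d' ' -f1)
echo "run fingerprint: $fingerprint"
```

Store the fingerprint alongside the session artifact; two runs with different fingerprints are not directly comparable.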
Policies as a contract
Memory and verification settings define what is persisted and which checks must hold.
```toml
# ~/.lean-ctx/config.toml
[memory]
policy = "balanced"

[verification]
enabled = true
```

Tip: keep policy changes versioned and reviewable.
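Because the config is a contract, CI can refuse to run when verification is off. A hedged sketch (the key name comes from the example above; the guard itself is not a built-in lean-ctx command, and a real version would point at the checked-in config path):

```shell
# Sketch: fail fast if the versioned config disables verification.
set -eu
config=$(mktemp)   # stand-in for the checked-in config file
printf '[memory]\npolicy = "balanced"\n\n[verification]\nenabled = true\n' > "$config"
if grep -q '^enabled = true' "$config"; then
  gate=ON
else
  gate=OFF
fi
echo "verification gate: $gate"
[ "$gate" = ON ] || exit 1   # hard-fail the pipeline when disabled
```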
Session artifacts
Sessions capture what happened: tool calls, memory writes, relations, and outputs.
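One way to audit such a trail is to treat the session as an append-only event log and count events by type. A hypothetical sketch: the JSONL layout and field names below are illustrative assumptions, not lean-ctx's documented on-disk format.

```shell
# Hypothetical sketch: a session modeled as an append-only JSONL event log.
set -eu
log=$(mktemp)
cat > "$log" <<'EOF'
{"event":"tool_call","tool":"ctx_session"}
{"event":"memory_write","key":"decision"}
{"event":"tool_call","tool":"ctx_knowledge"}
EOF
# Count how many tool calls the session recorded.
calls=$(grep -c '"event":"tool_call"' "$log")
echo "tool calls recorded: $calls"
```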
```shell
# Start a server with a fixed project root
lean-ctx serve --host 127.0.0.1 --port 8080 --project-root /path/to/repo
```

Example tool calls that leave an audit trail:

```js
ctx_session("load", { id: "..." })
ctx_knowledge("remember", { category: "...", key: "...", value: "..." })
```

CI gates
Treat formatting, clippy, tests, and the verification checks as non-optional quality gates.
```shell
# CI gates / local checks
cd rust
cargo fmt -- --check
cargo clippy --all-features -- -D warnings

# SSOT drift gate (manifest must be up-to-date)
cargo run -q --bin gen_mcp_manifest
git diff --exit-code ../website/generated/mcp-tools.json

# Core tests (deterministic + bounded)
cargo test --all-features -- --test-threads=1

# Lightweight regression checks (stable thresholds)
cargo test -q --test savings_verification

# Proof artifact (machine-readable attestation, no secrets)
cargo run -q --bin lean-ctx -- proof --summary --no-write
```

Cookbook: end-to-end examples
Run real integrations against a running server (no mock data).
```shell
cd cookbook
npm ci
npm run memory-playground
npm run graph-explorer
```
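Since the cookbook assumes a running server, a small guard before the `npm run` commands avoids confusing failures. A sketch only: the `/health` path is an assumption about the server, not a documented endpoint, so substitute whatever your instance actually exposes.

```shell
# Sketch: check server reachability before running cookbook examples.
# The /health path is hypothetical; adjust to your server.
server_up() {
  curl -sf --max-time 2 "http://127.0.0.1:8080/health" >/dev/null 2>&1
}
if server_up; then
  status="up"
else
  status="down"
fi
echo "server status: $status"
```

When the status is `down`, start the server first (`lean-ctx serve ...` as shown above) rather than letting the cookbook scripts fail mid-run.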