Stop re-explaining your project to Claude every morning.
You open Claude Code on Tuesday and it asks the same three questions it asked on Monday. You paste the same paragraph about your stack. The model burns 2,000 tokens on re-onboarding before it writes a single line of useful code.
CTXone fixes this with an MCP server that sits between your AI coding tool and a local memory graph. Every fact you write once — "we picked SQLite over Postgres because we need zero-config", "BSL-1.1 for all new repos", "don't touch the migrations directory without checking with Priya first" — survives forever, across sessions, across branches, across tool switches.
The three commands that change everything
# 1. Install once
curl -sSL https://raw.githubusercontent.com/ctxone/ctxone/main/install.sh | sh
# 2. Wire into every AI tool you have, in one go
ctx init
# 3. Tell it something you'd otherwise have to repeat tomorrow
ctx remember "We use BSL-1.1 for all new repos" \
  --importance high --context licensing

ctx init auto-detects Claude Code, Cursor, VS Code, Codex, and Gemini. For each one, it writes the right MCP config file with a --agent-id flag so ctx blame can tell you later which tool wrote which fact. Next time you open any of those tools, the memory is already there.
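As a rough sketch of what that wiring produces: an MCP config entry for Claude Code might look something like the JSON below. The server name, command, and "serve" subcommand are assumptions for illustration; only the --agent-id flag is described above.

```json
{
  "mcpServers": {
    "ctxone": {
      "command": "ctx",
      "args": ["serve", "--agent-id", "claude-code"]
    }
  }
}
```

Each tool gets its own --agent-id value, which is what lets ctx blame attribute facts to the tool that wrote them.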
What changes
- Onboarding cost goes to zero. The model doesn't need to re-learn your stack because it loads pinned context automatically on every call.
- 5× fewer tokens per turn, day one. Recall returns only the facts relevant to the current question, not the entire memory file. The ratio climbs as your graph grows.
- You get a blame log. ctx blame <path> shows exactly which tool (or which of your colleagues) wrote which fact, when, and why.
- You can branch experiments. ctx branch experiment --from main lets you try a different set of facts without polluting your main memory.
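Put together, a blame-then-branch session might look like the sketch below. The path argument and the ctx switch command are assumptions; only ctx blame <path> and ctx branch --from are shown above.

```
# Who wrote the licensing facts, when, and from which tool?
ctx blame licensing

# Fork the memory graph to try a different set of facts
ctx branch experiment --from main

# hypothetical: a command to activate the branch is not shown above
ctx switch experiment
```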
What it looks like
After a week of use, a typical recall looks like this:
$ ctx recall "deployment strategy"
[pinned] Vision: ship a single binary, no orchestration
[fact] We deploy via docker compose, not k8s
[fact] Staging at staging.example.com, prod at example.com
[fact] Rollback: docker compose down; git checkout v0.X; up
[fact] Blue/green not worth it at this scale
_ctxone_stats: {
"ctx_tokens_sent": 180,
"ctx_tokens_estimated_flat": 900,
"ctx_savings_ratio": 5.0
}

ctx_savings_ratio is the live number: this particular recall sent 180 tokens instead of the 900 it would have cost to dump the relevant slice of the memory file. The ratio starts around 5× on a fresh graph and climbs as you write more facts, because every new fact makes the flat baseline bigger without making the targeted recall any larger. On mature graphs we routinely see double-digit savings; we just don't want to promise that until we can measure it under your workload.
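The arithmetic behind the ratio is just the flat baseline divided by what was actually sent. A minimal sketch (the field names come from the stats payload above; the function itself is not part of ctxone):

```python
def savings_ratio(tokens_sent: int, tokens_estimated_flat: int) -> float:
    """Ratio of what a flat memory-file dump would have cost
    to what the targeted recall actually sent."""
    return tokens_estimated_flat / tokens_sent

# The recall above: 180 tokens sent vs. a 900-token flat dump.
print(savings_ratio(180, 900))  # 5.0
```

As the graph grows, tokens_estimated_flat grows with it while tokens_sent stays bounded by the size of the relevant facts, which is why the ratio climbs over time.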
Ready to try it?
$ curl -sSL https://raw.githubusercontent.com/ctxone/ctxone/main/install.sh | sh

Next: 5-minute quickstart · Full integration guide · The math behind the ratio