Model-agnostic persistent memory for LLM workflows. Facts, decisions, and context transfer seamlessly between Claude, Codex, Gemini — and OpenClaw agents.
You spend an hour with Claude building a product. You switch to Codex to iterate. Codex has no idea what you just decided. You switch to Gemini for a review. Gemini starts from scratch.
Every model handoff is a cold start. Every context switch loses work. Every session begins with "let me re-explain the entire project."
Contynu fixes this.
Memory flows automatically between Claude, Codex, and Gemini. Each model receives context in its optimal format: XML, Markdown, or structured text.
Give OpenClaw agents permanent memory. Survives compaction, model switches, and session boundaries. Addresses issues #5429, #25947, and more.
LLMs search the full memory archive on demand via MCP. Keyword search, time windows, kind filtering, and paginated results. Auto-registers with every CLI.
Compact project brief (~500 tokens) always in context. Deep recall via MCP when needed. 80% less context overhead than dumping everything into the prompt.
Install contynu, run it once — all your prior Codex and Gemini conversations are instantly searchable. Also imports Claude JSONL, ChatGPT exports, and plain text.
All data stays on your machine. Replace `claude` with `contynu claude`. Auto-detection, auto-registration, auto-hydration. Works immediately.
One install. One command prefix. All your models remember everything.
$ curl -fsSL https://github.com/alentra-dev/contynu/releases/latest/download/install.sh | sh
PS> irm https://github.com/alentra-dev/contynu/releases/latest/download/install.ps1 | iex
Prebuilt binaries for Linux, macOS, and Windows (x86_64 + ARM) — view all downloads
# Instead of running your LLM directly...
$ contynu claude # wraps Claude Code with persistent memory
$ contynu codex # wraps Codex CLI — picks up where Claude left off
$ contynu gemini # wraps Gemini CLI — has full context from both
# The MCP server auto-registers — LLMs can query directly:
> Use the search_memory tool to find what we decided about authentication
✓ Found: "Use HMAC-SHA256 for token signing" (from Claude session, importance: 0.85)
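MCP tool calls travel as JSON with a tool name plus arguments. A rough sketch of what the query above might look like on the wire: the `search_memory` tool name comes from the demo, but the argument names (`query`, `kind`, `after`, `limit`) are placeholders, not Contynu's documented schema.

```json
{
  "name": "search_memory",
  "arguments": {
    "query": "authentication",
    "kind": "decision",
    "after": "2025-01-01",
    "limit": 10
  }
}
```

The keyword, kind filter, time window, and limit correspond to the search capabilities listed above (keyword search, kind filtering, time windows, pagination).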
The contynu-openclaw plugin gives every OpenClaw agent permanent memory that survives compaction, model switches, and session boundaries. Zero changes to OpenClaw's codebase.
OpenClaw's context compaction is lossy — critical decisions, safety constraints, and project context get silently destroyed. Users have lost days of work (#5429). The dreaming system crashes (#61951). There's no session memory between restarts (#39885).
contynu checkpoints before compaction fires, writes importance-ranked facts to MEMORY.md, and gives agents MCP tools to search the full history on demand. One setup command, then it works silently in the background.
$ contynu openclaw setup
$ npm install -g contynu-openclaw
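The checkpoint-then-distill flow described above can be pictured in a few lines. This is a conceptual illustration only, not Contynu's implementation: the function names, the word-count token estimate, and the MEMORY.md layout are all assumptions; only the 0-to-1 importance scores (like the 0.85 in the demo earlier) come from this page.

```python
# Conceptual sketch of a pre-compaction checkpoint: rank remembered facts
# by importance and persist the best ones that fit a token budget.
# All names and heuristics here are illustrative, not Contynu's actual code.

def checkpoint(facts, budget_tokens=500):
    """Keep the highest-importance facts that fit within the token budget."""
    kept, used = [], 0
    for fact in sorted(facts, key=lambda f: f["importance"], reverse=True):
        cost = len(fact["text"].split())  # crude word-count token estimate
        if used + cost > budget_tokens:
            continue
        kept.append(fact)
        used += cost
    return kept

def render_memory_md(facts):
    """Render kept facts as a simple importance-annotated Markdown list."""
    lines = ["# MEMORY.md"]
    for f in facts:
        lines.append(f"- ({f['importance']:.2f}) {f['text']}")
    return "\n".join(lines)

facts = [
    {"text": "Use HMAC-SHA256 for token signing", "importance": 0.85},
    {"text": "Ran the linter", "importance": 0.10},
]
print(render_memory_md(checkpoint(facts, budget_tokens=6)))
# prints:
# # MEMORY.md
# - (0.85) Use HMAC-SHA256 for token signing
```

The point of the sketch: low-importance facts are dropped first when the budget is tight, so what survives compaction is the decision-grade material rather than incidental chatter.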
Your AI tools should remember what happened. Contynu makes sure they do.
We're opening contynu for Early Access. Installation takes 30 seconds and works immediately with Claude, Codex, Gemini, and OpenClaw. We want your feedback.