Open Source · Early Access

Memory that persists

Model-agnostic persistent memory for LLM workflows. Facts, decisions, and context transfer seamlessly between Claude, Codex, Gemini — and OpenClaw agents.

$ curl -fsSL https://...contynu/.../install.sh | sh

LLMs forget everything between sessions

You spend an hour with Claude building a product. You switch to Codex to iterate. Codex has no idea what you just decided. You switch to Gemini for a review. Gemini starts from scratch.

Every model handoff is a cold start. Every context switch loses work. Every session begins with "let me re-explain the entire project."

Contynu fixes this.

1. Claude decides on JWT auth with HMAC-SHA256; contynu captures the decision with provenance.
2. Codex reads the decision and implements the auth middleware; contynu captures the implementation details.
3. Gemini reviews with full context from both sessions.

Built for how you actually work

Cross-Model Transfer

Memory flows seamlessly between Claude, Codex, and Gemini. Each model receives context in its optimal format — XML, Markdown, or structured text.
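For example, the same remembered decision could be serialized differently for each model. This is an illustrative sketch only; the actual tags and layout contynu emits are not shown here:

```
XML (Claude):       <memory kind="decision">Use HMAC-SHA256 for token signing</memory>
Markdown (Codex):   - **Decision:** Use HMAC-SHA256 for token signing
Structured text:    DECISION | auth | Use HMAC-SHA256 for token signing
```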

🐙 OpenClaw Plugin

Give OpenClaw agents permanent memory. Survives compaction, model switches, and session boundaries. Addresses issues #5429, #25947, and more.

🔍 MCP Memory Recall

LLMs search the full memory archive on demand via MCP. Keyword search, time windows, kind filtering, and paginated results. Auto-registers with every CLI.
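As an illustration, a query against the search_memory tool might carry arguments like these. The parameter names below are assumptions matching the advertised features (keyword search, time windows, kind filtering, pagination), not the exact schema:

```json
{
  "tool": "search_memory",
  "arguments": {
    "query": "authentication token signing",
    "kind": "decision",
    "since": "2025-01-01",
    "limit": 10
  }
}
```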

Progressive Loading

Compact project brief (~500 tokens) always in context. Deep recall via MCP when needed. 80% less context overhead than dumping everything into the prompt.
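For a sense of what "compact project brief" means in practice, the always-loaded context might look something like the sketch below. The contents and layout are illustrative, not the actual format:

```markdown
# Project Brief (auto-generated)
- Stack: Node.js API, PostgreSQL
- Auth: JWT with HMAC-SHA256 token signing
- Current focus: auth middleware and token refresh flow
- Open question: refresh-token rotation policy
<!-- kept to roughly 500 tokens; full history reachable via MCP search -->
```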

📥 Auto-Import

Install contynu, run it once — all your prior Codex and Gemini conversations are instantly searchable. Also imports Claude JSONL, ChatGPT exports, and plain text.

🔒 Local-First & Zero Config

All data stays on your machine. Replace claude with contynu claude. Auto-detection, auto-registration, auto-hydration. Works immediately.

Get started in 30 seconds

One install. One command prefix. All your models remember everything.

Install — Linux / macOS
$ curl -fsSL https://github.com/alentra-dev/contynu/releases/latest/download/install.sh | sh
Install — Windows (PowerShell)
PS> irm https://github.com/alentra-dev/contynu/releases/latest/download/install.ps1 | iex

Prebuilt binaries for Linux, macOS, and Windows (x86_64 + ARM) — view all downloads

Use with any LLM CLI
# Instead of running your LLM directly...
$ contynu claude     # wraps Claude Code with persistent memory
$ contynu codex      # wraps Codex CLI — picks up where Claude left off
$ contynu gemini     # wraps Gemini CLI — has full context from both
Search memory from any model
# The MCP server auto-registers — LLMs can query directly:
> Use the search_memory tool to find what we decided about authentication
 Found: "Use HMAC-SHA256 for token signing" (from Claude session, importance: 0.85)

OpenClaw agents forget. contynu makes them remember.

The contynu-openclaw plugin gives every OpenClaw agent permanent memory that survives compaction, model switches, and session boundaries. Zero changes to OpenClaw's codebase.

The problem

OpenClaw's context compaction is lossy — critical decisions, safety constraints, and project context get silently destroyed. Users have lost days of work (#5429). The dreaming system crashes (#61951). There's no session memory between restarts (#39885).

The fix

contynu checkpoints before compaction fires, writes importance-ranked facts to MEMORY.md, and gives agents MCP tools to search the full history on demand. One setup command, then it works silently in the background.
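To make "importance-ranked facts" concrete, a MEMORY.md entry might look like this sketch. The field names and scores are illustrative assumptions; the 0.85 figure echoes the search example earlier on this page:

```markdown
## Decisions
- [importance: 0.85] Use HMAC-SHA256 for token signing (source: Claude session)
- [importance: 0.72] Auth middleware handles token verification (source: Codex session)

## Constraints
- [importance: 0.90] Never log raw tokens or secrets
```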

Setup (one time)
$ contynu openclaw setup
$ npm install -g contynu-openclaw
Early Access

Stop re-explaining your project

Your AI tools should remember what happened. Contynu makes sure they do.

We're opening contynu for Early Access. Installation takes 30 seconds, and it works immediately with Claude, Codex, Gemini, and OpenClaw. We want your feedback.