Persistent memory layer for AI coding agents.
One MCP server. Nine agents. Zero context loss.
中文文档 (Chinese Docs) · Quick Start · Features · How It Works · Full Setup Guide
AI coding agents forget everything between sessions. Switch IDEs and context is gone. Memorix gives every agent a shared, persistent memory — decisions, gotchas, and architecture survive across sessions and tools.
Session 1 (Cursor): "Use JWT with refresh tokens, 15-min expiry" → stored as 🟤 decision
Session 2 (Claude Code): "Add login endpoint" → finds the decision → implements correctly
No re-explaining. No copy-pasting. No vendor lock-in.
```sh
npm install -g memorix
```

Add to your agent's MCP config:
Cursor · .cursor/mcp.json
```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Claude Code

```sh
claude mcp add memorix -- memorix serve
```

Windsurf · ~/.codeium/windsurf/mcp_config.json

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

VS Code Copilot · .vscode/mcp.json

```json
{ "servers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Codex · ~/.codex/config.toml

```toml
[mcp_servers.memorix]
command = "memorix"
args = ["serve"]
```

Kiro · .kiro/settings/mcp.json

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Antigravity · ~/.gemini/antigravity/mcp_config.json

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"], "env": { "MEMORIX_PROJECT_ROOT": "/your/project/path" } } } }
```

OpenCode · ~/.config/opencode/config.json

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Gemini CLI · .gemini/settings.json

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Restart your agent. Done. No API keys, no cloud, no dependencies.
Note: Do NOT use `npx`: it re-downloads the package on every launch and can cause MCP startup timeouts. Use the global install.
| Category | Tools |
|---|---|
| Memory | memorix_store · memorix_search · memorix_detail · memorix_timeline — 3-layer progressive disclosure with ~10x token savings |
| Sessions | memorix_session_start · memorix_session_end · memorix_session_context — auto-inject previous context on new sessions |
| Knowledge Graph | create_entities · create_relations · add_observations · search_nodes · open_nodes · read_graph — MCP Official Memory Server compatible |
| Workspace Sync | memorix_workspace_sync · memorix_rules_sync · memorix_skills — migrate MCP configs, rules, and skills across 9 agents |
| Maintenance | memorix_retention · memorix_consolidate · memorix_export · memorix_import — decay scoring, dedup, backup |
| Dashboard | memorix_dashboard — web UI with D3.js knowledge graph, observation browser, retention panel |
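The tools in the table are ordinary MCP tools, so any MCP client invokes them with a standard JSON-RPC 2.0 `tools/call` request over stdio. A minimal sketch of such a payload for memorix_store — the JSON-RPC framing follows the MCP spec, but the `content`/`type` argument names are illustrative assumptions, not the server's documented schema:

```typescript
// Sketch of an MCP tools/call request for memorix_store.
// JSON-RPC 2.0 framing is per the MCP spec; the "arguments" shape
// (content/type) is a hypothetical example, not Memorix's schema.
const storeRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "memorix_store",
    arguments: {
      content: "Use JWT with refresh tokens, 15-min expiry",
      type: "decision",
    },
  },
};

// An MCP client writes this, newline-delimited, to the server's stdin.
console.log(JSON.stringify(storeRequest));
```

In practice your agent builds these requests for you; the sketch only shows what travels over the stdio transport.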
🎯 session-request · 🔴 gotcha · 🟡 problem-solution · 🔵 how-it-works · 🟢 what-changed · 🟣 discovery · 🟠 why-it-exists · 🟤 decision · ⚖️ trade-off
```sh
memorix hooks install
```

Captures decisions, errors, and gotchas automatically. Pattern detection in English and Chinese. Smart filtering (30s cooldown, skips trivial commands). Injects high-value memories at session start.
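The filtering described above can be pictured with a small sketch: a per-kind 30-second cooldown plus a skip-list of trivial commands. The class and method names here are invented for illustration and are not Memorix's actual hook API:

```typescript
// Illustrative sketch of hook-style capture filtering: a 30s cooldown
// per event kind plus a skip-list of trivial commands.
// HookFilter/shouldCapture are hypothetical names, not Memorix's API.
const COOLDOWN_MS = 30_000;
const TRIVIAL = new Set(["ls", "pwd", "cd", "clear"]);

class HookFilter {
  private lastCapture = new Map<string, number>();

  shouldCapture(kind: string, command: string, now: number): boolean {
    // Skip trivial commands entirely (matched on the first token).
    if (TRIVIAL.has(command.trim().split(/\s+/)[0])) return false;
    // Drop events of the same kind inside the cooldown window.
    const last = this.lastCapture.get(kind);
    if (last !== undefined && now - last < COOLDOWN_MS) return false;
    this.lastCapture.set(kind, now);
    return true;
  }
}

const filter = new HookFilter();
console.log(filter.shouldCapture("gotcha", "npm test", 0));      // true: first event
console.log(filter.shouldCapture("gotcha", "npm test", 10_000)); // false: within cooldown
console.log(filter.shouldCapture("gotcha", "ls -la", 60_000));   // false: trivial command
```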
BM25 full-text search works out of the box (~50MB RAM). Semantic search is opt-in to minimize resource usage:
```sh
# Enable semantic search (optional — requires 300-500MB RAM)
# Set in your MCP config env, or export before starting:
MEMORIX_EMBEDDING=fastembed     # ONNX, fastest (~300MB)
MEMORIX_EMBEDDING=transformers  # Pure JS (~500MB)
MEMORIX_EMBEDDING=off           # Default — BM25 only, minimal resources
```

Install the provider you chose:

```sh
npm install -g fastembed                  # for MEMORIX_EMBEDDING=fastembed
npm install -g @huggingface/transformers  # for MEMORIX_EMBEDDING=transformers
```

Both run 100% locally. Zero API calls.
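Provider selection amounts to one environment-variable read at startup. A sketch of how a server might branch on it — only the `MEMORIX_EMBEDDING` variable and its three values come from the docs above; the function itself is illustrative:

```typescript
// Sketch: choose a search backend from MEMORIX_EMBEDDING.
// Only the variable name and its values (fastembed/transformers/off)
// come from the docs; pickBackend is an illustrative stand-in.
type Backend = "bm25" | "bm25+fastembed" | "bm25+transformers";

function pickBackend(env: Record<string, string | undefined>): Backend {
  switch (env.MEMORIX_EMBEDDING) {
    case "fastembed":    return "bm25+fastembed";    // ONNX embeddings (~300MB)
    case "transformers": return "bm25+transformers"; // pure-JS embeddings (~500MB)
    default:             return "bm25";              // "off" or unset: BM25 only
  }
}

console.log(pickBackend({ MEMORIX_EMBEDDING: "fastembed" })); // bm25+fastembed
console.log(pickBackend({}));                                 // bm25
```

Defaulting to BM25-only keeps the failure mode cheap: an unset or misspelled value degrades to full-text search rather than crashing the server.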
```
┌─────────┐  ┌───────────┐  ┌────────────┐  ┌───────┐  ┌──────────┐
│ Cursor  │  │  Claude   │  │  Windsurf  │  │ Codex │  │ +4 more  │
│         │  │   Code    │  │            │  │       │  │          │
└────┬────┘  └─────┬─────┘  └─────┬──────┘  └───┬───┘  └────┬─────┘
     │             │              │             │           │
     └─────────────┴──────┬───────┴─────────────┴───────────┘
                          │  MCP (stdio)
                   ┌──────┴──────┐
                   │   Memorix   │
                   │  MCP Server │
                   └──────┬──────┘
                          │
          ┌───────────────┼───────────────┐
          │               │               │
   ┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
   │    Orama    │ │  Knowledge  │ │   Rules &   │
   │   Search    │ │    Graph    │ │  Workspace  │
   │ (BM25+Vec)  │ │ (Entities)  │ │    Sync     │
   └─────────────┘ └─────────────┘ └─────────────┘
                          │
                  ~/.memorix/data/
         (100% local, per-project isolation)
```
- Project isolation — auto-detected from `git remote`, scoped search by default
- Shared storage — all agents read/write the same `~/.memorix/data/`, cross-IDE by design
- Token efficient — 3-layer progressive disclosure: search → timeline → detail
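Project scoping like this is typically derived by normalizing the `git remote` URL into a stable key, so SSH and HTTPS remotes of the same repo land in the same scope. A sketch of one way to do it — the normalization rules here are assumptions, not Memorix's actual algorithm:

```typescript
// Sketch: derive a stable project key from a git remote URL so that
// SSH and HTTPS remotes of the same repo map to the same scope.
// The normalization rules below are assumptions, not Memorix's actual ones.
function projectKey(remote: string): string {
  return remote
    .replace(/^git@([^:]+):/, "$1/")  // git@host:org/repo -> host/org/repo
    .replace(/^https?:\/\//, "")      // strip the URL scheme
    .replace(/\.git$/, "")            // strip the .git suffix
    .toLowerCase();
}

console.log(projectKey("git@github.com:AVIDS2/memorix.git")); // github.com/avids2/memorix
console.log(projectKey("https://github.com/AVIDS2/memorix")); // github.com/avids2/memorix
```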
| | Mem0 | mcp-memory-service | Memorix |
|---|---|---|---|
| Agents | SDK-based | 13+ (MCP) | 9 agents (MCP) |
| Cross-agent workspace sync | — | — | MCP configs, rules, skills, workflows |
| Knowledge graph | — | Yes | Yes (MCP Official compatible) |
| Hybrid search | — | Yes | Yes (BM25 + vector) |
| Token-efficient retrieval | — | — | 3-layer progressive disclosure |
| Auto-memory hooks | — | — | Yes (multi-language pattern detection) |
| Memory decay | — | Yes | Yes (exponential + immunity) |
| Web dashboard | Cloud | Yes | Yes (D3.js graph) |
| Privacy | Cloud | Local | 100% local |
| Cost | Per-call | $0 | $0 |
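The "exponential + immunity" decay in the table can be pictured as a retention score that halves on a fixed half-life unless the memory is immune (e.g. a pinned decision). The half-life value and function shape here are illustrative assumptions, not Memorix's actual tuning:

```typescript
// Sketch of exponential memory decay with immunity: a memory's
// retention score halves every HALF_LIFE_DAYS unless it is immune.
// The 30-day half-life is an illustrative assumption.
const HALF_LIFE_DAYS = 30;

function retentionScore(ageDays: number, immune: boolean): number {
  if (immune) return 1;                           // immune memories never decay
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS); // exponential decay
}

console.log(retentionScore(0, false));  // 1
console.log(retentionScore(30, false)); // 0.5
console.log(retentionScore(90, true));  // 1 (immune)
```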
```sh
git clone https://github.com/AVIDS2/memorix.git
cd memorix && npm install
npm run dev    # watch mode
npm test       # 534 tests
npm run build  # production build
```

📚 Architecture · API Reference · Modules · Design Decisions
For AI systems: llms.txt · llms-full.txt
Built on ideas from mcp-memory-service, MemCP, claude-mem, and Mem0.
Built by AVIDS2 · Star ⭐ if it helps your workflow
