English: current page
Chinese docs: README.zh-CN.md
Contributing: CONTRIBUTING.md | Chinese contributing guide
Tutorial: docs/TUTORIAL.md | Chinese companion: docs/TUTORIAL.zh-CN.md
Memori-Vault is a local-first memory engine for personal and team knowledge. It combines semantic chunking, vector retrieval, and asynchronous Graph-RAG extraction on Ollama + SQLite, while keeping first-answer speed stable under background indexing.
- Local-first ingestion pipeline (`.md`/`.txt`) with watcher + semantic chunking.
- Structured retrieval pipeline with document routing, chunk retrieval, citations, and evidence output.
- Async indexing refactor:
  - fast path for searchable chunks first
  - deferred graph build in background queue
- Indexing strategy controls:
  - strategy: `continuous` | `manual` | `scheduled`
  - resource budget: `low` | `balanced` | `fast`
  - pause/resume + trigger reindex
- Settings center (right drawer):
  - UI language and AI answer language (separate)
  - model provider profiles (local Ollama / remote OpenAI-compatible)
  - watch folder switching
  - top-k retrieval control
  - personalization (font, size, theme)
- Search scope selector:
  - nested folder expand/collapse
  - multi-select files/folders
- Source cards:
  - markdown preview for `.md`
  - expand/collapse
  - open file location
- Local-first runtime and enterprise policy gates are implemented.
- Citation validity is currently strong in the checked-in offline regression corpus.
- Retrieval precision is not yet at a strong mixed-corpus bar.
- `core_docs` offline baseline: Top-1 = 0.6970 on 6 indexed documents
- `repo_mixed` offline baseline: Top-1 = 0.4773 on 11 indexed documents
- These are small checked-in regression baselines, not a validated 50k-document accuracy result.
- Live local-model validation is still blocked on local Ollama / embedding availability on the current machine.
Current posture:
- docs-only retrieval is usable as an internal baseline
- mixed-corpus retrieval should still be treated as beta/internal validation, not as a finished accuracy claim
Details: docs/RETRIEVAL_BASELINE.md
- Desktop mode:
  - Tauri shell + UI + IPC backend.
- Server mode:
  - `memori-server` exposes HTTP APIs for local/browser access and private deployment.
  - The current product experience is desktop-first; browser-facing UI support is still being aligned with the server runtime.
- Single-tenant private deployment for engineering organizations.
- Preview auth/session entry plus API RBAC (`viewer`/`user`/`operator`/`admin`).
- Admin APIs for health, metrics, policy, audit, reindex, pause/resume.
- Model governance: local-first with remote egress allowlist enforcement.
- Deployment assets included (`deploy/systemd`, env template, backup/restore scripts).
Current note:
- Enterprise deployment is available as a private deployment preview in v0.3.0.
- Auth/session flows are suitable for controlled internal environments first and will continue to harden in later releases.
Details: docs/enterprise.md
Workspace crates:
- `memori-vault`: watch/debounce/event stream
- `memori-parser`: parse/chunk
- `memori-storage`: SQLite + vector/graph/task metadata
- `memori-core`: orchestration, retrieval, indexing worker
- `memori-desktop`: Tauri commands and desktop lifecycle
- `memori-server`: Axum APIs
- `ui`: React + Vite + Tailwind v4 frontend
cargo fmt --all -- --check
cargo clippy --workspace -- -D warnings
cargo test --workspace
pnpm --dir ui run build

Desktop dev:
pnpm --dir ui run dev -- --host 127.0.0.1 --port 1420 --strictPort
cargo tauri dev -p memori-desktop

Server dev:
cargo run -p memori-server
pnpm --dir ui run dev -- --host 127.0.0.1 --port 1420 --strictPort

- Ollama local runtime is recommended for local provider mode.
- Remote provider mode is optional and user-configured.
- Enterprise policy can enforce `local_only` or remote allowlist mode.
- Legacy theme key `memori-theme-mode` is migration-only; the active key is `memori-theme`.
Apache License 2.0.