
FPSZ/Memori-Vault


Memori-Vault

English: current page
Chinese docs: README.zh-CN.md
Contributing: CONTRIBUTING.md (Chinese contributing guide also available)
Tutorial: docs/TUTORIAL.md | Chinese companion: docs/TUTORIAL.zh-CN.md

License: Apache 2.0 | Rust 1.85+ | CI

Memori-Vault is a local-first memory engine for personal and team knowledge. It combines semantic chunking, vector retrieval, and asynchronous Graph-RAG extraction on Ollama + SQLite, while keeping first-answer speed stable under background indexing.

Highlights (Current)

  • Local-first ingestion pipeline (.md / .txt) with watcher + semantic chunking.
  • Structured retrieval pipeline with document routing, chunk retrieval, citations, and evidence output.
  • Async indexing refactor:
    • fast path for searchable chunks first
    • deferred graph build in background queue
  • Indexing strategy controls:
    • continuous | manual | scheduled
    • resource budget low | balanced | fast
    • pause/resume + trigger reindex
  • Settings center (right drawer):
    • UI language and AI answer language (separate)
    • model provider profiles (local Ollama / remote OpenAI-compatible)
    • watch folder switching
    • top-k retrieval control
    • personalization (font, size, theme)
  • Search scope selector:
    • nested folder expand/collapse
    • multi-select files/folders
  • Source cards:
    • markdown preview for .md
    • expand/collapse
    • open file location

Current Validation Status

  • Local-first runtime and enterprise policy gates are implemented.
  • Citation validity is currently strong in the checked-in offline regression corpus.
  • Retrieval precision does not yet meet a strong mixed-corpus bar.
    • core_docs offline baseline: Top-1=0.6970 on 6 indexed documents
    • repo_mixed offline baseline: Top-1=0.4773 on 11 indexed documents
  • These are small checked-in regression baselines, not a validated 50k-document accuracy result.
  • Live local-model validation is still blocked on local Ollama / embedding availability on the current machine.
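For reference, Top-1 here is the fraction of queries whose top-ranked retrieved document is the expected one. A minimal sketch of that metric (illustrative only, not the repo's regression harness):

```rust
// Top-1 accuracy over a regression set: the fraction of queries whose
// top-ranked retrieved document matches the expected document.
// Illustrative helper only; the real harness is the checked-in offline
// regression suite.
fn top1_accuracy(expected: &[&str], top_ranked: &[&str]) -> f64 {
    assert_eq!(expected.len(), top_ranked.len());
    let hits = expected
        .iter()
        .zip(top_ranked)
        .filter(|(e, g)| e == g)
        .count();
    hits as f64 / expected.len() as f64
}

fn main() {
    // 2 of 3 queries rank the right document first -> 0.666...
    let acc = top1_accuracy(&["a.md", "b.md", "c.md"], &["a.md", "b.md", "x.md"]);
    assert!((acc - 2.0 / 3.0).abs() < 1e-9);
    println!("Top-1 = {acc:.4}");
}
```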

Current posture:

  • docs-only retrieval is usable as an internal baseline
  • mixed-corpus retrieval should still be treated as beta/internal validation, not as a finished accuracy claim

Details: docs/RETRIEVAL_BASELINE.md

Runtime Modes

  1. Desktop mode:
     • Tauri shell + UI + IPC backend.
  2. Server mode:
     • memori-server exposes HTTP APIs for local/browser access and private deployment.
     • The current product experience is desktop-first; browser-facing UI support is still being aligned with the server runtime.

Enterprise (Private Deployment v1 Preview)

  • Single-tenant private deployment for engineering organizations.
  • Preview auth/session entry plus API RBAC (viewer/user/operator/admin).
  • Admin APIs for health, metrics, policy, audit, reindex, pause/resume.
  • Model governance: local-first with remote egress allowlist enforcement.
  • Deployment assets included (deploy/systemd, env template, backup/restore scripts).

Current note:

  • Enterprise deployment is available as a private deployment preview in v0.3.0.
  • Auth/session flows are currently suited to controlled internal environments and will continue to harden in later releases.

Details: docs/enterprise.md

Architecture

Workspace crates:

  • memori-vault: watch/debounce/event stream
  • memori-parser: parse/chunk
  • memori-storage: SQLite + vector/graph/task metadata
  • memori-core: orchestration, retrieval, indexing worker
  • memori-desktop: Tauri commands and desktop lifecycle
  • memori-server: Axum APIs
  • ui: React + Vite + Tailwind v4 frontend
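The watch/debounce step in memori-vault can be illustrated with a minimal debouncer that collapses rapid change events for the same path into a single indexing trigger. The names below are hypothetical, not the crate's actual API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch of the debounce idea behind memori-vault's watcher: bursts of
// save events for one path collapse into a single downstream event.
// Illustrative only; the real crate wires this into an event stream.
struct Debouncer {
    window: Duration,
    last_seen: HashMap<String, Instant>,
}

impl Debouncer {
    fn new(window: Duration) -> Self {
        Self { window, last_seen: HashMap::new() }
    }

    // Returns true when the event should be forwarded downstream.
    // Events inside the window are swallowed but still reset the timer,
    // so a file only triggers indexing once it has gone quiet.
    fn should_emit(&mut self, path: &str, now: Instant) -> bool {
        let emit = match self.last_seen.get(path) {
            Some(&t) if now.duration_since(t) < self.window => false,
            _ => true,
        };
        self.last_seen.insert(path.to_string(), now);
        emit
    }
}

fn main() {
    let mut d = Debouncer::new(Duration::from_millis(200));
    let t0 = Instant::now();
    assert!(d.should_emit("notes.md", t0)); // first event passes
    assert!(!d.should_emit("notes.md", t0 + Duration::from_millis(50))); // burst swallowed
    assert!(d.should_emit("notes.md", t0 + Duration::from_millis(300))); // quiet period elapsed
    println!("debounce ok");
}
```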

Development Quick Start

cargo fmt --all -- --check
cargo clippy --workspace -- -D warnings
cargo test --workspace
pnpm --dir ui run build

Desktop dev:

pnpm --dir ui run dev -- --host 127.0.0.1 --port 1420 --strictPort
cargo tauri dev -p memori-desktop

Server dev:

cargo run -p memori-server
pnpm --dir ui run dev -- --host 127.0.0.1 --port 1420 --strictPort

Notes

  • Ollama local runtime is recommended for local provider mode.
  • Remote provider mode is optional and user-configured.
  • Enterprise policy can enforce local_only or remote allowlist mode.
  • Legacy theme key memori-theme-mode is migration-only; active key is memori-theme.
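The local_only / remote-allowlist enforcement mentioned above can be sketched as a simple gate over outbound provider hosts. This is a hedged illustration of the policy shape, not the enterprise layer's real types:

```rust
// Sketch of the model-governance gate: local_only blocks every remote
// provider, while allowlist mode admits only explicitly listed hosts.
// Hypothetical names; the real policy lives in the enterprise layer.
enum EgressPolicy {
    LocalOnly,
    Allowlist(Vec<String>),
}

fn remote_allowed(policy: &EgressPolicy, host: &str) -> bool {
    match policy {
        EgressPolicy::LocalOnly => false,
        EgressPolicy::Allowlist(hosts) => hosts.iter().any(|h| h == host),
    }
}

fn main() {
    let policy = EgressPolicy::Allowlist(vec!["api.openai.com".to_string()]);
    assert!(remote_allowed(&policy, "api.openai.com"));
    assert!(!remote_allowed(&policy, "evil.example.com"));
    assert!(!remote_allowed(&EgressPolicy::LocalOnly, "api.openai.com"));
    println!("egress gate ok");
}
```

Fail-closed is the key design choice: anything not explicitly allowed is denied, which is what makes the local-first default enforceable.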

License

Apache License 2.0.
