Confidant is an open-source, privacy-first AI assistant: a supportive companion that provides mental health support entirely offline. All processing happens locally, with no network access for queries.
Note: This project has evolved from an original Python-based Raspberry Pi implementation, through a web app, to the current desktop app (Tauri 2.0). See archive/README.md for information about previous implementations.
- Fully Offline: All AI processing happens client-side, no backend required
- Privacy-First: All processing is local; no data is sent to external servers
- Mental Health Companion: Supports gratitude, mindfulness, mood, stress, anxiety, and depression—with RAG over your knowledge base. Not a substitute for therapy or professional care.
- Local LLM: Runs optimized models locally (Llama-3.2-3B default, Mistral-7B option)
- Streaming responses: Chat replies stream token-by-token for a responsive experience
- RAG System: Retrieval Augmented Generation with ChromaDB and sentence-transformers
- Open Source: MIT license; fully auditable codebase
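To illustrate the retrieve-then-augment idea behind the RAG feature, here is a minimal sketch in plain Python. It substitutes a toy bag-of-words embedding and cosine similarity for the app's actual stack (ChromaDB and sentence-transformers); the `knowledge_base` entries and all function names are hypothetical, not taken from the codebase.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real app uses sentence-transformers.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

knowledge_base = [
    "Gratitude journaling: write down three things you are thankful for each day.",
    "Box breathing: inhale, hold, exhale, and hold again for four counts each.",
]

# Retrieved context is prepended to the user's message before it reaches the LLM.
context = retrieve("breathing exercise for stress", knowledge_base)[0]
prompt = f"Context: {context}\n\nUser: How can I calm down?"
```

In the real pipeline, ChromaDB handles storage and nearest-neighbor search over dense embeddings, so retrieval stays fast even for large knowledge bases.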
Confidant supports two optimized LLM models for mental health conversations:
- Llama-3.2-3B-Instruct (~2.5GB)
  - Why: Smaller size for resource-constrained systems while maintaining good quality
  - Best for: Most mental health conversations; fast and efficient
  - Performance: Good quality with lower resource requirements
- Mistral-7B-Instruct v0.2 (~4.4GB)
  - Why: Strong reasoning and contextual understanding
  - Best for: Users who want deeper, more nuanced conversations
  - Performance: Strong reasoning capabilities, optimized for complex discussions
Both models use GGUF quantization and run completely offline via llama.cpp. Llama-3.2-3B is the default and suits most users; Mistral-7B is recommended for more complex discussions.
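A simple way to frame the choice between the two models is available memory, since the GGUF files must fit comfortably in RAM alongside the app. The sketch below is a hypothetical heuristic, not logic from the codebase; the filenames and the 8 GB threshold are illustrative assumptions.

```python
def pick_model(free_ram_gb: float) -> str:
    """Suggest a model based on free RAM.

    Hypothetical heuristic: the ~4.4GB 7B model needs headroom for
    KV cache and the rest of the app, so require roughly 8GB free;
    otherwise fall back to the ~2.5GB 3B default. Filenames are
    illustrative, not the app's actual model names.
    """
    if free_ram_gb >= 8:
        return "mistral-7b-instruct-v0.2.Q4_K_M.gguf"
    return "llama-3.2-3b-instruct.Q4_K_M.gguf"
```

On a machine with 16 GB free this would suggest the Mistral-7B file, while a 4 GB machine would get the Llama-3.2-3B default.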
This repository contains:
- desktop/ – Desktop app (Tauri 2.0 + React + TypeScript + Rust), the primary application
- landing/ – Marketing and download landing page (Next.js, shadcn/ui, Tailark). Built as a static site and deployed to docs/ for GitHub Pages. The deploy workflow (.github/workflows/deploy-landing.yml) builds landing/ and copies the output into docs/ on pushes that touch landing/.
- scripts/ – Python scripts for building knowledge bases and downloading models
- docs/ – Design and migration documentation; also the GitHub Pages root (serves the built landing site)
- archive/ – Previous implementations (Python, web) and planning docs
The desktop app uses a Rust backend for Tauri and file operations, and calls Python (llama-cpp-python, ChromaDB, sentence-transformers) via subprocess for LLM inference, embeddings, and vector search. Chat responses stream token-by-token from the LLM to the UI.
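The streaming pattern described above — spawn a worker process, read its stdout incrementally, and forward each token to the UI as it arrives — can be sketched in Python. This is an illustration of the technique only, not the app's Rust bridge; the inline worker is a stand-in for the real llama-cpp-python subprocess.

```python
import subprocess
import sys

# Stand-in worker that emits one "token" per line and flushes after each,
# the way a real inference subprocess would stream its output.
worker_src = (
    "import sys\n"
    "for tok in ['Hello', ' ', 'there', '.']:\n"
    "    print(tok)\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", worker_src],
    stdout=subprocess.PIPE,
    text=True,
)

tokens = []
for line in proc.stdout:  # lines arrive incrementally, not all at once
    tokens.append(line.rstrip("\n"))
    # In the app, each token would be pushed to the UI here.
proc.wait()

reply = "".join(tokens)
```

The key detail is that the worker flushes after every token; without flushing, the parent would see the whole reply in one buffered chunk and the UI would not feel responsive.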
Prerequisites:
- Rust (install from https://rustup.rs/)
- Node.js 18+ and npm
Development:
cd desktop
npm install
npm run dev

See desktop/SETUP_INSTRUCTIONS.md for detailed setup and desktop/PYTHON_SETUP.md for Python dependencies (LLM, ChromaDB, embeddings).
Pre-built installers for macOS and Windows are available via GitHub Releases. The project includes a landing page (Next.js, in landing/) that is built and served from docs/ via GitHub Pages. Enable Pages with source Branch: main, Folder: /docs to serve the site.
We welcome contributions. Please read CONTRIBUTING.md for development setup and guidelines. Use the issue and pull request templates in .github/ when opening issues or PRs.
MIT License - see LICENSE file for details.