your ai, your rules.
local AI that scans your hardware, recommends the best models,
and runs them privately on your machine.
coming soon: TurboQuant — run larger models on lower-spec hardware.
- scans your hardware — detects CPU, GPU, VRAM, and tells you exactly which models your machine can handle
- one-click model management — browse, download, switch between models. no terminal required
- threaded conversations — branch any message into a side thread without losing context
- fully private — everything runs locally through Ollama. nothing leaves your device. no accounts, no telemetry
requires macOS (Apple Silicon recommended), Ollama, and Node.js 18+.
```
git clone https://github.com/savka777/orbit.git
cd orbit/orbit
npm install
npm run dev:electron
```

to build a .dmg:

```
npm run build:electron
```

now
- hardware detection + model recommendations
- one-click model download and management
- streaming chat with local models
- multi-conversation support
- threaded conversations
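the branching model behind threaded conversations can be sketched as a message tree: each message points at its parent, and a side thread replays the ancestor chain as its context (types and names here are illustrative, not Orbit's internal schema):

```typescript
// Sketch of message branching: messages form a tree via parentId, and a side
// thread's context is the root-to-message path of its branch point.
interface Message {
  id: string;
  parentId: string | null; // null for the conversation root
  role: "user" | "assistant";
  content: string;
}

// Walk parent links from a message back to the root, returning the
// root-to-message path: the full context a side thread starts from.
function threadContext(messages: Map<string, Message>, fromId: string): Message[] {
  const path: Message[] = [];
  for (
    let cur = messages.get(fromId);
    cur;
    cur = cur.parentId ? messages.get(cur.parentId) : undefined
  ) {
    path.push(cur);
  }
  return path.reverse();
}
```

branching any message just creates a new child of that message, so sibling threads share the same prefix without copying it.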
next
- TurboQuant integration — 6x KV cache compression, same hardware runs bigger models
- MCP tool support — connect your AI to files, browser, calendar, code execution
- uncensored model support — Dolphin and other unfiltered models
- real-time performance dashboard — tok/s, VRAM, temperature
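for the MCP item above, the wire format is JSON-RPC 2.0; a minimal request builder looks like this (the tool name and server are hypothetical examples — the real client would speak this over stdio to an MCP server from Electron's main process):

```typescript
// Sketch of the JSON-RPC 2.0 framing that MCP uses for tool invocation.
interface McpRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

let nextId = 0;
function mcpToolCall(name: string, args: Record<string, unknown>): McpRequest {
  // MCP's tools/call method takes the tool name plus its arguments object.
  return { jsonrpc: "2.0", id: ++nextId, method: "tools/call", params: { name, arguments: args } };
}

// e.g. asking a (hypothetical) filesystem server to read a file:
const req = mcpToolCall("read_file", { path: "/tmp/notes.md" });
```
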
later
- workspaces — different models, tools, and system prompts per context
- plugin marketplace for community MCP tools
- Windows and Linux
- local LoRA fine-tuning
we're integrating cutting-edge inference research directly into Orbit.
TurboQuant (Google Research, 2026) compresses the KV cache from 16-bit to 3-bit with zero quality loss. community ports to Apple Silicon MLX already show 42% memory reduction with perfect coherence. we're building this into Orbit's inference layer.
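the memory arithmetic can be sketched with plain per-group uniform quantization. to be clear, this is not TurboQuant's actual scheme (PolarQuant/QJL), only an illustration of why 16-bit to 3-bit shrinks the KV cache so much:

```typescript
// Per-group uniform 3-bit quantization: 3 bits gives 8 levels (codes 0..7),
// reconstructed as min + code * step. A real scheme is cleverer; the memory
// math is what this sketch is for.
function quantize3bit(values: number[]): { codes: number[]; min: number; step: number } {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const step = (max - min) / 7 || 1; // avoid division issues on constant input
  const codes = values.map((v) => Math.round((v - min) / step));
  return { codes, min, step };
}

function dequantize(q: { codes: number[]; min: number; step: number }): number[] {
  return q.codes.map((c) => q.min + c * q.step);
}

// Memory per group of 32 fp16 values: 32 * 16 = 512 bits.
// Quantized: 32 * 3 bits of codes + two 16-bit scale params = 128 bits,
// i.e. 4x even with per-group overhead; fancier schemes close the gap to the
// headline 5-6x of a raw 16-bit → 3-bit reduction.
const ratio = (32 * 16) / (32 * 3 + 2 * 16); // → 4
```
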
papers:
| layer | tech |
| --- | --- |
| runtime | Electron |
| frontend | React, TypeScript, Tailwind v4 |
| animation | Framer Motion, GSAP, Three.js |
| inference | Ollama |
| hardware | custom llmfit binary |
| build | Vite, electron-builder |
```
cd orbit/orbit
npm run dev          # vite dev server
npm run dev:electron # electron + vite
npm run build        # production build
npm run lint         # eslint
```

areas that need help:
- TurboQuant integration (PolarQuant + QJL in the inference layer)
- MCP client in Electron main process
- cross-platform testing (Windows, Linux)
- model format compatibility
fork it, branch it, PR it.
Orbit is source-available under BSL 1.1. You can view, fork, and contribute — but you can't use it to build competing products.
AI is becoming essential infrastructure. using it shouldn't mean sending your thoughts to someone else's server and paying monthly for the privilege.
Orbit runs on your hardware, under your control, with your choice of model. no filters you didn't ask for. no subscription. no data you can't delete.
your ai, your rules.
built by @savboj

