
Orbit

your ai, your rules.

local AI that scans your hardware, recommends the best models,
and runs them privately on your machine.

shipping with TurboQuant — run larger models on lower-spec hardware.


[screenshot: Orbit]

[screenshot: Orbit — Model Recommendations]


what it does

  • scans your hardware — detects CPU, GPU, VRAM, and tells you exactly which models your machine can handle
  • one-click model management — browse, download, switch between models. no terminal required
  • threaded conversations — branch any message into a side thread without losing context
  • fully private — everything runs locally through Ollama. nothing leaves your device. no accounts, no telemetry
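the hardware-to-model matching above boils down to "which quantized models fit in your VRAM with headroom to spare." a minimal sketch of that idea (the model catalog, the 20% overhead factor, and the helper names are illustrative assumptions, not Orbit's actual llmfit logic):

```typescript
// Illustrative sketch of hardware-aware model matching.
// NOT Orbit's real llmfit implementation — names and numbers are assumptions.

interface ModelSpec {
  name: string;
  paramsB: number;       // parameter count, in billions
  bitsPerWeight: number; // e.g. 4 for Q4 quantization
}

// Rough weight footprint in GB: params * bits / 8, plus ~20% headroom
// for KV cache and activations (an assumed fudge factor).
function estimatedVramGB(m: ModelSpec): number {
  return ((m.paramsB * m.bitsPerWeight) / 8) * 1.2;
}

// Keep only models that fit, largest first.
function recommend(models: ModelSpec[], vramGB: number): ModelSpec[] {
  return models
    .filter((m) => estimatedVramGB(m) <= vramGB)
    .sort((a, b) => b.paramsB - a.paramsB);
}

const catalog: ModelSpec[] = [
  { name: "llama3.1:8b-q4", paramsB: 8, bitsPerWeight: 4 },   // ~4.8 GB
  { name: "llama3.1:70b-q4", paramsB: 70, bitsPerWeight: 4 }, // ~42 GB
  { name: "phi3:3.8b-q4", paramsB: 3.8, bitsPerWeight: 4 },   // ~2.3 GB
];

console.log(recommend(catalog, 16).map((m) => m.name));
// on a 16 GB machine: the 8B and 3.8B fit, the 70B does not
```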

getting started

requires macOS (Apple Silicon recommended), Ollama, and Node.js 18+.

git clone https://github.com/savka777/orbit.git
cd orbit/orbit
npm install
npm run dev:electron

to build a .dmg:

npm run build:electron

roadmap

now

  • hardware detection + model recommendations
  • one-click model download and management
  • streaming chat with local models
  • multi-conversation support
  • threaded conversations

next

  • TurboQuant integration — 6x KV cache compression, same hardware runs bigger models
  • MCP tool support — connect your AI to files, browser, calendar, code execution
  • uncensored model support — Dolphin and other unfiltered models
  • real-time performance dashboard — tok/s, VRAM, temperature

later

  • workspaces — different models, tools, and system prompts per context
  • plugin marketplace for community MCP tools
  • Windows and Linux
  • local LoRA fine-tuning

research

we're integrating cutting-edge inference research directly into Orbit.

TurboQuant (Google Research, 2026) compresses the KV cache from 16-bit to 3-bit with zero quality loss. community ports to Apple Silicon MLX already show 42% memory reduction with perfect coherence. we're building this into Orbit's inference layer.
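the memory savings from dropping KV-cache precision are easy to estimate with generic transformer arithmetic. the sketch below is just that bit-width accounting, not TurboQuant's actual algorithm; the Llama-3-8B-shaped dimensions are an illustrative assumption, and it ignores grouped-query attention, which shrinks the baseline further:

```typescript
// Generic KV-cache size arithmetic (illustrative, NOT TurboQuant's scheme).
// Per token, a transformer stores one key and one value vector per layer:
// 2 * layers * hiddenDim values in total.

function kvCacheGB(
  layers: number,
  hiddenDim: number,
  contextTokens: number,
  bitsPerValue: number
): number {
  const values = 2 * layers * hiddenDim * contextTokens;
  return (values * bitsPerValue) / 8 / 1024 ** 3; // bytes -> GiB
}

// Assumed Llama-3-8B-like shape: 32 layers, 4096 hidden dim, 32k context.
const fp16 = kvCacheGB(32, 4096, 32768, 16); // 16-bit baseline: 16 GiB
const q3 = kvCacheGB(32, 4096, 32768, 3);    // 3-bit: 3 GiB
console.log({ fp16, q3, ratio: fp16 / q3 }); // ratio ≈ 5.3x
```

the 16-to-3-bit drop is what makes "same hardware, bigger models" plausible: the cache that used to consume ~16 GiB at 32k context fits in ~3 GiB.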

stack

  • runtime: Electron
  • frontend: React, TypeScript, Tailwind v4
  • animation: Framer Motion, GSAP, Three.js
  • inference: Ollama
  • hardware: custom llmfit binary
  • build: Vite, electron-builder

contributing

cd orbit/orbit
npm run dev              # vite dev server
npm run dev:electron     # electron + vite
npm run build            # production build
npm run lint             # eslint

areas that need help:

  • TurboQuant integration (PolarQuant + QJL in the inference layer)
  • MCP client in Electron main process
  • cross-platform testing (Windows, Linux)
  • model format compatibility

fork it, branch it, PR it.


license

Orbit is source-available under BSL 1.1. You can view, fork, and contribute — but you can't use it to build competing products.


mission

AI is becoming essential infrastructure. using it shouldn't mean sending your thoughts to someone else's server and paying monthly for the privilege.

Orbit runs on your hardware, under your control, with your choice of model. no filters you didn't ask for. no subscription. no data you can't delete.

your ai, your rules.


built by @savboj
