Turn copy‑paste into super‑powers.
Analyse, summarise and transform text and images in your clipboard using local LLMs – in one hot‑key – without sending your data anywhere.
TL;DR – Clipboard AI sits in your system tray, watches your clipboard and—when you hit a hot‑key or let it auto‑trigger—pipes the content through local LLMs served by Ollama. The result (summary, explanation, caption, code refactor, whatever) streams back in a sleek floating chat window that remembers context.
- Key Features
- Demo
- How It Works
- Installation
- Usage
- Configuration
- Supported Models
- Security & Privacy
- Troubleshooting
- Roadmap
- Contributing
- License
- Credits
- System‑tray daemon – lives quietly in the Windows & macOS menu bars; built with PyQt6 and QSystemTrayIcon (a minimal sketch follows this list).
- Global hot‑keys – defaults: Ctrl + Shift + U (text), Ctrl + Shift + O (image), Ctrl + Shift + . (notes); all user‑configurable.
- Auto vs Manual – let Clipboard AI auto‑process every copy, or only act when you press a key.
- Multimodal – processes plain text, rich text and images (captured or files) with vision‑capable models.
- Contextual chat – follow‑up questions stay in memory so you can iterate without re‑copying.
- 100 % Local – powered by Ollama models: no cloud calls, so your data never leaves the machine.
- Cross‑platform packaging – single‑file executables via PyInstaller (Windows) and py2app (macOS), plus an Inno Setup installer.
- Extensible – modular worker threads; just swap models or add new prompts to extend functionality.
- Private config – JSON settings stored in the platform‑correct user config dir (thanks to appdirs).
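Curious what the tray layer looks like in code? Below is a minimal, self‑contained PyQt6 sketch of a tray icon with a pause toggle and a quit action – an illustration of the approach, not Clipboard AI's actual source (the icon path and action labels are placeholders).

```python
# Minimal PyQt6 system-tray sketch (illustrative; not Clipboard AI's actual code).
import sys
from PyQt6.QtGui import QAction, QIcon
from PyQt6.QtWidgets import QApplication, QMenu, QSystemTrayIcon

app = QApplication(sys.argv)
app.setQuitOnLastWindowClosed(False)   # keep the daemon alive with no window open

tray = QSystemTrayIcon(QIcon("icon.png"))  # placeholder icon path
menu = QMenu()

pause_action = QAction("Pause monitoring")
pause_action.setCheckable(True)            # toggles clipboard monitoring on/off
menu.addAction(pause_action)

quit_action = QAction("Quit")
quit_action.triggered.connect(app.quit)
menu.addAction(quit_action)

tray.setContextMenu(menu)
tray.show()
sys.exit(app.exec())
```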
```mermaid
flowchart LR
    subgraph UI
        Tray(System Tray)
        Dialog(Floating Chat UI)
        Settings(Settings Dialog)
    end
    Clipboard(Clipboard Monitor) -->|signals| Workers
    Workers -->|threads| Ollama(Ollama API Service)
    Ollama -->|stream| Dialog
    Tray --> Dialog
    Settings --> Tray
```
- ClipboardMonitor watches the clipboard via Qt's QClipboard and emits signals when it detects new text or images (a minimal sketch follows this list).
- Worker threads (TextWorker & ImageWorker) call Ollama's REST endpoints, streaming tokens back.
- FloatingDialog shows a thinking indicator, then streams the response; users can ask follow‑up questions that keep context.
- SystemTray menu lets you pause, switch Auto/Manual mode, open Settings, or quit.
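To make the monitor → worker flow concrete, here is a hedged sketch of a clipboard monitor built on QClipboard's dataChanged signal. Class and signal names are placeholders for illustration, not the project's real API; in Clipboard AI the emitted signals would be consumed by the worker threads described above.

```python
# Sketch of a clipboard monitor that emits Qt signals on new content
# (illustrative; class and signal names are placeholders, not the project's real API).
import sys
from PyQt6.QtCore import QObject, pyqtSignal
from PyQt6.QtWidgets import QApplication

class ClipboardMonitor(QObject):
    text_copied = pyqtSignal(str)
    image_copied = pyqtSignal(object)

    def __init__(self, clipboard):
        super().__init__()
        self._clipboard = clipboard
        clipboard.dataChanged.connect(self._on_change)

    def _on_change(self):
        mime = self._clipboard.mimeData()
        if mime.hasImage():
            self.image_copied.emit(self._clipboard.image())
        elif mime.hasText():
            self.text_copied.emit(mime.text())

app = QApplication(sys.argv)
monitor = ClipboardMonitor(app.clipboard())
monitor.text_copied.connect(lambda text: print("New text copied:", text[:80]))
sys.exit(app.exec())
```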
| Dependency | Minimum | Notes |
|---|---|---|
| Python | 3.8 | Only needed if installing from source |
| Ollama | latest | brew install ollama / Windows MSI / Docker |
| GPU | Optional | CPU works; GPU + 8 GB VRAM recommended for >7B models |
- 🪟 Windows 10+ → ClipboardAI-Setup-x.y.z.exe
- 🍎 macOS 12+ → Clipboard AI-x.y.z.dmg
Download from the Releases page, install, run. The tray icon appears on first launch.
```bash
git clone https://github.com/LikithMeruvu/Clipboard_ai.git
cd Clipboard_ai
python -m venv venv && source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -e .
ollama pull gemma3:latest   # or deepseek-r1:8b, llama3, etc.
clipboard-ai                # launch
```

| Action | Default Shortcut | What Happens |
|---|---|---|
| Process latest copy | Ctrl + Shift + U | Opens dialog; sends clipboard text to AI |
| Add notes to last copy | Ctrl + Shift + . | Opens modal for extra instructions |
| Analyse image in clipboard | Ctrl + Shift + O | Shows preview → enter question → get caption/analysis |
| Toggle Auto/Manual | Tray → Auto‑mode | Auto processes every new copy |
| Pause / Resume monitoring | Tray → Pause | Suspends clipboard hooks |
Hot‑keys are editable in Tray → Settings.
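How might those global shortcuts be wired up under the hood? Purely as an illustration – the pynput library below is an assumption for this sketch, not necessarily the backend Clipboard AI uses – global hot‑keys in Python can look like this:

```python
# Illustration only: global hot-keys via the third-party pynput library.
# (pynput is an assumption for this sketch; Clipboard AI's actual hot-key backend may differ.)
from pynput import keyboard

def process_text():
    print("Would process the latest text copy")

def process_image():
    print("Would analyse the image in the clipboard")

hotkeys = keyboard.GlobalHotKeys({
    "<ctrl>+<shift>+u": process_text,   # default text hot-key
    "<ctrl>+<shift>+o": process_image,  # default image hot-key
})
hotkeys.start()   # listener runs in its own thread
hotkeys.join()
```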
A JSON file is created on first run at:
- Windows: %APPDATA%\clipboard_ai\config.json
- macOS: ~/Library/Application Support/clipboard_ai/config.json
- Linux: ~/.config/clipboard_ai/config.json
Example:

```json
{
  "processing_mode": "manual",
  "hotkey": "ctrl+shift+u",
  "notes_hotkey": "ctrl+shift+.",
  "image_hotkey": "ctrl+shift+o",
  "selected_model": "gemma3:latest",
  "image_model": "gemma3:latest",
  "ollama_host": "http://localhost:11434",
  "notification_duration": 5000
}
```
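Because the settings live in the platform's user config directory (via appdirs), locating and loading them from Python is straightforward. The helper below is a hedged sketch, not Clipboard AI's actual implementation – only a subset of the keys from the example above is used as defaults.

```python
# Sketch: locating and loading config.json with appdirs
# (illustrative helper, not Clipboard AI's actual implementation).
import json
import os
from appdirs import user_config_dir

# appauthor=False / roaming=True roughly matches the paths listed above
CONFIG_DIR = user_config_dir("clipboard_ai", appauthor=False, roaming=True)
CONFIG_PATH = os.path.join(CONFIG_DIR, "config.json")

DEFAULTS = {
    "processing_mode": "manual",
    "hotkey": "ctrl+shift+u",
    "selected_model": "gemma3:latest",
    "ollama_host": "http://localhost:11434",
}

def load_config():
    """Return saved settings merged over the defaults; create the file if missing."""
    config = dict(DEFAULTS)
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH, encoding="utf-8") as f:
            config.update(json.load(f))
    else:
        os.makedirs(CONFIG_DIR, exist_ok=True)
        with open(CONFIG_PATH, "w", encoding="utf-8") as f:
            json.dump(config, f, indent=2)
    return config

print(load_config()["ollama_host"])   # http://localhost:11434 by default
```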
| Purpose | Default | Alternatives |
|---|---|---|
| Text | gemma3:latest | deepseek-r1:8b, llama3, any GPTQ/gguf text model |
| Vision | gemma3:latest | llava, bakllava, clip‑cap, etc. |
Pull with:
```bash
ollama pull gemma3:latest
ollama pull deepseek-r1:8b
```

Switch models in Settings → Model Selection.
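Under the hood, whichever model you select is simply passed to Ollama's local REST API. The snippet below is a hedged sketch of that streaming round trip (host, prompt and function name are placeholders; it is not the project's actual worker code):

```python
# Sketch: streaming tokens from Ollama's local /api/generate endpoint
# (illustrative; not Clipboard AI's actual worker code).
import json
import requests

OLLAMA_HOST = "http://localhost:11434"   # matches the default ollama_host in config.json

def stream_completion(prompt, model="gemma3:latest"):
    """Yield response chunks as Ollama streams them back, one JSON line at a time."""
    with requests.post(
        f"{OLLAMA_HOST}/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            if chunk.get("done"):
                break
            yield chunk.get("response", "")

for token in stream_completion("Summarise: Clipboard AI keeps your data local."):
    print(token, end="", flush=True)
```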
Clipboard AI never transmits clipboard data over the internet. All inference happens via Ollama on localhost.
No analytics, no telemetry, no 3rd‑party servers. You control the models and the data.
| Symptom | Fix |
|---|---|
| “Cannot connect to Ollama” | Run ollama serve; ensure config.json points to the correct port |
| Hot‑key not working | Check for collisions with other apps; edit shortcut in Settings |
| Large image stalls | Images > 4K are down‑scaled automatically; ensure enough RAM/VRAM |
| PyInstaller build fails | Use Python 3.10+, upgrade PyInstaller, add --hidden-import=PyQt6.sip |
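For the “Cannot connect to Ollama” case, a quick way to confirm the daemon is reachable (and see which models are pulled) is a one‑off request to Ollama's model‑listing endpoint. This is a hedged diagnostic sketch, not something Clipboard AI ships – adjust the host to match ollama_host in your config.json.

```python
# Quick diagnostic: is Ollama reachable, and which models are pulled?
# (Hedged sketch; adjust OLLAMA_HOST to match ollama_host in config.json.)
import requests

OLLAMA_HOST = "http://localhost:11434"

try:
    resp = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is up. Pulled models:", ", ".join(models) or "(none)")
except requests.RequestException as exc:
    print("Cannot reach Ollama – is `ollama serve` running?", exc)
```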
- Linux (Wayland/X11) build & AppImage
- Clipboard history sidebar
- Custom prompt templates
- Plugin SDK for speciality tasks (OCR, code lint, translation)
See the open issues for the full backlog.
Pull requests are welcome!
- Open an issue to discuss major changes.
- Fork → git checkout -b feat/your-feature.
- Run pre-commit run --all-files to pass formatting & lint checks.
- Add or adjust unit tests.
- Submit a PR – CI will run on GitHub Actions.
By contributing you agree to follow our Contributor Covenant Code of Conduct (see CODE_OF_CONDUCT.md).
Clipboard AI is released under the MIT License – see LICENSE for full text.
| Role | Name / Project |
|---|---|
| Inspiration | Tom Preston‑Werner’s “README first” philosophy |
| LLM Runtime | Ollama |
| GUI Toolkit | Qt 6 via PyQt6 |
| Badges | Shields.io |
| Icons | HeroIcons |
| Maintainer | @LikithMeruvu |
Clipboard AI – because copy‑paste deserves super‑powers.
{ "processing_mode": "manual", "hotkey": "ctrl+shift+u", "notes_hotkey": "ctrl+shift+.", "image_hotkey": "ctrl+shift+o", "selected_model": "gemma3:latest", "image_model": "gemma3:latest", "ollama_host": "http://localhost:11434", "notification_duration": 5000 }