Brief overview
Re-run all nodes in a branch
Chat Circuit now makes it possible to re-run a branch of your conversation with an LLM using a different prompt. It supports all local LLMs running on @ollama
💾 👉 August 6, 2024
Generate follow-up questions
Implemented this idea in Chat Circuit. Here is a quick demo of the application along with generating follow-up questions using #LLM
👉 August 20, 2024
Zoom in/out
🔍 Added a small feature to zoom in using mouse selection. Handy for looking at deep branches #ChatCircuit
👉 August 22, 2024
Minimap Support
#ChatCircuit Added a mini-map with the help of Sonnet 3.5 in @poe_platform.
Would have taken me days if not weeks to do it without any help. 🙏
~99% of the code was written by Claude
👉 September 25, 2024
Export to JSON Canvas Document
Added an option to export to a #JSON Canvas document that can be imported by any application that supports the format, like @obsdmd / @KinopioClub
👉 September 26, 2024
- Multi-Branch Conversations: Create and manage multiple conversation branches seamlessly.
- Contextual Forking: Fork conversation branches with accurate context retention.
- Save and Load Diagrams
- Undo and Redo
- Zoom and Pan
- Re-run nodes in a branch
It is possible to re-run all the nodes in a branch after changing the prompt in any node in the list.
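A rough sketch of how such a branch re-run can work, assuming a simple parent-linked node model (the class and function names below are illustrative, not Chat Circuit's actual code):

```python
# Illustrative sketch only: a parent-linked conversation node, not Chat Circuit's real classes.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Node:
    prompt: str
    answer: Optional[str] = None
    parent: Optional["Node"] = None
    children: list["Node"] = field(default_factory=list)

    def fork(self, prompt: str) -> "Node":
        """Start a new branch that inherits this node's full context."""
        child = Node(prompt=prompt, parent=self)
        self.children.append(child)
        return child

    def context(self) -> list[dict]:
        """Messages from the root down to (and including) this node's prompt."""
        chain, node = [], self
        while node is not None:
            chain.append(node)
            node = node.parent
        messages: list[dict] = []
        for n in reversed(chain):
            messages.append({"role": "user", "content": n.prompt})
            if n.answer is not None:
                messages.append({"role": "assistant", "content": n.answer})
        return messages


def rerun_branch(leaf: Node, ask: Callable[[list[dict]], str]) -> None:
    """Re-run every node from the root to `leaf` after a prompt was edited."""
    chain, node = [], leaf
    while node is not None:
        chain.append(node)
        node = node.parent
    for n in reversed(chain):          # root first, so later nodes see fresh answers
        n.answer = None                # drop the stale answer
        n.answer = ask(n.context())    # context ends with this node's prompt
```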
Chat Circuit supports multiple LLM providers through a flexible, provider-based architecture:
| Provider | Default Endpoint | Environment Variable | Description |
|---|---|---|---|
| Ollama | `http://localhost:11434` | `OLLAMA_API_BASE` | Popular local LLM server |
| LMStudio | `http://localhost:1234/v1` | `LMSTUDIO_API_BASE` | Desktop app for running LLMs locally |
| KoboldCpp | `http://localhost:5001/v1` | `KOBOLDCPP_API_BASE` | Lightweight local inference server |
| Provider | Endpoint | Environment Variable | Description |
|---|---|---|---|
| OpenRouter | `https://openrouter.ai/api/v1` | `OPENROUTER_API_KEY` | Access to multiple cloud LLMs (requires API key) |
All providers are auto-discovered at startup. The app will work with any combination of available providers.
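As a rough illustration of what such a provider layer can look like, here is a minimal sketch; the class names, attributes, and `discover_models` helper are assumptions for the example, not Chat Circuit's real code:

```python
# Hypothetical sketch of a provider-based model registry; names are illustrative.
import os
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """One subclass per backend: Ollama, LMStudio, KoboldCpp, OpenRouter."""

    name: str          # e.g. "ollama"
    env_var: str       # e.g. "OLLAMA_API_BASE"
    default_base: str  # e.g. "http://localhost:11434"

    @property
    def api_base(self) -> str:
        # An environment variable overrides the built-in default endpoint.
        return os.environ.get(self.env_var, self.default_base)

    @abstractmethod
    def list_models(self) -> list[str]:
        """Return the model identifiers this backend currently serves."""


PROVIDERS: list[ModelProvider] = []   # populated at startup, one instance per backend


def discover_models() -> dict[str, list[str]]:
    """Run once at startup; any combination of reachable providers works."""
    return {p.name: p.list_models() for p in PROVIDERS}
```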
Quick Start:
- Start your preferred local provider (Ollama, LMStudio, or KoboldCpp)
- Launch Chat Circuit - models will be automatically discovered
- Select a model from the dropdown in any conversation node
Configuring Provider Endpoints:
You can configure provider endpoints in two ways:
- Via Configuration Dialog (Recommended):
  - Open Configuration via menu: `Configuration > Models...` or press `Ctrl+,`
  - Enter custom endpoints for Ollama, LMStudio, and KoboldCpp
  - Settings are saved automatically and persist across sessions
- Via Environment Variables (fallback if not set in UI):

  ```bash
  # Example: Run with custom endpoints via environment variables
  OLLAMA_API_BASE="http://192.168.1.100:11434" \
  LMSTUDIO_API_BASE="http://localhost:1234/v1" \
  KOBOLDCPP_API_BASE="http://localhost:5001/v1" \
  python3 main.py
  ```
See `provider-config.example.sh` for more detailed configuration examples.
Configuration Priority: UI Settings > Environment Variables > Defaults
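To make that priority order concrete, resolution can be pictured as a small helper like the following (the function and argument names are hypothetical, not the actual implementation):

```python
import os

# Hypothetical helper illustrating the priority order: UI setting > env var > default.
def resolve_endpoint(ui_settings: dict, env_var: str, default: str) -> str:
    ui_value = ui_settings.get(env_var)        # value saved from the Configuration dialog
    if ui_value:                               # 1. UI setting wins
        return ui_value
    return os.environ.get(env_var, default)    # 2. env var, else 3. built-in default


# e.g. resolve_endpoint(saved_settings, "OLLAMA_API_BASE", "http://localhost:11434")
```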
To run this application, follow these steps:
Install dependencies

```bash
python3 -m pip install -r requirements.txt
```

Run the application

```bash
python3 main.py
```

This application discovers available LLM models dynamically from multiple providers:
- Ollama: Discovers local models from Ollama running at `http://localhost:11434` (configurable via `OLLAMA_API_BASE` env var).
- LMStudio: Discovers local models from LMStudio running at `http://localhost:1234` (configurable via `LMSTUDIO_API_BASE` env var).
- KoboldCpp: Discovers local models from KoboldCpp running at `http://localhost:5001` (configurable via `KOBOLDCPP_API_BASE` env var).
- OpenRouter: Discovers free models from OpenRouter when `OPENROUTER_API_KEY` is set (via Configuration dialog or env var).
All local providers (Ollama, LMStudio, KoboldCpp) use OpenAI-compatible APIs through LiteLLM.
Provider Configuration:
You can customize provider endpoints using environment variables:
```bash
export OLLAMA_API_BASE="http://localhost:11434"       # Default Ollama endpoint
export LMSTUDIO_API_BASE="http://localhost:1234/v1"   # Default LMStudio endpoint
export KOBOLDCPP_API_BASE="http://localhost:5001/v1"  # Default KoboldCpp endpoint
export OPENROUTER_API_KEY="your-api-key"              # OpenRouter API key
```

If a provider fails to respond (e.g., server not running), the app shows a warning but continues with models from other providers.
If no models are discovered from any provider, the app shows an error; please ensure at least one provider is running.
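The discovery-and-fallback behaviour described above can be pictured with a short sketch that asks each OpenAI-compatible endpoint for its model list and skips providers that do not answer; the endpoint table and function below are assumptions for illustration (Ollama's native API differs and is omitted here):

```python
import requests

# Illustrative only: poll each OpenAI-compatible /v1/models endpoint, tolerate failures.
ENDPOINTS = {
    "lmstudio": "http://localhost:1234/v1",
    "koboldcpp": "http://localhost:5001/v1",
}


def discover(endpoints: dict[str, str]) -> dict[str, list[str]]:
    found: dict[str, list[str]] = {}
    for name, base in endpoints.items():
        try:
            resp = requests.get(f"{base}/models", timeout=3)
            resp.raise_for_status()
            found[name] = [m["id"] for m in resp.json().get("data", [])]
        except requests.RequestException as exc:
            # Server not running or unreachable: warn and continue with the rest.
            print(f"warning: {name} discovery failed: {exc}")
    return found


models = discover(ENDPOINTS)
if not models:
    raise SystemExit("No models discovered; start at least one provider.")
```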
Prefer the provided make targets for development and running:
```bash
make install   # set up environment and pre-commit hooks
make check     # run linters/formatters and pre-commit checks
make run       # launch the application
```

