# Chatter

A terminal-based chat interface for Google's Gemini API and local Ollama models, built in Rust.

## Features
- Interactive Chat Mode: Real-time conversation with Gemini AI
- Agent Mode: Autonomous file operations with tool execution
- Streaming Responses: See responses as they're generated
- Multi-turn Conversations: Maintains conversation history for context
- Multiple Models: Seamlessly switch between Gemini (cloud) and Ollama (local) models
- Tool Calling: Expose local file-operation tools directly to Ollama models
- Session Management: Save and load chat sessions
- Rich Terminal UI: Colored output, progress indicators, and intuitive commands
- Configuration Management: Secure API key storage
- Homebrew Installation: Easy installation via `brew install`
## Documentation

Full project docs live in the `docs/` directory and can be built with mdBook:

```sh
mdbook serve docs
```

This command launches a local preview server at http://localhost:3000.
## Installation

### APT (Debian/Ubuntu)

The GitHub Pages site hosts a signed APT repository for 64-bit Debian-based systems. Add the key and source, then install:

```sh
curl -fsSL https://tomatyss.github.io/chatter/apt/KEY.gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/chatter-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/chatter-archive-keyring.gpg] \
  https://tomatyss.github.io/chatter/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/chatter.list
sudo apt update
sudo apt install chatter
```

Replace the base URL if you're working from a fork. The repository currently publishes amd64 builds produced by the automated release workflow.
### Homebrew (macOS)

```sh
brew tap tomatyss/chatter
brew install chatter
```

### From source

```sh
git clone https://github.com/tomatyss/chatter.git
cd chatter
cargo build --release
sudo cp target/release/chatter /usr/local/bin/
```
## Setup

- Get your Gemini API key from Google AI Studio.

- Configure the API key (required for the Gemini provider):

  ```sh
  chatter config set-api-key
  ```

  Or set it as an environment variable:

  ```sh
  export GEMINI_API_KEY="your-api-key-here"
  ```

- (Optional) For Ollama support, install and run Ollama:

  ```sh
  # macOS example
  brew install ollama
  ollama serve
  ```

  By default Chatter connects to `http://localhost:11434`. You can change the endpoint in the configuration file under the `ollama.endpoint` field.
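For example, pointing Chatter at an Ollama server on another machine might look like the fragment below. This is a sketch: the nested object layout for the dotted `ollama.endpoint` key and the example address are assumptions, so verify against `chatter config show` on your system.

```json
{
  "provider": "ollama",
  "ollama": {
    "endpoint": "http://192.168.1.50:11434"
  }
}
```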
## Usage

### Interactive mode

Start an interactive chat session:

```sh
chatter
```

This opens a real-time chat interface where you can have conversations with Gemini AI.
### Single message mode

Send a single message without entering interactive mode:

```sh
chatter "What is Rust programming language?"

# Use a specific model
chatter --model gemini-2.5-pro "Explain quantum computing"

# Talk to a local Ollama model
chatter --provider ollama --model llama3.1 "Summarize the latest meeting notes"

# Set system instructions
chatter --system "You are a helpful coding assistant" "Help me with Rust"

# Load a previous session
chatter --load-session my-chat.json

# Auto-save the session
chatter --auto-save
```

If you omit `--provider`, Chatter uses the provider stored in your configuration file (default is `gemini`).
### Agent mode

Enable autonomous file operations with agent mode:

```sh
# In interactive chat, enable agent mode and set the working directory (optional)
/agent on
/agent allow-path .
```

The AI can now execute file operations automatically relative to the current directory:

```
You: Please read the file config.json and search for TODO comments in all Rust files

🔧 AGENT: Executing tool: read_file
📄 Reading file content as requested
✅ Successfully read 245 bytes from config.json

🔧 AGENT: Executing tool: search_files
🔍 Searching for files as requested
✅ Found 3 matches in 12 files
```

Agent commands:

- `/agent on` - Enable agent mode
- `/agent off` - Disable agent mode
- `/agent status` - Show agent status
- `/agent history` - Show tool execution history
- `/agent tools` - List available tools
- `/agent config` - Show agent configuration
- `/agent allow-path <path>` - Temporarily permit an additional directory
- `/agent forbid-path <path>` - Block access to a directory
- `/agent help` - Show agent help
Available tools:

- `read_file` - Read file contents
- `write_file` - Create or overwrite files
- `update_file` - Update files with targeted changes
- `search_files` - Search for patterns across files
- `list_directory` - List directory contents
- `file_info` - Get detailed file information
Notes on Ollama support:

- Run any locally installed model exposed by Ollama with `--provider ollama --model <name>`
- When agent mode is enabled, Chatter automatically exposes its file-system tools to the model using Ollama's function-calling API. Tools operate relative to the current working directory by default; add extra directories with `/agent allow-path` as needed.
- Tool results are sent back to the model and also summarized in the terminal so you can follow along
- The Ollama endpoint defaults to `http://localhost:11434`; override it in `config.json` if your server runs elsewhere
### Interactive commands

While in interactive mode, you can use these commands:

- `/help` - Show available commands
- `/clear` - Clear conversation history
- `/save <filename>` - Save current session
- `/load <filename>` - Load a session
- `/model <name>` - Switch models
- `/system <instruction>` - Set system instruction
- `/history` - Show conversation history
- `/info` - Show session information
- `exit` or `quit` - Exit the chat
### Configuration commands

```sh
# Show current configuration
chatter config show

# Set API key interactively
chatter config set-api-key

# Reset configuration
chatter config reset
```

## Supported models

Gemini:

- `gemini-2.5-flash` (default)
- `gemini-2.5-pro`
- `gemini-1.5-flash`
- `gemini-1.5-pro`

Ollama:

- Any model installed via `ollama pull ...` (e.g. `llama3.1`, `qwen2.5-coder`, etc.)
- List available models with `ollama list`
- Select them with `--provider ollama --model <name>`
## Examples

### Interactive session

```
$ chatter
🤖 Chatter - Gemini AI Chat
Model: gemini-2.5-flash | Provider: Gemini | Session: a1b2c3d4
────────────────────────────────────────────────────────────
Type 'exit' to quit, '/help' for commands

You: Hello! Can you help me learn Rust?

Gemini: Hello! I'd be happy to help you learn Rust! Rust is a systems programming
language that focuses on safety, speed, and concurrency. What specific aspect
of Rust would you like to start with?

You: What makes Rust special?

Gemini: Rust has several unique features that make it special:
1. **Memory Safety**: Rust prevents common bugs like null pointer dereferences...
```

### Single message

```
$ chatter "Write a simple 'Hello, World!' program in Rust"

fn main() {
    println!("Hello, World!");
}

This is the simplest Rust program. The `main` function is the entry point...
```

### Ollama session with tools

```
$ chatter --provider ollama --model llama3.1
🤖 Chatter - Ollama AI Chat
Model: llama3.1 | Provider: Ollama | Session: f9e1c3b2
────────────────────────────────────────────────────────────
Type 'exit' to quit, '/help' for commands

You: Could you read Cargo.toml and summarize the dependencies?

🔧 TOOL Executing tool: read_file
✅ Read 426 bytes from Cargo.toml

Ollama: Cargo.toml declares crates such as `reqwest`, `tokio`, `ratatui`, and `rustyline`.
```

### Model selection

```
$ chatter --model gemini-2.5-pro "Explain the differences between ownership, borrowing, and lifetimes in Rust"
```

## Configuration

Chatter stores its configuration in:
- macOS: `~/Library/Application Support/chatter/config.json`
- Linux: `~/.config/chatter/config.json`
- Windows: `%APPDATA%\chatter\config.json`

Key fields:

- `provider`: `"gemini"` (default) or `"ollama"`
- `default_model`: Model name used when `--model` is not provided
- `ollama.endpoint`: Base URL for the Ollama server (defaults to `http://localhost:11434`)
Session files are saved in the `sessions/` subdirectory.
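Putting the key fields together, a complete `config.json` might look like the following sketch. The field names come from the list above, but the nested object layout for `ollama.endpoint` is an assumption; compare with the output of `chatter config show`.

```json
{
  "provider": "gemini",
  "default_model": "gemini-2.5-flash",
  "ollama": {
    "endpoint": "http://localhost:11434"
  }
}
```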
## API format

The Gemini API follows the multi-turn conversation format:

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "Hello"}]
    },
    {
      "role": "model",
      "parts": [{"text": "Great to meet you. What would you like to know?"}]
    },
    {
      "role": "user",
      "parts": [{"text": "I have two dogs in my house. How many paws are in my house?"}]
    }
  ]
}
```

## Development

Build from a local checkout:

```sh
git clone https://github.com/tomatyss/chatter.git
cd chatter
cargo build
```

Run the test suite:

```sh
cargo test
```

## Releasing

The Publish Debian Package workflow builds the `.deb` with `cargo deb`, regenerates the APT metadata, signs the Release files, and commits the result to the `gh-pages` branch under `apt/`. To enable signing you need to add repository secrets:
- `APT_GPG_PRIVATE_KEY` - ASCII-armored private key used for signing.
- `APT_GPG_PASSPHRASE` - Passphrase for the key (leave empty for an unprotected key).
- `APT_GPG_KEY_ID` - Fingerprint or key ID exported as the public key (`KEY.gpg`).

Once the secrets exist, publishing a GitHub Release (tag `v*`) will automatically update the APT repository at https://tomatyss.github.io/chatter/apt.
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Built with Rust
- Uses Google's Gemini API
- Terminal UI powered by `crossterm` and `ratatui`