```text
███████╗███████╗██████╗ ██████╗ ██╗ ██╗
██╔════╝██╔════╝██╔══██╗██╔═══██╗╚██╗██╔╝
█████╗ █████╗ ██████╔╝██║ ██║ ╚███╔╝
██╔══╝ ██╔══╝ ██╔══██╗██║ ██║ ██╔██╗
██║ ███████╗██║ ██║╚██████╔╝██╔╝ ██╗
╚═╝ ╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═╝
```
A blazing-fast PostgreSQL client built in Rust. No Electron. No JVM. No bloat.
Ferox runs under 50 MB and starts in under 200 ms — because your database client shouldn't be the bottleneck.
| Main editor | Dashboard |
|---|---|
| ![]() | ![]() |

| EXPLAIN ANALYZE | Join Builder |
|---|---|
| ![]() | ![]() |
Press Ctrl+I (or the AI button in the toolbar) to open the NL bar. Type plain English — Ferox fetches the live schema from your DB and sends it to the AI, so the generated query always uses your real tables and columns.
> "show me the top 10 customers by total order value in the last 30 days"
Ferox sends the full live schema as context — the AI sees every table, column, and type in the connected database. No hallucinated table names.
Supported providers (configure via Settings → AI):
| Provider | Notes |
|---|---|
| Anthropic Claude | `claude-haiku-4-5` by default — fast and cheap |
| Groq | `llama-3.3-70b-versatile` — free tier available |
| Ollama | Fully local, no API key, no data leaves your machine |
| OpenAI | `gpt-4o-mini` by default |
| Custom / OpenRouter | Any OpenAI-compatible endpoint via base URL override |
The generated SQL is placed directly in the active query editor — review it, tweak it, run it.
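As a rough illustration of the schema-as-context step: the rows fetched from the database are rendered into the prompt so the model only ever sees real tables and columns. Ferox's actual prompt format isn't shown in this README; `ColumnInfo` and `build_prompt` below are hypothetical names for a minimal sketch.

```rust
// Sketch of the "live schema as AI context" idea. The real prompt
// format and types in Ferox may differ; `ColumnInfo` and
// `build_prompt` are illustrative names, not Ferox's API.
struct ColumnInfo {
    table: String,
    column: String,
    data_type: String,
}

/// Render schema rows (as fetched from information_schema.columns)
/// into a plain-text context block, then append the user's request.
fn build_prompt(schema: &[ColumnInfo], request: &str) -> String {
    let mut out = String::from("You translate English to PostgreSQL.\nSchema:\n");
    let mut current_table = "";
    for col in schema {
        if col.table != current_table {
            current_table = &col.table;
            out.push_str(&format!("table {}:\n", col.table));
        }
        out.push_str(&format!("  {} {}\n", col.column, col.data_type));
    }
    out.push_str(&format!("Request: {}\nAnswer with SQL only.", request));
    out
}

fn main() {
    let schema = vec![
        ColumnInfo { table: "users".into(), column: "id".into(), data_type: "integer".into() },
        ColumnInfo { table: "users".into(), column: "name".into(), data_type: "text".into() },
    ];
    let prompt = build_prompt(&schema, "count the users");
    assert!(prompt.contains("table users:"));
    assert!(prompt.contains("  name text"));
    println!("{prompt}");
}
```

Grounding the model in the live schema is what prevents hallucinated table names: the prompt contains the only identifiers it is allowed to use.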
- Multi-tab query editor — `Ctrl+T` new tab, `Ctrl+W` close, right-click for Close / Close Others / Close All
- Per-table tabs — clicking a table opens it in its own tab; existing tabs are reused
- Schema browser — lazy-loaded tree: schemas → tables / views / mat-views / foreign tables, live filter
- Data browser — double-click any table or view to browse with server-side pagination & ORDER BY
- Inline editing — double-click a cell to edit, Enter to commit, Escape to cancel
- Persistent query history — last 500 queries, searchable, click to reload
- Multi-statement execution — `;`-separated statements run in sequence; each SELECT result opens in its own tab
- View DDL — right-click any view or materialized view → Show DDL
- EXPLAIN visualizer — tree view of query plans with cost, rows, and timing per node; optimization suggestions
- Safe mode transactions — DML wrapped in explicit BEGIN/COMMIT/ROLLBACK
- Export — CSV & JSON via native OS file dialog (no temp files)
- Script generation — right-click table → Generate SELECT / INSERT / UPDATE / DELETE scripts
- Join Builder — visual multi-table JOIN composer (Query → Join Builder…)
- Column statistics — right-click any column header → null %, distinct count, min/max length, top values
- SQL syntax highlighting — zero-dependency tokenizer, dark (`base16-ocean.dark`) and light (`InspiredGitHub`) themes
- SQL autocomplete — table names, column names, keywords
- Connection profiles — saved to `~/.config/ferox/config.toml`; SSL modes + SSH tunnel supported
- Multiple simultaneous connections — per-connection sidebar, tabs, and DB threads
- ER diagram — visual schema relationship viewer with FK arrows, pan/zoom, draggable nodes
- Database dashboard — table sizes, index stats, active connections with kill support
- `F5` / `Ctrl+Enter` to run, `Ctrl+C` to cancel mid-query
- EN / TR localisation — full bilingual UI; language choice persists to config
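The multi-statement execution feature above needs to split the editor buffer on `;` without breaking semicolons inside SQL string literals. A naive std-only sketch of that splitting step (`split_statements` is a hypothetical name; Ferox's real splitter may also handle dollar-quoting and comments):

```rust
// Naive sketch of ;-based statement splitting that ignores semicolons
// inside single-quoted SQL strings. Illustrative only — not Ferox's
// actual implementation.
fn split_statements(sql: &str) -> Vec<String> {
    let mut stmts = Vec::new();
    let mut current = String::new();
    let mut in_string = false;
    for ch in sql.chars() {
        match ch {
            // A single quote toggles string mode (doubled quotes '' toggle
            // twice, which nets out correctly for splitting purposes).
            '\'' => { in_string = !in_string; current.push(ch); }
            // A semicolon outside a string ends the current statement.
            ';' if !in_string => {
                let trimmed = current.trim().to_string();
                if !trimmed.is_empty() { stmts.push(trimmed); }
                current.clear();
            }
            _ => current.push(ch),
        }
    }
    let trimmed = current.trim().to_string();
    if !trimmed.is_empty() { stmts.push(trimmed); }
    stmts
}

fn main() {
    let parts = split_statements("SELECT 'a;b'; SELECT 2;");
    assert_eq!(parts, vec!["SELECT 'a;b'", "SELECT 2"]);
    println!("{parts:?}");
}
```

Each returned statement can then be executed in sequence, with every SELECT result landing in its own tab.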
| Metric | Ferox |
|---|---|
| RAM at idle | ~45 MB |
| Cold startup | < 200 ms |
| Binary size | ~7 MB |
Measured on Windows 10, release build with LTO.
Download the latest release for your platform from the Releases page.
| Platform | File |
|---|---|
| Windows 10+ | ferox-windows-x86_64.exe |
| macOS 12+ (Intel + Apple Silicon) | ferox-macos-universal |
| Linux x86_64 | ferox-linux-x86_64 |
macOS may block the binary with an "unidentified developer" warning because Ferox is not notarized. To allow it:

```sh
xattr -rd com.apple.quarantine /path/to/ferox-macos-universal
```

Then double-click (or `chmod +x` and run from terminal).
```sh
# Prerequisites: Rust 1.75+ (https://rustup.rs)
git clone https://github.com/frkdrgt/ferox.git
cd ferox
cargo build --release
```

The binary lands at `target/release/ferox` (or `ferox.exe` on Windows).
- Launch Ferox
- Connection → New Connection… — enter host, port, user, password, database
- Toggle SSL if needed (`prefer` works for most setups)
- Hit Connect — schema tree loads on the left
Type SQL in the editor, press F5 or Ctrl+Enter.
```sql
SELECT u.name, COUNT(o.id) AS orders
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.name
ORDER BY orders DESC;
```

Or use the Join Builder (Query → Join Builder…) to construct joins visually.
| Shortcut | Action |
|---|---|
| `F5` / `Ctrl+Enter` | Run query |
| `Ctrl+C` | Cancel running query |
| `Ctrl+T` | New tab |
| `Ctrl+W` | Close tab |
| `Ctrl+Tab` | Next tab |
| `Ctrl+Shift+Tab` | Previous tab |
| `F5` (sidebar focused) | Refresh schema tree |
Profiles are stored automatically:
| Platform | Path |
|---|---|
| Windows | `%APPDATA%\ferox\config.toml` |
| macOS / Linux | `~/.config/ferox/config.toml` |
```toml
[[connections]]
name = "prod-readonly"
host = "db.example.com"
port = 5432
user = "analyst"
password = "" # leave empty to prompt
database = "warehouse"
ssl = "require"

[language] # "en" or "tr"
language = "en"
```

Query history lives at `~/.local/share/ferox/history.txt` (max 500 entries).
Ferox uses three dedicated threads with zero shared mutable state between them:
```text
┌──────────────────────────────────────────────────────────┐
│                   UI Thread (eframe)                     │
│              egui immediate-mode rendering               │
│        sidebar · tabs · join builder · NL bar            │
└────────┬───────────┬──────────────┬───────────┬──────────┘
         │ DbCommand │ DbEvent      │ AiCommand │ AiEvent
         ▼           ▼              ▼           ▼
┌────────────────────┐ ┌─────────────────────────────────┐
│     DB Thread      │ │        AI Thread (tokio)        │
│      (tokio)       │ │  reqwest · Anthropic / OpenAI   │
│   tokio-postgres   │ │    Groq · Ollama · custom       │
│     native-tls     │ │   Schema context fetched live   │
│    async queries   │ │   from DB before every request  │
└────────────────────┘ └─────────────────────────────────┘
```
All communication goes through mpsc channels — the UI thread never blocks.
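The command/event channel pattern above can be sketched with std threads and channels (Ferox runs tokio inside the worker; the variant names here are illustrative, not Ferox's exact types):

```rust
// Minimal sketch of the DbCommand/DbEvent pattern: the UI thread owns
// only channel endpoints; the worker owns the connection. No shared
// mutable state, so the UI never blocks on DB work.
use std::sync::mpsc;
use std::thread;

enum DbCommand {
    RunQuery(String),
    Shutdown,
}

enum DbEvent {
    Rows(Vec<String>),
}

fn main() {
    let (cmd_tx, cmd_rx) = mpsc::channel::<DbCommand>();
    let (evt_tx, evt_rx) = mpsc::channel::<DbEvent>();

    // "DB thread": receives commands, sends results back as events.
    let worker = thread::spawn(move || {
        for cmd in cmd_rx {
            match cmd {
                DbCommand::RunQuery(sql) => {
                    // A real worker would execute via tokio-postgres here.
                    evt_tx.send(DbEvent::Rows(vec![format!("result of {sql}")])).unwrap();
                }
                DbCommand::Shutdown => break,
            }
        }
    });

    // "UI thread": fire a command, pick up the event when it arrives.
    cmd_tx.send(DbCommand::RunQuery("SELECT 1".into())).unwrap();
    let DbEvent::Rows(rows) = evt_rx.recv().unwrap();
    assert_eq!(rows, vec!["result of SELECT 1"]);

    cmd_tx.send(DbCommand::Shutdown).unwrap();
    worker.join().unwrap();
    println!("{rows:?}");
}
```

In the real app the UI side polls `try_recv()` each frame instead of blocking on `recv()`, which is what keeps rendering smooth during long queries.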
| Role | Crate |
|---|---|
| GUI framework | egui + eframe |
| Table widget | egui_extras |
| PostgreSQL driver | tokio-postgres |
| Async runtime | tokio (current-thread per worker thread) |
| TLS | native-tls + postgres-native-tls |
| SSH tunnel | russh |
| AI HTTP client | reqwest (native-tls, JSON) |
| SQL highlighting | custom zero-dependency tokenizer (src/ui/syntax.rs) |
| Config | serde + toml |
| File dialogs | rfd |
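The zero-dependency highlighter row above hides a simple core idea: tokenize, then classify each token against a keyword list. A toy std-only sketch (Ferox's real tokenizer in `src/ui/syntax.rs` is richer; the token kinds and keyword list here are illustrative):

```rust
// Toy keyword classification for a zero-dependency SQL highlighter.
// A real tokenizer also handles strings, numbers, comments, and
// operators; this sketch only shows the classification step.
#[derive(Debug, PartialEq)]
enum Kind { Keyword, Ident }

fn classify(word: &str) -> Kind {
    // Tiny illustrative keyword list; the real one is much longer.
    const KEYWORDS: [&str; 6] = ["select", "from", "where", "join", "group", "order"];
    if KEYWORDS.contains(&word.to_ascii_lowercase().as_str()) {
        Kind::Keyword
    } else {
        Kind::Ident
    }
}

fn main() {
    let tokens: Vec<(String, Kind)> = "SELECT name FROM users"
        .split_whitespace()
        .map(|w| (w.to_string(), classify(w)))
        .collect();
    assert_eq!(tokens[0].1, Kind::Keyword); // SELECT
    assert_eq!(tokens[1].1, Kind::Ident);   // name
    println!("{tokens:?}");
}
```

Each `(text, kind)` pair then maps to a theme color (`base16-ocean.dark` or `InspiredGitHub`) at render time.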
- Auto-complete — table names, column names, SQL keywords
- Database dashboard — table sizes, index bloat, active connections
- Multiple simultaneous connections — separate DB threads per connection
- SSH tunnel — connect through a jump host
- ER diagram — visual schema relationships
- Multi-statement queries — run multiple statements separated by `;`
- View DDL — right-click any view or materialized view to see its definition
- Safe mode transactions — explicit BEGIN/COMMIT/ROLLBACK for DML
- Join Builder — visual multi-table JOIN composer
- EN/TR localisation — full bilingual UI, language persists to config
- Settings & About — Settings menu with language switcher and About dialog
- Test Connection — verify credentials before connecting, from the connection dialog
- Close connection — disconnect and remove a connection from the sidebar with one click
- Connection status indicators — color-coded dots replacing broken emoji squares on Windows
- AI: Natural Language → SQL — multi-provider (Claude, Groq, Ollama, OpenAI, custom), live schema context
- Ctrl+A select-all in query editor
- Bookmarked queries — save & name frequently used SQL
- Dark / light theme toggle — runtime switch
- Result diff — compare two query results side-by-side
- CSV / JSON import — drag-and-drop data into a table
AI: Natural Language → SQL
- New AI worker thread (separate tokio runtime, non-blocking)
- `Ctrl+I` / AI button opens NL bar in the active query tab
- Full live schema fetched from `information_schema.columns` before every request — AI sees every real table and column, never invents names
- Multi-provider support out of the box:
  - Anthropic Claude (`claude-haiku-4-5` default)
  - Groq (`llama-3.3-70b-versatile`, free tier)
  - Ollama (fully local, no key, no egress)
  - OpenAI (`gpt-4o-mini`)
  - Custom — any OpenAI-compatible endpoint via base URL
- `Settings → AI` panel: provider selector, API key, model override, base URL override
- Generated SQL lands directly in the active editor tab
Performance
- `display_indices` cache — filter/sort indices computed once and cached with a dirty flag; no O(n×cols) scan per frame
- Content-aware initial column widths (`compute_col_widths`) — samples 200 rows once per result set
- Schema F5 refresh no-flash — stale table list stays visible with a ↻ badge until new data arrives; replaced atomically
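The `display_indices` cache mentioned above amounts to a dirty flag over the filtered index list: recompute only when the filter changes, not on every frame. A std-only sketch under that assumption (names are illustrative, not Ferox's exact fields):

```rust
// Dirty-flag index cache: the per-frame call is O(1) unless the
// filter actually changed. Illustrative names only.
struct ResultView {
    rows: Vec<String>,
    filter: String,
    cached: Vec<usize>,
    dirty: bool,
}

impl ResultView {
    fn set_filter(&mut self, f: &str) {
        if f != self.filter {
            self.filter = f.to_string();
            self.dirty = true; // invalidate; recompute lazily
        }
    }

    /// Called once per frame; only scans rows when dirty.
    fn display_indices(&mut self) -> &[usize] {
        if self.dirty {
            self.cached = self
                .rows
                .iter()
                .enumerate()
                .filter(|(_, r)| r.contains(&self.filter))
                .map(|(i, _)| i)
                .collect();
            self.dirty = false;
        }
        &self.cached
    }
}

fn main() {
    let mut view = ResultView {
        rows: vec!["alice".into(), "bob".into(), "carol".into()],
        filter: String::new(),
        cached: Vec::new(),
        dirty: true,
    };
    view.set_filter("o");
    assert_eq!(view.display_indices(), &[1, 2]); // bob, carol
    view.set_filter("o"); // same filter: no recompute scheduled
    assert!(!view.dirty);
    println!("{:?}", view.display_indices());
}
```

In an immediate-mode GUI like egui this matters because the table is redrawn every frame; without the flag, filtering would be an O(rows) scan per frame.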
Schema diff & function browser (v0.2.5.x)
- Schema snapshot diff: compare two points-in-time for any schema
- Function/procedure browser in the sidebar
- Multi-statement execution: `;`-separated statements run in sequence, each result in its own tab
- Column statistics panel: right-click any column header for null %, distinct count, min/max, top values
- Migrated to Rust 2024 edition
- Full EN/TR bilingual UI; language persists to config
- Settings menu with language switcher
- About dialog
- SSH tunnel support (russh)
- ER diagram viewer with FK arrows, pan/zoom, draggable nodes
- Visual Join Builder (`Query → Join Builder…`)
- Safe mode transactions
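The safe-mode transactions item above boils down to wrapping DML in an explicit transaction so a ROLLBACK is always possible. A std-only sketch of that idea (`is_dml` and `wrap_in_transaction` are hypothetical names; Ferox's real logic may detect statement kinds differently):

```rust
// Sketch of "safe mode": DML gets an explicit BEGIN/COMMIT wrapper
// so the user can still ROLLBACK on error. Illustrative only.
fn is_dml(sql: &str) -> bool {
    let first = sql
        .trim()
        .split_whitespace()
        .next()
        .unwrap_or("")
        .to_ascii_uppercase();
    matches!(first.as_str(), "INSERT" | "UPDATE" | "DELETE")
}

fn wrap_in_transaction(sql: &str) -> String {
    if is_dml(sql) {
        format!("BEGIN;\n{sql}\nCOMMIT;")
    } else {
        sql.to_string() // SELECTs and DDL pass through unchanged
    }
}

fn main() {
    let wrapped = wrap_in_transaction("DELETE FROM users WHERE id = 1;");
    assert!(wrapped.starts_with("BEGIN;"));
    assert!(wrapped.ends_with("COMMIT;"));
    assert_eq!(wrap_in_transaction("SELECT 1;"), "SELECT 1;");
    println!("{wrapped}");
}
```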
Bug reports, feature requests, and pull requests are welcome.
```sh
# Run against a local Postgres
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test postgres:16

# Dev build (faster compile, debug symbols)
cargo build

# Integration tests (requires Postgres on localhost:5432)
cargo test --test integration
```

Please keep the UI thread non-blocking and all DB work behind `DbCommand` / `DbEvent`.
See CLAUDE.md for architecture notes.
This project is an experiment in vibe-coding — writing software primarily through conversation with an AI, rather than typing code by hand.
Every line of Rust in this repository was generated by Claude (Anthropic) via Claude Code. Every commit was authored through an AI session. The developer's role was to define what to build, review what came out, and decide what to do next — not to write the code itself.
Because it matters. If you're evaluating this project — as a tool, as a reference, or as a hiring signal — you deserve to know how it was made. Passing this off as hand-crafted Rust would be dishonest.
- Chose the goal: a lightweight, native PostgreSQL client as an alternative to DBeaver/DataGrip
- Picked the stack: `egui`, `tokio-postgres`, `russh` — no Electron, no JVM
- Defined the architecture: two-thread model, `mpsc` channels, no shared mutable state between UI and DB
- Wrote the `CLAUDE.md` spec that guided every session
- Planned each feature phase, reviewed diffs, caught bugs, and made judgment calls
- Did not write the actual Rust
- Wrote all source files from scratch (`src/app.rs`, `src/db/`, `src/ui/`, etc.)
- Made architectural decisions within the constraints given
- Debugged compile errors iteratively
- Kept the codebase consistent across sessions using the `CLAUDE.md` context
Honestly: mostly yes, sometimes no. The architecture is clean and the UI thread never blocks. There are places where a seasoned Rust developer would've made different tradeoffs — but it compiles, it runs, and it does what it's supposed to do. It started from zero and grew to a multi-feature desktop app across a handful of sessions.
This isn't about whether AI-generated code is "real" code. It's about what's now possible when you pair a clear technical vision with a capable AI. Ferox exists because it was cheap enough — in time and effort — to just build the thing.
Whether that's exciting or unsettling probably says something about where you are in your relationship with these tools.
MIT — see LICENSE
Built with Rust because life's too short for slow database clients. Written by Claude because that's just where we are now.