Vague prompts produce vague results. Specifier takes whatever rough idea you have and rewrites it into something detailed enough to actually build from — then lets you keep refining it in conversation until it says exactly what you mean.
It runs entirely locally through Ollama, so nothing leaves your machine.
- Python 3.10+
- Ollama installed and running
- At least one model pulled, e.g. `ollama pull llama3.1:8b`
Install the Python dependency:

```
uv add ollama
```

Then start Specifier:

```
uv run specifier.py
```

On startup, Specifier lists every model currently available in your Ollama installation with its size, family, and quantization. Enter the number next to the model you want, or press Enter to use the default (`llama3.1:8b`).
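The model menu can be sketched roughly like this. This is an illustration, not Specifier's actual code: it assumes model records shaped like the dicts in `ollama.list()["models"]` (a name, a size in bytes, and a `details` mapping), and the helper name and sample record are made up for the example.

```python
def format_model_menu(models):
    """Render a numbered menu from model records.

    Each record is assumed to look like an entry from
    ollama.list()["models"]: a name, a size in bytes, and a
    details mapping with family and quantization_level.
    """
    lines = []
    for i, m in enumerate(models, start=1):
        size_gb = m["size"] / 1e9  # bytes -> decimal gigabytes
        details = m.get("details", {})
        lines.append(
            f"{i}. {m['name']}  ({size_gb:.1f} GB, "
            f"{details.get('family', '?')}, "
            f"{details.get('quantization_level', '?')})"
        )
    return "\n".join(lines)

# Hand-written sample record, not live Ollama output:
sample = [{"name": "llama3.1:8b", "size": 4_700_000_000,
           "details": {"family": "llama", "quantization_level": "Q4_0"}}]
print(format_model_menu(sample))
```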
From there, type your rough prompt. Specifier rewrites it into a specific, actionable version. You can then keep nudging it — "make it focus on mobile", "add rate limiting requirements", "assume a solo developer" — and each update is folded into the prompt naturally. When you're done, type `quit` and the final prompt is printed to the terminal.
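One way to fold each nudge into an ongoing conversation — a sketch of the general pattern, not Specifier's actual implementation — is to keep the full message history and append each refinement as a new user turn. With the real `ollama` package the model call would be `ollama.chat(model=..., messages=...)`; here it is stubbed so the bookkeeping is visible and runnable offline:

```python
def fold_update(messages, update, call_model):
    """Append a user nudge, get a rewrite, and record it as an assistant turn."""
    messages.append({"role": "user", "content": update})
    # In the real tool this would be something like:
    #   ollama.chat(model=model, messages=messages)["message"]["content"]
    rewritten = call_model(messages)
    messages.append({"role": "assistant", "content": rewritten})
    return rewritten

# Stub model: just reports how many turns it has seen.
def stub_model(messages):
    return f"rewrite after {len(messages)} messages"

history = [{"role": "system", "content": "Rewrite prompts to be specific."}]
fold_update(history, "build a todo app", stub_model)
fold_update(history, "make it focus on mobile", stub_model)
print(len(history))  # system message plus two user/assistant pairs
```

Because the whole history rides along on every call, each nudge is interpreted against everything said before it, which is what lets "make it focus on mobile" modify the existing prompt instead of starting over.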
Smaller models (under 3B) tend to bleed scaffolding language into their output — phrases like "Here's a refined version of your prompt" that you didn't ask for. Very large or heavily fine-tuned models swing the other way, producing responses that are technically correct but frustratingly vague. The 7–13B range with moderate instruction tuning hits the balance this task needs: it follows the style constraints and still commits to specific details. `llama3.1:8b` is the default for that reason, though the model selector lets you swap it out and see for yourself.
```
echo 'alias specifier="uv run --project ~/specifier ~/specifier/specifier.py"' >> ~/.bash_alias
source ~/.bash_alias
```

Now you can run `specifier` from anywhere.