```
       .    .
      .|    |.
      ||    ||
     .+====+.
     | .''. |
     |/ () \|      "Would I be okay getting paged
    (_`.__.'_)      about this at 3am six months
    //|    |\\      from now?"
   ||  |  |  ||
    `--'  '--`
   ~~~~~~~~~~~~~~~~~
```
A CLI-first thinking tool that channels the calm, battle-tested wisdom of a Staff / Principal engineer — helping you review decisions, systems, and tradeoffs before you ship them.
The greybeard has been paged at 3am. They've watched confident decisions become production incidents. They've seen "we'll clean it up later" last five years. They're not here to block you — they're here to make sure you've thought it through.
This is not a linter. It won't yell at your variable names or enforce opinionated formatting.
This is a thinking partner. It models how Staff and Principal engineers reason about systems: failure modes, ownership, long-term cost, and the human impact of decisions. It asks the uncomfortable questions so your reviewer doesn't have to.
- Architecture Decisions — Sanity-check design docs and proposals
- Code Diffs — Review changes through a Staff-engineer lens
- Tradeoff Analysis — Surface operational risks, ownership gaps, maintenance burden
- Mentorship — Learn how experienced engineers think through problems
- Communication Coaching — Phrase feedback for specific audiences
| Mode | Purpose |
|---|---|
| review | Fast, direct Staff-level assessment (default) |
| mentor | Explain reasoning and thought process behind concerns |
| coach | Help phrase constructive feedback for a specific audience |
| self-check | Review your own thinking before sharing with others |
After running an analysis, ask follow-up questions, refine with additional context, and explore alternatives—all in a single conversation.
```bash
git diff main | greybeard analyze --interactive

> What happens if this fails in production?
> refine We're doing a 6-month rollout
> explore What if we used event sourcing instead?
```

See the Interactive Mode Guide for workflows, tips, and examples.
10+ built-in perspectives (staff engineer, on-call, security, platform engineering, startup pragmatist, etc.). Write custom YAML packs for your team's values.
Runs as an MCP server compatible with Claude Desktop, Cursor, Zed, and any MCP-compatible tool. Bring greybeard into your IDE.
Works with OpenAI, Anthropic, Ollama, or LM Studio. Configure once, use anywhere.
```bash
# Using uv (recommended - faster)
uv pip install greybeard

# Or using pip
pip install greybeard
```

With optional extras:

```bash
uv pip install "greybeard[anthropic]"  # Add Claude/Anthropic support
uv pip install "greybeard[all]"        # Everything
```

```bash
greybeard init         # Interactive setup wizard
greybeard config show  # See what's configured
```

This creates `~/.greybeard/config.yaml` with your LLM backend choice.
```bash
# Review a code diff
git diff main | greybeard analyze

# Review with a specific mode and pack
git diff main | greybeard analyze --mode mentor --pack oncall-future-you

# Run a self-check on a design decision
greybeard self-check --context "We're migrating auth mid-sprint"

# Get coaching on how to phrase feedback
greybeard coach --audience leadership --context "I think we're moving too fast"
```

```bash
# Start an interactive REPL after initial analysis
cat design-doc.md | greybeard analyze --interactive

# Then ask follow-up questions, refine with context, explore alternatives
> What's the biggest operational risk?
> refine We have strong on-call practices with Datadog everywhere
> explore What if we kept the monolith for auth?
```

📚 Full Documentation — docs and readthedocs
The simplest way to get feedback:
```bash
# Use default mode (review) and default pack from config
git diff main | greybeard analyze

# Or specify both
git diff main | greybeard analyze --mode mentor --pack staff-core

# Save output to a file
git diff main | greybeard analyze --output review.md
```

Ask follow-up questions and refine your thinking:
```bash
cat my-design.md | greybeard analyze --interactive --pack oncall-future-you
```

```
Running initial analysis...
[Initial analysis output]

Interactive Review Session. Type 'help' for commands or 'quit' to exit.

> What about failure recovery?
[greybeard responds with recovery implications]

> refine We're rolling out gradually over 6 months
[greybeard adjusts analysis based on timeline]

> explore What if we used event sourcing?
[greybeard compares to original approach]

> quit
```

Review your own decision privately before presenting:

```bash
greybeard self-check --context "We're caching heavily with Redis"
# Returns thoughtful review of your assumptions and risks
```

Get help phrasing a concern constructively:
```bash
greybeard coach --audience leadership --interactive \
  --context "I'm worried we're shipping without enough integration testing"

# Initial response frames the concern clearly
# Then ask follow-ups to refine your message
> What if we added a kill switch?
> How do I explain this to non-technical stakeholders?
```

For better analysis, give greybeard your project structure:

```bash
git diff main | greybeard analyze --repo . --context "microservices migration"

# Greybeard has access to README, git history, structure
# Responses are more grounded in your actual setup
```

Create a `.yaml` file for your team's values and review with it:

```bash
cat design-doc.md | greybeard analyze --pack ./my-team-pack.yaml
```

See Custom Packs below and Pack Schema for format.
Content packs define the perspective, tone, and heuristics used during review. They're plain YAML—human-editable, version-controllable, shareable.
| Pack | Perspective | Focus |
|---|---|---|
| `staff-core` | Staff Engineer | Ops, ownership, long-term cost |
| `oncall-future-you` | On-call engineer, 3am | Failure modes, pager noise, recovery |
| `mentor-mode` | Experienced mentor | Teaching, reasoning, growth |
| `solutions-architect` | Solutions Architect | Entity modeling, boundaries, fit-for-purpose |
| `platform-eng` | Platform Engineer | DX, abstractions, tool maturity, scaling |
| `security-reviewer` | AppSec Engineer | Auth, injection, secrets, overprivileged access |
| `startup-pragmatist` | Pragmatic Engineer | Complexity vs stage, reversibility, scope |
| `incident-postmortem` | SRE / On-call | Blameless analysis, root cause, action items |
| `idp-readiness` | Platform Engineering | IDP maturity, automation vs process |
| `data-migrations` | Migration Expert | Lock safety, zero-downtime, rollback, performance |
Each built-in pack includes an example file to test with:
```bash
# Test a pack against its example
cat packs/staff-core/STAFF-CORE-EXAMPLE.md | greybeard analyze --pack staff-core

# Try different modes
cat packs/mentor-mode/MENTOR-MODE-EXAMPLE.md | greybeard analyze --pack mentor-mode --mode mentor

# See all examples
ls packs/*-EXAMPLE.md
```

Create a `.yaml` file with your own perspective:
```yaml
name: my-team-pack
perspective: "Platform engineer at a Series B startup"
tone: "pragmatic, balancing shipping speed with sustainability"
focus_areas:
  - "team capacity vs scope"
  - "infrastructure complexity"
  - "operational readiness"
heuristics:
  - "ask: can we do this in 2 weeks?"
  - "what's the blast radius if this breaks?"
  - "does the team have context?"
communication_style: "clear, direct, assume good intent"
description: "Reviews for our team's operating philosophy"
```

Then use it:

```bash
cat design-doc.md | greybeard analyze --pack ./my-team-pack.yaml
```

Share and install packs from GitHub repos:
```bash
# Install all packs from a public repo
greybeard pack install github:someone/their-packs

# Install a single pack
greybeard pack install github:owner/repo/packs/my-pack.yaml

# List installed packs
greybeard pack list

# Remove a source
greybeard pack remove owner__repo
```

Installed packs are cached in `~/.greybeard/packs/` and work exactly like built-ins.

Create a public GitHub repo with a `packs/` folder containing `.yaml` files. Anyone can install it:

```bash
greybeard pack install github:your-handle/your-pack-repo
```

See Packs Guide for detailed pack creation and best practices.
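Before sharing a pack, it's worth a quick structural sanity check. Here's a minimal sketch of one; the required-field list is inferred from the example pack earlier in this README, not from the published Pack Schema, so treat it as an assumption:

```python
# Hypothetical pre-flight check for a pack. The field names mirror the
# example pack in this README, not an official schema.
REQUIRED = ("name", "perspective", "tone", "focus_areas", "heuristics")

def missing_fields(pack: dict) -> list[str]:
    """Return required fields absent from a parsed pack (e.g. yaml.safe_load output)."""
    return [f for f in REQUIRED if f not in pack]

pack = {
    "name": "my-team-pack",
    "perspective": "Platform engineer at a Series B startup",
    "tone": "pragmatic",
    "focus_areas": ["team capacity vs scope"],
    # "heuristics" intentionally omitted to show a failure
}
```

Running `missing_fields(pack)` on the dict above would flag the omitted `heuristics` key before you publish the repo.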
greybeard works with any LLM backend. Configure once with `greybeard init`:

| Backend | How | What You Need |
|---|---|---|
| `openai` | OpenAI API | `OPENAI_API_KEY` |
| `anthropic` | Anthropic API | `ANTHROPIC_API_KEY` + `greybeard[anthropic]` extra |
| `ollama` | Local (free) | Ollama running locally |
| `lmstudio` | Local (free) | LM Studio server running |
```bash
# Interactive setup
greybeard init

# Or set directly
greybeard config set llm.backend anthropic
greybeard config set llm.model claude-3-5-sonnet
greybeard config show  # Verify
```

Config lives at `~/.greybeard/config.yaml`.
See Backends Guide for detailed setup for each backend.
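For reference, the file `greybeard init` writes is ordinary YAML. A plausible shape, inferred from the `llm.backend` / `llm.model` paths used by `config set` above; the exact key names are an assumption, not a documented schema:

```yaml
# ~/.greybeard/config.yaml (illustrative; keys inferred from `config set` paths)
llm:
  backend: anthropic
  model: claude-3-5-sonnet
```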
Run greybeard as an MCP server in Claude Desktop, Cursor, Zed, or other MCP-compatible tools.
1. Install greybeard:

   ```bash
   uv pip install greybeard
   ```

2. Get the greybeard path:

   ```bash
   which greybeard
   ```

3. Edit the Claude config:

   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/Claude/claude_desktop_config.json`

4. Add:

   ```json
   {
     "mcpServers": {
       "greybeard": {
         "command": "/path/to/greybeard",
         "args": ["mcp"]
       }
     }
   }
   ```

5. Restart Claude Desktop. Now you can:
```
You: I drafted an architecture decision. Can you review it?

Claude: I'll review this with greybeard.
[calls greybeard review tool]
[returns analysis with risks, tradeoffs, questions]
```
Cursor, Zed, and any MCP-compatible tool work the same way. Point them at `greybeard mcp` (or use the full path from `which greybeard`).
See MCP Integration Guide for detailed setup and workflow examples.
Use the greybeard agent framework to build specialized decision-making tools:
```python
from greybeard.common import BaseAgent

class MyAgent(BaseAgent):
    def __init__(self):
        super().__init__(name="my-agent", description="...")

    def run(self, user_input: str) -> dict:
        # Use research, interview, documentation capabilities
        context = self.research.gather_file_context("file.txt")
        response = self.llm.call(...)
        return {"result": response}
```

Available Capabilities:

- `research` — Gather context from files, directories, git history
- `interview` — Multi-turn conversations with users
- `llm` — Unified interface to all LLM backends
- `documentation` — Format output as Markdown, JSON, YAML
See Creating Agents Guide and the template.
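The `run` contract (take a string, return a dict) is easy to prototype before wiring in the real framework. A standalone sketch with the capabilities stubbed out; `StubResearch` and `StubLLM` are stand-ins invented here, not greybeard classes:

```python
# Standalone sketch of the agent pattern. The real BaseAgent wires up
# research/interview/llm/documentation for you; everything here is a stub.

class StubResearch:
    def gather_file_context(self, path: str) -> str:
        # The real capability reads files and git history; we fake it.
        return f"[context from {path}]"

class StubLLM:
    def call(self, prompt: str) -> str:
        # The real capability calls your configured backend; we echo.
        return f"analysis of: {prompt}"

class MyAgent:
    def __init__(self):
        self.name = "my-agent"
        self.research = StubResearch()
        self.llm = StubLLM()

    def run(self, user_input: str) -> dict:
        context = self.research.gather_file_context("design.md")
        response = self.llm.call(f"{user_input}\n{context}")
        return {"result": response}

agent = MyAgent()
out = agent.run("Review this caching plan")
```

Swapping the stubs for the real capabilities is then just a matter of inheriting from `BaseAgent` instead.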
Planned Specialized Agents:
- Architecture Agent (v1.1) — Document architectural decisions (ADRs)
- SLO Agent (v1.2) — Analyze systems and recommend SLOs
- Tech Debt Agent (v1.3) — Scan code and prioritize technical debt
All output is structured Markdown:

```markdown
## Summary
Your decision summary...

## Key Risks
- Risk 1
- Risk 2

## Tradeoffs
...

## Questions to Answer Before Proceeding
...

## Suggested Communication Language
...

_Assumptions made: ..._
```

Save with `--output filename.md`. See Output Guide.
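Because the sections are predictable `##` headings, the output is easy to post-process in scripts or CI. A minimal sketch; the heading names follow the template above, and the sample report is invented for illustration:

```python
import re

def split_sections(markdown: str) -> dict[str, str]:
    """Split greybeard-style Markdown output into {heading: body} pairs."""
    sections: dict[str, str] = {}
    current = None
    for line in markdown.splitlines():
        m = re.match(r"^## (.+)$", line)
        if m:
            current = m.group(1)
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

report = """## Summary
We cache heavily with Redis.

## Key Risks
- Cache stampede on cold start
"""
parts = split_sections(report)
```

From here you could, for example, fail a CI job whenever the Key Risks section is non-empty.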
```bash
git clone https://github.com/btotharye/greybeard.git
cd greybeard

# Using Makefile (easiest)
make install-dev
make test
make help  # see all commands

# Or using uv directly
uv pip install -e ".[dev]"
uv run pytest
```

Content Packs (easiest, high value)
- Create a perspective your team or community needs
- See Packs Guide
Custom Agents
- Build specialized tools on top of the framework
- See Creating Agents Guide
Bug Reports & Features
Code Contributions
- See CONTRIBUTING.md for setup, testing, style
- Follow Code of Conduct
Community Packs
- Build a pack repo and share it
- Open an issue linking to it—we'll feature it
- Multi-backend — OpenAI, Anthropic, Ollama, LM Studio. Choose your tool.
- CLI-first — No web UI. Pipe in, pipe out. Unix philosophy.
- Stateless — No conversation history by default. Add `--context` for prior context, or use `--interactive` for a stateful REPL.
- Pack format — YAML for human editability and version control.
- MCP stdio — Simplest, most compatible tool integration.
- Minimal dependencies — `click`, `pyyaml`, `rich`, `python-dotenv`; optional `openai`/`anthropic`.
- Getting Started — Installation, setup, first steps
- Guides — Interactive mode, packs, agents, backends, MCP, output
- Reference — CLI, config, pack schema
- Contributing — How to contribute
- Full Docs — Hosted documentation
MIT License — Use freely, modify, and distribute.
- 📚 Check the docs
- 💬 GitHub Discussions
- 🐛 Open an issue
> "The greybeard isn't here to block you. They're here to make sure you've thought it through."
