A TypeScript CLI that assembles AI-powered prompts at runtime to guide you from "I have an idea" to working software. Scaffold walks you through 60 structured pipeline steps — organized into 16 phases — plus 10 utility tools, and the supported AI tools handle the research, planning, and implementation for you.
By the end, you'll have a fully planned, standards-documented, implementation-ready project with working code.
Scaffold is a composable meta-prompt pipeline built for Claude Code, Gemini, and other supported AI coding tools. If you have an idea for a software project but don't know where to start — or you want to make sure your project is set up with solid architecture, standards, and tests from day one — Scaffold guides you through every step.
Here's how it works:
- Initialize — run `scaffold init` in your project directory. The init wizard detects whether you're starting fresh (greenfield) or working with an existing codebase (brownfield), and lets you pick a methodology preset (deep, mvp, or custom).
- Run steps — each step is a composable meta-prompt (a short intent declaration in `content/pipeline/`) that gets assembled at runtime into a full 7-section prompt. The assembly engine injects relevant knowledge base entries, project context from prior steps, methodology settings, and depth-appropriate instructions.
- Follow the dependency graph — Scaffold tracks which steps are complete, which are eligible, and which are blocked. Run `scaffold next` to see what's unblocked, or `scaffold status` for the full picture. Each step produces a specific artifact — a planning document, architecture decision, specification, or actual code.
You can run steps two ways:
- CLI: `scaffold run create-prd` — the assembly engine builds a full prompt from the meta-prompt, knowledge base entries, and project context. Best for the structured pipeline with dependency tracking.
- Runner skill: In Claude Code or Gemini, the scaffold-runner skill provides an interactive wrapper that surfaces decision points (depth level, strictness, optional sections) before execution instead of letting the AI pick defaults silently.
Either way, Scaffold constructs the prompt and the target AI tool does the work. The CLI tracks pipeline state and dependencies; the runner skill adds interactive decision surfacing on top.
Meta-prompts — Each pipeline step is defined as a short .md file in content/pipeline/ with YAML frontmatter (dependencies, outputs, knowledge entries) and a markdown body describing the step's intent. These are not the prompts Claude sees — they're assembled into full prompts at runtime.
Assembly engine — At execution time, Scaffold builds a 7-section prompt from: system metadata, the meta-prompt, knowledge base entries, project context (artifacts from prior steps), methodology settings, layered instructions, and depth-specific execution guidance.
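For illustration, here is a minimal TypeScript sketch of what that assembly could look like; the type and function names are assumptions for the sketch, not Scaffold's actual internals:

```typescript
// Illustrative sketch only; names and shapes are assumptions, not Scaffold's API.
interface MetaPrompt {
  id: string;                // step ID, e.g. "create-prd"
  knowledgeBase: string[];   // knowledge entries declared in frontmatter
  body: string;              // the step's short intent declaration
}

function assemblePrompt(
  step: MetaPrompt,
  knowledge: Map<string, string>, // entry ID -> injected guidance text
  priorArtifacts: string[],       // project context from completed steps
  methodology: string,            // deep | mvp | custom
  depth: number,                  // 1-5
): string {
  // Seven sections, in the order described above.
  return [
    `# System metadata\nstep: ${step.id}`,
    `# Meta-prompt\n${step.body}`,
    `# Knowledge\n${step.knowledgeBase.map((k) => knowledge.get(k) ?? "").join("\n")}`,
    `# Project context\n${priorArtifacts.join("\n")}`,
    `# Methodology\n${methodology}`,
    `# Layered instructions\n(global, project, and step-level instructions)`,
    `# Depth guidance\ndepth: ${depth}`,
  ].join("\n\n");
}
```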
Knowledge base — 60 domain expertise entries in content/knowledge/ organized in seven categories (core, product, review, validation, finalization, execution, tools) covering testing strategy, domain modeling, API design, security best practices, eval craft, TDD execution, task claiming, worktree management, release management, and more. These get injected into prompts based on each step's knowledge-base frontmatter field. Knowledge files with a ## Deep Guidance section are optimized for CLI assembly — only the deep guidance content is loaded, avoiding redundancy with the prompt text. Teams can add project-local overrides in .scaffold/knowledge/ that layer on top of the global entries.
Methodology presets — Three built-in presets control which steps run and how deep the analysis goes:
- deep (depth 5) — all steps enabled, exhaustive analysis
- mvp (depth 1) — 7 critical steps, get to code fast
- custom (depth 1-5) — you choose which steps to enable and how deep each one goes
Depth scale (1-5) — Controls how thorough each step's output is, from "focus on the core deliverable" (1) to "explore all angles, tradeoffs, and edge cases" (5). Depth resolves with 4-level precedence: CLI flag > step override > custom default > preset default.
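As a sketch, that precedence chain maps naturally onto nullish coalescing (the option names are illustrative, not Scaffold's real API):

```typescript
// Hedged sketch of the 4-level depth precedence described above.
function resolveDepth(opts: {
  cliFlag?: number;       // 1st: --depth passed on the command line
  stepOverride?: number;  // 2nd: per-step override
  customDefault?: number; // 3rd: default depth of a custom methodology
  presetDefault: number;  // 4th: e.g. 5 for deep, 1 for mvp
}): number {
  return opts.cliFlag ?? opts.stepOverride ?? opts.customDefault ?? opts.presetDefault;
}
```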
Multi-model validation — At depth 4-5, all 19 review and validation steps can dispatch independent reviews to Codex and/or Gemini CLIs. Two independent models catch more blind spots than one. When both CLIs are available, findings are reconciled by confidence level (both agree = high confidence, single model P0 = still actionable). Auth is verified before every dispatch (codex login status, NO_BROWSER=true gemini -p "respond with ok"). See the Multi-Model Review section.
State management — Pipeline progress is tracked in .scaffold/state.json with atomic file writes and crash recovery. An advisory lock prevents concurrent runs. Decisions are logged to an append-only decisions.jsonl.
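One common pattern for atomic JSON writes, assumed here rather than confirmed from Scaffold's source, is write-to-temp-then-rename:

```typescript
// Sketch: write the full state to a temp file, then rename it into place.
// rename() is atomic on POSIX filesystems, so readers never see a torn write.
import { writeFileSync, renameSync } from "node:fs";

function writeStateAtomically(state: unknown): void {
  const target = ".scaffold/state.json";
  const tmp = `${target}.tmp`;
  writeFileSync(tmp, JSON.stringify(state, null, 2));
  renameSync(tmp, target);
}
```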
Dependency graph — Steps declare their prerequisites in frontmatter. Scaffold builds a DAG, runs topological sort (Kahn's algorithm), detects cycles, and computes which steps are eligible at any point.
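For reference, a compact TypeScript sketch of Kahn's algorithm over a step-to-prerequisites map (illustrative, not Scaffold's source):

```typescript
// `deps` maps each step ID to the step IDs it depends on; all steps are keys.
function topoSort(deps: Map<string, string[]>): string[] {
  // indegree = number of prerequisites not yet emitted
  const indegree = new Map([...deps].map(([id, prereqs]) => [id, prereqs.length]));
  const queue = [...indegree].filter(([, d]) => d === 0).map(([id]) => id);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id); // unblocked (eligible, in Scaffold terms)
    for (const [other, prereqs] of deps) {
      if (prereqs.includes(id)) {
        const d = indegree.get(other)! - 1;
        indegree.set(other, d);
        if (d === 0) queue.push(other);
      }
    }
  }
  if (order.length !== deps.size) throw new Error("dependency cycle detected");
  return order;
}
```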
Node.js (v18 or later)
- Install: https://nodejs.org or `brew install node`
- Verify: `node --version`
Git
- Install: https://git-scm.com or `brew install git`
- Verify: `git --version`
Claude Code — The AI coding assistant that runs the assembled prompts. Claude Code is a command-line tool from Anthropic.
- Install: `npm install -g @anthropic-ai/claude-code`
- Verify: `claude --version`
- Docs: https://docs.anthropic.com/en/docs/claude-code
Codex CLI (for multi-model review) — Independent code review from a different AI model. Used at depth 4-5 by all review steps.
- Install: `npm install -g @openai/codex`
- Requires: ChatGPT subscription (Plus/Pro/Team)
- Verify: `codex --version`
Gemini CLI (for multi-model review) — Independent review from Google's model. Can run alongside or instead of Codex.
- Install: `npm install -g @google/gemini-cli`
- Requires: Google account (free tier available)
- Verify: `gemini --version`
mmr (multi-model review CLI) — Automates dispatching, monitoring, and reconciling code reviews across multiple AI model CLIs. Works standalone or with Scaffold.
- Install: `npm install -g @zigrivers/mmr`
- Verify: `mmr --help`
- Setup: `mmr config init` (auto-detects installed CLIs)
- See: mmr — Multi-Model Review CLI
Playwright MCP (web apps only) — Lets Claude control a real browser for visual testing and screenshots.
- Install: `claude mcp add playwright npx @playwright/mcp@latest`
Scaffold has two parts that install separately:
- CLI (`scaffold`) — the core tool. Install via npm or Homebrew. Use it from your terminal or from Claude Code with `! scaffold run <step>`.
- Plugin — optional Claude Code plugin that auto-activates the scaffold runner and pipeline reference skills for interactive guidance.

Pick one:

npm (recommended):

```bash
npm install -g @zigrivers/scaffold
```

Homebrew:

```bash
brew tap zigrivers/scaffold
brew install scaffold
```

Verify: `scaffold version`
Install the Scaffold plugin inside Claude Code for auto-activated skills:
/plugin marketplace add zigrivers/scaffold
/plugin install scaffold@zigrivers-scaffold
This gives you:
- Scaffold Runner skill — intelligent interactive wrapper that surfaces decision points (depth level, strictness, optional sections) before execution instead of letting Claude pick defaults silently
- Pipeline reference skill — shows pipeline ordering, dependencies, and phase structure
- Multi-model dispatch skill — correct invocation patterns for Codex and Gemini CLIs
Usage — just tell Claude Code what you want in natural language:
"Run the next scaffold step" → previews prompt, asks decisions, executes
"Run scaffold create-prd" → same for a specific step
"Where am I in the pipeline?" → shows progress and next eligible steps
"What's left?" → compact view of remaining steps only
"Skip design-system and add-e2e-testing" → batch skip with reason
"Is add-e2e-testing applicable?" → checks platform detection without running
"Use depth 3 for everything" → remembers preference for the session
The plugin is optional — everything it does can also be done with scaffold run <step> from the CLI. But you lose the interactive decision surfacing without the Scaffold Runner skill.
CLI-only users: If you prefer not to install the plugin, skills are installed automatically — `scaffold init` sets them up, and any subsequent CLI command keeps them current after upgrades. No manual `scaffold skill install` needed.
Gemini users: `scaffold build` keeps a root `GEMINI.md` in sync with the shared runner instructions and generates `.gemini/commands/scaffold/*.toml` wrappers. Plain prompts like `scaffold status` work because Gemini loads `GEMINI.md`.
- CLI (npm): `npm update -g @zigrivers/scaffold`
- CLI (Homebrew): `brew upgrade scaffold`
- mmr: `npm update -g @zigrivers/mmr`
- Plugin: `/plugin marketplace update zigrivers-scaffold`
After upgrading the CLI, existing projects still get automatic state migrations. Run `scaffold status` in your project directory — the state manager detects and renames old step keys, removes retired steps, normalizes artifact paths, and persists the changes atomically. No manual editing of `.scaffold/state.json` is needed.
Step migrations handled automatically:
- `add-playwright` / `add-maestro` → `add-e2e-testing`
- `multi-model-review` → `automated-pr-review`
- `user-stories-multi-model-review` → removed (folded into `review-user-stories`)
- `claude-code-permissions` → removed (folded into `git-workflow` + `tech-stack`)
- `multi-model-review-tasks` → removed (folded into `implementation-plan-review`)
- `testing-strategy` → `tdd`, `implementation-tasks` → `implementation-plan`, `review-tasks` → `implementation-plan-review`
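Expressed as data, those renames look roughly like this (an illustrative sketch; the real migration also normalizes artifact paths and persists atomically):

```typescript
// Old step key -> new step key (from the list above).
const STEP_RENAMES: Record<string, string> = {
  "add-playwright": "add-e2e-testing",
  "add-maestro": "add-e2e-testing",
  "multi-model-review": "automated-pr-review",
  "testing-strategy": "tdd",
  "implementation-tasks": "implementation-plan",
  "review-tasks": "implementation-plan-review",
};

// Steps removed outright, folded into surviving steps.
const RETIRED_STEPS = [
  "user-stories-multi-model-review", // folded into review-user-stories
  "claude-code-permissions",         // folded into git-workflow + tech-stack
  "multi-model-review-tasks",        // folded into implementation-plan-review
];
```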
The PRD is always created as `docs/plan.md`. If you have a legacy `docs/prd.md` from an older version, the context gatherer resolves aliased paths so downstream steps find your PRD regardless.
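A minimal sketch of that alias resolution; `resolvePrdPath` is a hypothetical helper, not Scaffold's actual function:

```typescript
import { existsSync } from "node:fs";

// Canonical path first, legacy alias second.
const PRD_ALIASES = ["docs/plan.md", "docs/prd.md"];

function resolvePrdPath(): string | undefined {
  return PRD_ALIASES.find((path) => existsSync(path));
}
```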
Fresh `scaffold init` now creates committed project state under `.scaffold/` and auto-runs `scaffold build`, which writes inspectable adapter artifacts under `.scaffold/generated/`. Scaffold also manages a dedicated block in `.gitignore` so generated output, `.scaffold/lock.json`, and Scaffold temp files stay out of version control by default.
The canonical execution entrypoints are still `scaffold run <step>` and the installed Scaffold plugin. Files under `.scaffold/generated/` are internal build artifacts, not root-level project files.
This release is a clean breaking change for generated adapter output. To migrate an existing project:
- Upgrade Scaffold.
- Remove old root-level generated Scaffold output if present: `prompts/`, `codex-prompts/`, `commands/`, and root `AGENTS.md` only if it was Scaffold-generated. Run `scaffold status` — it warns about any legacy output.
- Run `scaffold build`.
- Review the Scaffold-managed block in `.gitignore`.
- Commit `.gitignore` plus the intended committed `.scaffold/` state files (`config.yml`, `state.json`, `decisions.jsonl`, `instructions/`).
- Do not commit `.scaffold/generated/` or `.scaffold/lock.json`.
The fastest way to use Scaffold is through natural language inside Claude Code. The Scaffold Runner skill handles pipeline navigation, surfaces decision points before Claude picks defaults, and tracks your progress automatically. The examples below show what you'd type in a Claude Code session.
Let's say you want to build a neighborhood tool lending library — an app where neighbors can list tools they own and borrow from each other. Here's how that looks end to end.
Set up the project (one-time, in your terminal):
```bash
mkdir tool-library && cd tool-library
git init
scaffold init
```

The init wizard detects that this is a brand new project and asks you to pick a methodology. Choose mvp if you want to get to working code fast — it runs only 7 critical steps instead of the full 60. You can always switch to deep or custom later.
Open Claude Code in your project directory, then start talking:
"I want to build a neighborhood tool lending library where neighbors can
list tools they own, browse what's available nearby, and request to borrow
items. Run the first scaffold step."
The runner picks up create-vision (the first eligible step), asks you a few strategic questions about your idea — who's the audience, what makes this different from existing apps, what does success look like — and produces docs/vision.md. This becomes the foundation everything else builds on.
"Run the next scaffold step"
Now it runs create-prd. Claude translates your vision into a detailed product requirements document with features, user personas, success criteria, and scope boundaries. The output lands in docs/plan.md.
"Next step"
review-prd — Claude reviews the PRD for gaps, ambiguity, and completeness, then suggests improvements. You decide which suggestions to accept.
"Keep going"
user-stories — Claude breaks the PRD into detailed user stories with acceptance criteria. Each story maps back to a specific requirement so nothing falls through the cracks.
"What's left?"
The runner shows your remaining steps and which ones are unblocked. With the mvp preset, you're almost there — just review-user-stories, tdd, implementation-plan, and implementation-playbook remain.
"Finish the remaining steps"
The runner executes each remaining step in order, pausing to surface decisions that need your input (testing framework preferences, depth level for reviews, etc.) rather than letting Claude guess silently.
Once the pipeline is complete:
"Start building"
Claude picks up the first implementation task and begins writing code using TDD — tests first, then implementation. Your project now has architecture docs, coding standards, a test strategy, and a task graph, all produced from your original idea.
CLI equivalent: Everything above can also be done with `scaffold run create-vision`, `scaffold run create-prd`, `scaffold next`, etc. The runner skill adds interactive decision surfacing on top of these commands.
Say you have a Next.js app with a handful of features built, but no documentation, formal test strategy, or architecture docs. Scaffold can backfill all of that.
In your project root:
```bash
cd ~/projects/my-nextjs-app
scaffold init
```

Scaffold detects that you already have code (package.json, source files, git history) and classifies the project as brownfield. It suggests the deep methodology since existing projects benefit from thorough documentation, but you can choose any preset.
If you already have docs that match Scaffold's expected outputs (a PRD, architecture doc, etc.), bootstrap your state:
```bash
scaffold adopt
```

This scans your project for existing artifacts and marks those pipeline steps as complete so you don't redo work.
Now open Claude Code and skip what doesn't apply:
"Skip create-vision and create-prd — I already know what I'm building"
The runner marks those steps as skipped with your reason logged.
"Run tech-stack"
Claude scans your existing dependencies, framework choices, and configuration, then documents everything in docs/tech-stack.md — formalizing decisions you've already made so future contributors (and AI agents) understand the rationale.
"Run tdd"
Claude sets up a testing strategy tailored to your existing stack — test runner config, coverage targets, TDD workflow conventions. If you already have some tests, it builds around them.
"Run coding-standards"
Claude analyzes your existing code patterns and creates docs/coding-standards.md with linter and formatter configs that match how you're already writing code.
Continue through whatever steps make sense — git-workflow, security, implementation-plan — and skip the rest.
Later, when you want to add a new feature with full Scaffold rigor:
"Run new-enhancement"
Claude walks you through adding a feature the right way — updating the PRD, creating new user stories, setting up tasks with dependencies, and kicking off implementation. All the planning docs stay in sync.
Scaffold persists your pipeline state in .scaffold/state.json, so you can close Claude Code, take a break, and pick up right where you left off.
In Claude Code (natural language):
"Where am I in the pipeline?" → full progress view with phase breakdown
"What's next?" → shows the next unblocked step(s)
"What's left?" → compact view of remaining steps only
From the terminal (CLI):
```bash
scaffold status            # full pipeline progress
scaffold status --compact  # remaining work only
scaffold next              # next eligible step(s)
scaffold dashboard         # open a visual progress dashboard in your browser
```

- You don't need every step. The `mvp` preset runs just 7 steps and gets you building fast. Start there and switch to `deep` or `custom` if you want more rigor.
- "I'm not sure" is a valid answer. When Claude asks a question you can't answer yet, say so — it'll suggest reasonable defaults and explain the trade-offs. You can revisit any decision later.
- You can re-run any step. If your thinking evolves, use `scaffold reset <step>` to reset it, then run it again. Scaffold uses update mode — it improves the existing artifact rather than starting from scratch.
- Every step produces a real document. Vision docs, PRDs, architecture decisions, test strategies — these all land in your project's `docs/` folder as markdown files. They're not throwaway; they're the source of truth your code is built from.
- The pipeline is a guide, not a cage. Skip steps that don't apply (`scaffold skip <step> --reason "..."`). Run them out of order if you know what you're doing. Scaffold tracks dependencies so it'll tell you if you're missing a prerequisite.
- Depth controls thoroughness. Each step runs at a depth from 1 (focused, fast) to 5 (exhaustive). The `mvp` preset defaults to depth 1; `deep` defaults to 5. You can override per step or per session: "Use depth 3 for everything".
Scaffold fully supports game development projects. When you select game as your project type, a project-type overlay activates 24 game-specific pipeline steps and injects game domain expertise into existing steps — all while keeping the standard pipeline workflow (status, next, rework, multi-model review) fully functional.
```bash
# Interactive — the wizard asks about your engine, multiplayer, platforms, etc.
scaffold init

# Non-interactive with defaults (engine: custom, single-player, PC)
scaffold init --project-type game --auto

# Non-interactive with specific methodology
scaffold init --project-type game --methodology deep --auto

# Adopt an existing game project (auto-detects Unity/Unreal/Godot)
scaffold adopt
```

Game support uses a project-type overlay architecture. You choose your methodology normally (mvp, deep, or custom), then `projectType: game` layers on top:
- Enables 24 game steps — GDD, performance budgets, art bible, audio design, etc.
- Disables 3 web-centric steps — `design-system`, `ux-spec`, `review-ux` (replaced by `game-ui-spec`)
- Injects 29 game knowledge entries into existing steps (e.g., game engine evaluation into `tech-stack`, game testing patterns into `tdd`)
- Remaps artifact references so downstream steps read game-specific docs instead of web ones
A game jam project uses mvp + game overlay (fewer steps, lower depth). An AAA project uses deep + game overlay (all steps, max depth).
During scaffold init, the wizard asks game-specific questions with progressive disclosure:
| Category | Questions |
|---|---|
| Core (always asked) | Game engine (Unity/Unreal/Godot/custom), multiplayer mode (none/local/online/hybrid), target platforms (PC/console/mobile/VR/AR) |
| Conditional | Online services (if multiplayer), content structure (levels/open-world/procedural/endless), economy type (none/progression/monetized) |
| Advanced (opt-in) | Narrative depth, supported locales, NPC AI complexity, mod support, persistence model |
These answers control which conditional steps activate. A single-player puzzle game gets a different pipeline than a multiplayer live-service RPG.
Always enabled (12 steps):
| Step | Phase | What It Produces |
|---|---|---|
| `game-design-document` | Pre | Game pillars, core loop, mechanics catalog, progression systems |
| `review-gdd` | Pre | Multi-pass review of GDD for pillar coherence, scope feasibility |
| `performance-budgets` | Foundation | Frame budgets, memory budgets, GPU limits, loading targets per platform |
| `game-accessibility` | Specification | XAG-aligned accessibility plan (visual, motor, cognitive, auditory) |
| `input-controls-spec` | Specification | Input bindings, rebinding, haptics, dead zones, cross-play fairness |
| `game-ui-spec` | Specification | HUD, menus, controller navigation, settings, FTUE/tutorial, UI states |
| `review-game-ui` | Specification | Multi-pass review of game UI for completeness and accessibility |
| `content-structure-design` | Specification | Level layouts, world regions, procedural rulesets, or mission templates |
| `art-bible` | Specification | Art style, asset specs, naming conventions, DCC pipeline, LOD strategy |
| `audio-design` | Specification | Audio direction, adaptive music, spatial audio, middleware config, VO |
| `playtest-plan` | Quality | Playtest types, schedule, feedback templates, balance testing |
| `analytics-telemetry` | Quality | Event taxonomy, crash telemetry, data pipeline, privacy compliance |
Conditional (12 steps — activated by your game configuration):
| Step | Activates When | What It Produces |
|---|---|---|
| `narrative-bible` | Narrative is light/heavy | World lore, characters, dialogue systems, branching narrative |
| `netcode-spec` | Multiplayer is online/hybrid | Network topology, tick rate, prediction, lag compensation, anti-cheat |
| `review-netcode` | Netcode spec enabled | Latency tolerance, bandwidth, cheat resistance review |
| `ai-behavior-design` | NPC AI is simple/complex | Behavior trees, pathfinding, perception, difficulty scaling |
| `economy-design` | Economy is not none | Currencies, loot tables, monetization, legal compliance |
| `review-economy` | Economy design enabled | Inflation analysis, exploit detection, ethical monetization review |
| `online-services-spec` | Online services selected | Identity, leaderboards, matchmaking, moderation, cloud save |
| `modding-ugc-spec` | Mod support enabled | Mod API, sandboxing, distribution, content moderation |
| `save-system-spec` | Persistence is not none | Save format, cloud sync, corruption recovery, migration |
| `localization-plan` | Multiple locales | String management, fonts (CJK/RTL), VO localization, LQA |
| `live-ops-plan` | Live-ops selected | Content cadence, events, hotfix deployment, maintenance |
| `platform-cert-prep` | Console/mobile/VR targets | Sony TRC, Xbox XR, Nintendo Lotcheck, store compliance checklists |
scaffold adopt automatically detects game engines in existing projects:
| Engine | Detection Signal |
|---|---|
| Unity | Assets/*.meta files |
| Unreal | *.uproject file |
| Godot | project.godot file |
When detected, scaffold adopt sets projectType: game and gameConfig.engine in your config, enabling the full game overlay.
You describe your idea and Claude turns it into a strategic vision document covering who it's for, what makes it different, and what success looks like. The review step stress-tests the vision for gaps, and the innovate step explores market positioning opportunities. Without this, later steps lack a clear North Star and features drift.
| Step | What It Does |
|---|---|
| `create-vision` | Claude asks about your idea — who it's for, what problem it solves, what makes it different — and produces a vision document with elevator pitch, target audience, competitive positioning, guiding principles, and success criteria. |
| `review-vision` | Claude stress-tests the vision across five dimensions — clarity, audience precision, competitive rigor, strategic coherence, and whether the PRD can be written from it without ambiguity — and fixes what it finds. |
| `innovate-vision` | Claude explores untapped opportunities — adjacent markets, AI-native capabilities, ecosystem partnerships, and contrarian positioning — and proposes innovations for your approval. (optional) |
Claude translates your vision into a detailed product requirements document (PRD) with features, user personas, constraints, and success criteria. Then it breaks the PRD into user stories — specific things users can do, each with testable acceptance criteria in Given/When/Then format. Review and innovation steps audit for gaps and suggest enhancements. Without this, you're building without a spec.
| Step | What It Does |
|---|---|
| `create-prd` | Claude translates your vision (or idea, if no vision exists) into a product requirements document with problem statement, user personas, prioritized feature list, constraints, non-functional requirements, and measurable success criteria. |
| `innovate-prd` | Claude analyzes the PRD for feature-level gaps — competitive blind spots, UX enhancements, AI-native possibilities — and proposes additions for your approval. (optional) |
| `review-prd` | Claude reviews the PRD across eight passes — problem rigor, persona coverage, feature scoping, success criteria, internal consistency, constraints, non-functional requirements — and fixes blocking issues. |
| `user-stories` | Claude breaks every PRD feature into user stories ("As a [persona], I want [action], so that [outcome]") organized by epic, each with testable acceptance criteria in Given/When/Then format. |
| `innovate-user-stories` | Claude identifies UX enhancement opportunities — progressive disclosure, smart defaults, accessibility improvements — and integrates approved changes into existing stories. (optional) |
| `review-user-stories` | Claude verifies every PRD feature maps to at least one story, checks that acceptance criteria are specific enough to test, validates story independence, and builds a requirements traceability index at higher depths. |
Claude researches and documents your technology choices (language, framework, database) with rationale, creates coding standards tailored to your stack with actual linter configs, defines your testing strategy and test pyramid, and designs a directory layout optimized for parallel AI agent work. Without this, agents guess at conventions and produce inconsistent code.
| Step | What It Does |
|---|---|
| `beads` | Sets up optional Beads task tracking for downstream projects Scaffold generates, with a lessons-learned file for cross-session learning, and creates the initial CLAUDE.md skeleton with core principles and workflow conventions. (This is not Scaffold's own issue-tracking workflow.) |
| `tech-stack` | Claude researches technology options for your project — language, framework, database, hosting, auth — evaluates each against your requirements, and documents every choice with rationale and alternatives considered. |
| `coding-standards` | Claude creates coding standards tailored to your tech stack — naming conventions, error handling patterns, import organization, AI-specific rules — and generates working linter and formatter config files. |
| `tdd` | Claude defines your testing approach — which types of tests to write at each layer, coverage targets, what to mock and what not to, test data patterns — so agents write the right tests from the start. |
| `project-structure` | Claude designs a directory layout optimized for parallel AI agent work (minimizing file conflicts), documents where each type of file belongs, and creates the actual directories in your project. |
Claude sets up your local dev environment with one-command startup and live reload, creates a design system with color palette, typography, and component patterns (web apps only), configures your git branching strategy with CI pipeline and worktree scripts for parallel agents, optionally sets up automated PR review with multi-model validation, and configures AI memory so conventions persist across sessions. Without this, you're manually configuring tooling instead of building.
| Step | What It Does |
|---|---|
| `dev-env-setup` | Claude configures your project so `make dev` (or equivalent) starts everything — dev server with live reload, local database, environment variables — and documents the setup in a getting-started guide. |
| `design-system` | Claude creates a visual language — color palette (WCAG-compliant), typography scale, spacing system, component patterns — and generates working theme config files for your frontend framework. (web apps only) |
| `git-workflow` | Claude sets up your branching strategy, commit message format, PR workflow, CI pipeline with lint and test jobs, and worktree scripts so multiple AI agents can work in parallel without conflicts. |
| `automated-pr-review` | Claude configures automated code review — using Codex and/or Gemini CLIs for dual-model review when available, or an external bot — with severity definitions and review criteria tailored to your project. (optional) |
| `ai-memory-setup` | Claude extracts conventions from your docs into path-scoped rule files that load automatically, optimizes CLAUDE.md with a pointer pattern, and optionally sets up persistent cross-session memory. |
Claude auto-detects your platform (web or mobile) and configures end-to-end testing — Playwright for web apps, Maestro for mobile/Expo. Skips automatically for backend-only projects. Without this, your test pyramid has no top level.
| Step | What It Does |
|---|---|
| `add-e2e-testing` | Claude detects whether your project is web or mobile, then configures Playwright (web) or Maestro (mobile) with a working smoke test, baseline screenshots, and guidance on when to use E2E vs. unit tests. (optional) |
Claude analyzes your user stories to identify all the core concepts in your project — the entities (things like Users, Orders, Tools), their relationships, the rules that must always be true, and the events that happen when state changes. This becomes the shared language between all your docs and code. Without this, different docs use different names for the same concept and agents create duplicate logic.
| Step | What It Does |
|---|---|
| `domain-modeling` | Claude analyzes your user stories to identify the core concepts in your project (entities, their relationships, the rules that must always hold true), and establishes a shared vocabulary that all docs and code will use. |
| `review-domain-modeling` | Claude verifies every PRD feature maps to a domain entity, checks that business rules are enforceable, and ensures the shared vocabulary is consistent across all project files. |
Claude documents every significant technology and design decision as an Architecture Decision Record (ADR) — what was decided, what alternatives were considered, and why. The review catches contradictions and missing decisions. Without this, future contributors (human or AI) don't know why things are the way they are.
| Step | What It Does |
|---|---|
| `adrs` | Claude documents every significant design decision — what was chosen, what alternatives were considered with pros and cons, and what consequences follow — so future contributors understand why, not just what. |
| `review-adrs` | Claude checks for contradictions between decisions, missing decisions implied by the architecture, and whether every choice has honest trade-off analysis. |
Claude designs the system blueprint — which components exist, how data flows between them, where each piece of code lives, and how the system can be extended. This translates your domain model and decisions into a concrete structure that implementation will follow. Without this, agents make conflicting structural assumptions.
| Step | What It Does |
|---|---|
| `system-architecture` | Claude designs the system blueprint — which components exist, how data flows between them, where each module lives in the directory tree, and where extension points allow custom behavior. |
| `review-architecture` | Claude verifies every domain concept lands in a component, every decision constraint is respected, no components are orphaned from data flows, and the module structure minimizes merge conflicts. |
Claude creates detailed interface specifications for each layer of your system. Database schema translates domain entities into tables with constraints that enforce business rules. API contracts define every endpoint with request/response shapes, error codes, and auth requirements. UX spec maps out user flows, interaction states, accessibility requirements, and responsive behavior. Each is conditional — only generated if your project has that layer. Without these, agents guess at interfaces and implementations don't align.
| Step | What It Does |
|---|---|
| `database-schema` | Claude translates your domain model into database tables with constraints that enforce business rules, indexes optimized for your API's query patterns, and a reversible migration strategy. (if applicable) |
| `review-database` | Claude verifies every domain entity has a table, constraints enforce business rules at the database level, and indexes cover all query patterns from the API contracts. (if applicable) |
| `api-contracts` | Claude specifies every API endpoint — request/response shapes, error codes with human-readable messages, auth requirements, pagination, and example payloads — so frontend and backend can be built in parallel. (if applicable) |
| `review-api` | Claude checks that every domain operation has an endpoint, error responses include domain-specific codes, and auth requirements are specified for every route. (if applicable) |
| `ux-spec` | Claude maps out every user flow with all interaction states (loading, error, empty, populated), defines accessibility requirements (WCAG level, keyboard nav), and specifies responsive behavior at each breakpoint. (if applicable) |
| `review-ux` | Claude verifies every user story has a flow, accessibility requirements are met, all error states are documented, and the design system is used consistently. (if applicable) |
Claude reviews your testing strategy for coverage gaps, generates test skeleton files from your user story acceptance criteria (one test per criterion, ready for TDD), creates automated eval checks that verify code meets your documented standards, designs your deployment pipeline with monitoring and incident response, and conducts a security review covering OWASP Top 10, threat modeling, and input validation rules. Without this, quality is an afterthought bolted on at the end.
| Step | What It Does |
|---|---|
| `review-testing` | Claude audits the testing strategy for coverage gaps by layer, verifies edge cases from domain invariants are tested, and checks that test environment assumptions match actual config. |
| `story-tests` | Claude generates a test skeleton file for each user story — one pending test case per acceptance criterion, tagged with story and criterion IDs — giving agents a TDD starting point for every feature. |
| `create-evals` | Claude generates automated checks that verify your code matches your documented standards — file placement, naming conventions, feature-to-test coverage, API contract alignment, and more — using your project's own test framework. |
| `operations` | Claude designs your deployment pipeline (build, test, deploy, verify, rollback), defines monitoring metrics with alert thresholds, and writes incident response procedures with rollback instructions. |
| `review-operations` | Claude verifies the full deployment lifecycle is documented, monitoring covers latency/errors/saturation, alert thresholds have rationale, and common failure scenarios have runbook entries. |
| `security` | Claude conducts a security review of your entire system — OWASP Top 10 coverage, input validation rules for every user-facing field, data classification, secrets management, CORS policy, rate limiting, and a threat model covering all trust boundaries. |
| `review-security` | Claude verifies OWASP coverage is complete, auth boundaries match API contracts, every secret is accounted for, and the threat model covers all trust boundaries. Highest priority for multi-model review. |
For projects targeting multiple platforms (web + mobile, for example), Claude audits all documentation for platform-specific gaps — features that work on one platform but aren't specified for another, input pattern differences, and platform-specific testing coverage. Skips automatically for single-platform projects.
| Step | What It Does |
|---|---|
| `platform-parity-review` | Claude audits all documentation for platform-specific gaps — features missing on one platform, input pattern differences (touch vs. mouse), and platform-specific testing coverage. (multi-platform only) |
Claude optimizes your CLAUDE.md to stay under 200 lines with critical patterns front-loaded, then audits all workflow documentation for consistency — making sure commit formats, branch naming, PR workflows, and key commands match across every doc. Without this, agents encounter conflicting instructions.
| Step | What It Does |
|---|---|
| `claude-md-optimization` | Claude removes redundancy from CLAUDE.md, fixes terminology inconsistencies, front-loads critical patterns (TDD, commit format, worktrees), and keeps it under 200 lines so agents actually read and follow it. |
| `workflow-audit` | Claude audits every document that mentions workflow (CLAUDE.md, git-workflow, coding-standards, dev-setup) and fixes any inconsistencies in commit format, branch naming, PR steps, or key commands. |
Claude decomposes your user stories and architecture into concrete, implementable tasks — each scoped to ~150 lines of code, limited to 3 files, with clear acceptance criteria and no ambiguous decisions for agents to guess at. The review validates coverage (every feature has tasks), checks the dependency graph for cycles, and runs multi-model validation at higher depths. Without this, agents don't know what to build or in what order.
| Step | What It Does |
|---|---|
| `implementation-plan` | Claude breaks your user stories and architecture into concrete tasks — each scoped to ~150 lines of code and 3 files max, with clear acceptance criteria, no ambiguous decisions, and explicit dependencies. |
| `implementation-plan-review` | Claude verifies every feature has implementation tasks, no task is too large for one session, the dependency graph has no cycles, and every acceptance criterion maps to at least one task. |
Seven cross-cutting audits that catch problems before implementation begins. Without this phase, hidden spec problems surface during implementation as expensive rework.
| Step | What It Does |
|---|---|
| `scope-creep-check` | Claude compares everything that's been specified against the original PRD and flags anything that wasn't in the requirements — features, components, or tasks that crept in without justification. |
| `dependency-graph-validation` | Claude verifies the task dependency graph has no cycles (which would deadlock agents), no orphaned tasks, and no chains deeper than three sequential dependencies. |
| `implementability-dry-run` | Claude simulates picking up each task as an implementing agent and flags anything ambiguous — unclear acceptance criteria, missing input files, undefined error handling — that would force an agent to guess. |
| `decision-completeness` | Claude checks that every technology choice and architectural pattern has a recorded decision with rationale, and that no two decisions contradict each other. |
| `traceability-matrix` | Claude builds a map showing that every PRD requirement traces through to user stories, architecture components, implementation tasks, and test cases — with no gaps in either direction. |
| `cross-phase-consistency` | Claude traces every named concept (entities, fields, API endpoints) across all documents and flags any naming drift, terminology mismatches, or data shape inconsistencies. |
| `critical-path-walkthrough` | Claude walks the most important user journeys end-to-end across every spec layer — PRD to stories to UX to API to database to tasks — and flags any broken handoffs or missing layers. |
Claude applies all findings from the validation phase, freezes documentation (ready for implementation), creates a developer onboarding guide (the "start here" document for anyone joining the project), and writes the implementation playbook — the operational document agents reference during every coding session. Without this, there's no bridge between planning and building.
| Step | What It Does |
|---|---|
| `apply-fixes-and-freeze` | Claude applies all findings from the validation phase, fixes blocking issues, and freezes every document with a version marker — signaling that specs are implementation-ready. |
| `developer-onboarding-guide` | Claude synthesizes all your frozen docs into a single onboarding narrative — project purpose, architecture overview, top coding patterns, key commands, and a quick-start checklist — so anyone joining the project knows exactly where to begin. |
| `implementation-playbook` | Claude writes the playbook agents reference during every coding session — task execution order, which docs to read before each task, the TDD loop to follow, quality gates to pass, and the handoff format between agents. |
Stateless execution steps that can be run repeatedly once Phase 14 is complete. Single-agent and multi-agent modes start the TDD implementation loop (claim a task, write a failing test, make it pass, refactor, commit, repeat). Resume commands restore session context after breaks. Quick-task handles one-off bug fixes outside the main plan. New-enhancement adds a feature with full planning rigor.
| Step | What It Does |
|---|---|
| `single-agent-start` | Claude claims the next task, writes a failing test, implements until it passes, refactors, runs quality gates, commits, and repeats — following the implementation playbook. |
| `single-agent-resume` | Claude recovers context from the previous session — reads lessons learned, checks git state, reconciles merged PRs — and continues the TDD loop from where you left off. |
| `multi-agent-start` | Claude sets up a named agent in an isolated git worktree so multiple agents can implement tasks simultaneously without file conflicts, each following the same TDD loop. |
| `multi-agent-resume` | Claude verifies the worktree, syncs with main, reconciles completed tasks, and resumes the agent's TDD loop from the previous session. |
| `quick-task` | Claude takes a one-off request (bug fix, refactor, performance tweak) and creates a single well-scoped task with acceptance criteria and a test plan — for work outside the main implementation plan. |
| `new-enhancement` | Claude walks you through adding a feature the right way — updating the PRD, creating new user stories, running an innovation pass, and generating implementation tasks that integrate with your existing plan. |
Just like you'd want more than one person reviewing a pull request, multi-model review gets independent perspectives from different AI models. When Claude, Codex, and Gemini independently flag the same issue, you know it's real. When they all approve, you can proceed with confidence.
- Different blind spots — what Claude considers correct, another model may flag as problematic. Each model reasons differently about architecture, security, and edge cases.
- Independent review — each model reviews your work without seeing what the others said, preventing groupthink.
- Confidence through agreement — when two or three models flag the same issue, it's almost certainly real. When they all approve, you can move forward confidently.
- Catches what single-model misses — security gaps, inconsistent naming across docs, missing edge cases, and specification contradictions that one model overlooks.
Multi-model review is optional. It requires installing one or both of these additional CLI tools:
Codex CLI — OpenAI's command-line coding tool. Requires a ChatGPT subscription (Plus/Pro/Team).

```bash
npm install -g @openai/codex
```

Gemini CLI — Google's command-line coding tool. Free tier available with a Google account.

```bash
npm install -g @google/gemini-cli
```

You don't need both — Scaffold works with whichever CLIs are available. Having both gives the strongest review (three independent perspectives). See Prerequisites for auth setup and verification commands.
mmr is a standalone CLI that automates multi-model code review. It solves the problems teams hit when manually orchestrating reviews across Claude, Codex, and Gemini: timeouts, auth failures, inconsistent prompts, fragile output parsing, and manual reconciliation.
The core problem it solves: Without mmr, an AI agent dispatching multi-model reviews has to manually construct CLI commands for each model, handle per-tool auth quirks, improvise timeout handling, parse different output formats, and reconcile findings across channels. In practice, this takes 4-6+ minutes per review and frequently fails. mmr reduces this to three commands.
```
mmr review --pr 47      ──→  Dispatches to all channels in background
                             Returns job ID immediately
                             Agent continues working

mmr status mmr-a1b2c3   ──→  Poll progress (which channels done?)
                             Exit code: 0=done, 1=running, 2=failed

mmr results mmr-a1b2c3  ──→  Reconcile findings across channels
                             Apply severity gate
                             Output unified findings
                             Exit code: 0=passed, 1=gate failed
```
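An agent could drive this loop with a sketch like the following, relying only on the exit codes documented above (the polling interval and error handling are illustrative):

```typescript
import { spawnSync } from "node:child_process";
import { setTimeout as sleep } from "node:timers/promises";

// Poll an mmr job until done; returns true when the severity gate passes.
async function waitForReview(jobId: string, pollMs = 15_000): Promise<boolean> {
  for (;;) {
    const poll = spawnSync("mmr", ["status", jobId]);
    if (poll.status === 0) break;                    // all channels complete
    if (poll.status === 2) throw new Error(`review job ${jobId} failed`);
    await sleep(pollMs);                             // exit code 1: still running
  }
  // `mmr results` exits 0 when the gate passed, 1 when it failed.
  return spawnSync("mmr", ["results", jobId]).status === 0;
}
```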
Key features:
- Async job model — reviews run in background processes. The agent fires `mmr review` and continues working. No blocking for 4-6 minutes.
- Per-channel auth verification — checks authentication before every dispatch. Auth failures are never silent — `mmr` tells you exactly what expired and the command to fix it.
- Immutable core prompt — every channel gets the same severity definitions (P0-P3), output format spec (JSON), and review criteria. No prompt drift between channels.
- Automated reconciliation — when two channels flag the same location, that's consensus (high confidence). When only one channel flags something, it's unique (medium confidence). P0 from any single source is always high confidence.
- Configurable severity gate — project default in `.mmr.yaml`, override per-review with `--fix-threshold`. Default: P2 (fix P0/P1/P2, skip P3).
- Multiple output formats — JSON (default, for machines), text (terminals), markdown (PR comments).
npm (available now):

```bash
npm install -g @zigrivers/mmr
```

Homebrew (available after next scaffold release):

```bash
brew tap zigrivers/scaffold
brew install mmr
```

Verify: `mmr --help`
Step 1: Install the model CLIs you want to use
You need at least one. More models = more diverse review perspectives.
```bash
# Claude Code (you probably already have this)
npm install -g @anthropic-ai/claude-code

# Codex CLI (requires ChatGPT Plus/Pro/Team subscription)
npm install -g @openai/codex

# Gemini CLI (free tier available with Google account)
npm install -g @google/gemini-cli
```

Step 2: Authenticate each CLI
Each CLI needs a one-time interactive authentication:
```bash
# Claude — if not already logged in
claude login

# Codex — opens browser for OAuth
codex login

# Gemini — opens browser for OAuth
gemini -p "hello"
```

Step 3: Initialize mmr in your project
```bash
cd your-project
mmr config init
```

This auto-detects which CLIs are installed and generates `.mmr.yaml` in your project root:
```
Detected CLIs:
  ✓ claude (claude -p)
  ✓ gemini (gemini -p)
  ✗ codex (not found)

Generated .mmr.yaml with 2 enabled channels.
Run `mmr config test` to verify authentication.
```
Step 4: Verify authentication

```bash
mmr config test
```

```
claude  ✓ installed  ✓ authenticated
gemini  ✓ installed  ✓ authenticated
codex   ✗ not installed (skipped)

2/3 channels ready.
```
If any channel shows an auth failure, mmr tells you the exact command to fix it.
Step 5: Commit the config

```bash
git add .mmr.yaml
git commit -m "chore: add mmr multi-model review config"
```

This ensures your team shares the same channel configuration.
Step 6 (optional): Customize review criteria
Edit .mmr.yaml to add project-specific review criteria that get injected into every review prompt:
```yaml
review_criteria:
  - "Verify all database queries use parameterized statements"
  - "Check that error messages do not leak internal state"
  - "Ensure all API endpoints validate authentication"
```

You can also adjust per-channel timeouts, the default severity threshold, and named review templates for different review types (PR reviews, implementation plan reviews, etc.).
After creating a PR:
```bash
mmr review --pr 47 --focus "auth flow, session handling"
# → Job mmr-a1b2c3 started. 2/2 channels dispatched.
```

Continue working, then check back:
```bash
mmr status mmr-a1b2c3
# → claude: completed (47s) | gemini: running (2m12s)

# Later:
mmr status mmr-a1b2c3
# → All channels complete.
```

Collect reconciled results:
```bash
mmr results mmr-a1b2c3
# → JSON output with gate_passed, reconciled_findings, per_channel details

mmr results mmr-a1b2c3 --format text
# → Human-readable terminal output

mmr results mmr-a1b2c3 --format markdown
# → Markdown table for PR comments
```

Review staged changes before committing:
```bash
mmr review --staged --focus "regression risk"
```

Review a diff between branches:

```bash
mmr review --base main --head feature/auth
```

Override severity gate for a critical path:
```bash
mmr review --pr 47 --fix-threshold P1   # Only fix P0 and P1
mmr review --pr 47 --fix-threshold P0   # Only fix critical/security issues
```

| Command | Purpose |
|---|---|
| `mmr review` | Dispatch a review job to all configured channels |
| `mmr status <job-id>` | Check progress of a running job |
| `mmr results <job-id>` | Collect, reconcile, and output findings |
| `mmr config init` | Auto-detect CLIs and generate `.mmr.yaml` |
| `mmr config test` | Verify all channels (installation + auth) |
| `mmr config channels` | List configured channels |
| `mmr jobs list` | Show recent review jobs |
| `mmr jobs prune` | Remove old jobs (default: older than 7 days) |
The config file controls channel definitions, defaults, and project-specific review criteria:
```yaml
version: 1

defaults:
  fix_threshold: P2        # P0/P1/P2 block the gate, P3 is informational
  timeout: 300             # Per-channel timeout in seconds
  format: json             # Default output format
  job_retention_days: 7    # Auto-prune old jobs

# Project-specific criteria appended to every review prompt
review_criteria:
  - "Check for SQL injection in all query builders"
  - "Verify RBAC rules match API contract"

# Channel definitions (auto-generated by mmr config init)
channels:
  claude:
    enabled: true
    command: claude -p
    auth:
      check: "claude -p 'respond with ok' 2>/dev/null"
      timeout: 5
      failure_exit_codes: [1]
      recovery: "Run: claude login"
  gemini:
    enabled: true
    command: gemini -p
    flags:
      - "--approval-mode yolo"
      - "--output-format json"
    env:
      NO_BROWSER: "true"
    auth:
      check: "NO_BROWSER=true gemini -p 'respond with ok' -o json 2>&1"
      timeout: 5
      failure_exit_codes: [41]
      recovery: "Run: gemini -p 'hello' (interactive, opens browser)"
    timeout: 360             # Gemini tends to be slower
  codex:
    enabled: true
    command: codex exec
    flags:
      - "--skip-git-repo-check"
      - "-s read-only"
      - "--ephemeral"
    auth:
      check: "codex login status 2>/dev/null"
      timeout: 5
      failure_exit_codes: [1]
      recovery: "Run: codex login"
```

User-level defaults can be set in `~/.mmr/config.yaml` for settings that apply across all projects (e.g., which channels are installed on your machine). Project config overrides user config. CLI flags override everything.
Adding a new model CLI requires only a YAML config change — no code modifications to mmr. When a new model CLI ships, add its channel definition to .mmr.yaml and you're ready.
mmr uses a standardized P0-P3 severity classification across all channels:
| Level | Name | Definition | Gate Default |
|---|---|---|---|
| P0 | Critical | Will cause failure, data loss, security vulnerability, or fundamental architectural flaw | Blocks |
| P1 | High | Will cause bugs in normal usage, inconsistency, or blocks downstream work | Blocks |
| P2 | Medium | Improvement opportunity — style, naming, documentation, minor optimization | Blocks |
| P3 | Trivial | Personal preference, trivial nits | Informational |
With the default `fix_threshold: P2`, any P0, P1, or P2 finding fails the gate; a review passes only when every finding is P3.
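In code, the gate reduces to an ordered severity comparison (a sketch, not mmr's implementation):

```typescript
const SEVERITIES = ["P0", "P1", "P2", "P3"] as const; // index 0 = most severe
type Severity = (typeof SEVERITIES)[number];

// A finding fails the gate when it is at or above the threshold (default P2).
function blocksGate(finding: Severity, threshold: Severity = "P2"): boolean {
  return SEVERITIES.indexOf(finding) <= SEVERITIES.indexOf(threshold);
}
```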
When multiple channels return findings, mmr applies consensus rules:
| Scenario | Confidence | Action |
|---|---|---|
| 2+ channels flag same location, same severity | High | Report at agreed severity |
| 2+ channels flag same location, different severity | Medium | Report at higher severity |
| All channels approve (no findings) | High | Gate passed |
| One channel flags P0, others approve | High | Report P0 (critical from any source) |
| One channel flags P1/P2, others approve | Medium | Report with attribution |
| Channels contradict each other | Low | Present both for user adjudication |
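A hedged sketch of these consensus rules; the `Finding` shape and exact location matching are assumptions, and the contradiction case is omitted for brevity:

```typescript
type Severity = "P0" | "P1" | "P2" | "P3";
interface Finding { channel: string; location: string; severity: Severity; }

function reconcile(findings: Finding[]) {
  // Group findings that point at the same location.
  const byLocation = new Map<string, Finding[]>();
  for (const f of findings) {
    const group = byLocation.get(f.location) ?? [];
    group.push(f);
    byLocation.set(f.location, group);
  }
  return [...byLocation.entries()].map(([location, group]) => {
    const severities = [...new Set(group.map((f) => f.severity))].sort();
    const worst = severities[0]; // lexical order happens to match severity order
    const confidence =
      group.length >= 2
        ? (severities.length === 1 ? "high" : "medium") // agree vs. differ on severity
        : worst === "P0"
          ? "high"                                      // P0 from any single source
          : "medium";                                   // unique finding, attributed
    return { location, severity: worst, confidence, channels: group.map((f) => f.channel) };
  });
}
```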
- Claude reviews first — completes its own structured multi-pass review using different review lenses (coverage, consistency, quality, downstream readiness)
- Independent external review — the document being reviewed is sent to each available CLI. They don't see Claude's findings or each other's output — every review is independent.
- Findings are reconciled — Scaffold (or `mmr`) merges all findings by confidence level:
| Scenario | Confidence | Action |
|---|---|---|
| Both models flag the same issue | High | Fix immediately |
| Both models approve | High | Proceed confidently |
| One flags P0, other approves | High | Fix it (P0 is critical) |
| One flags P1, other approves | Medium | Review before fixing |
| Models contradict each other | Low | Present both to user |
Scaffold verifies CLI authentication before every dispatch. If a token has expired, it tells you and provides the command to re-authenticate — it never silently skips a review.
Multi-model review activates automatically at depth 4-5 during any review or validation step — that's 20 steps in total, including all domain reviews (review-prd, review-architecture, review-security, etc.) and all 7 validation checks (traceability, scope creep, implementability, etc.).
At depth 1-3, reviews are Claude-only — still thorough with multiple passes, but single-perspective. You control depth globally during scaffold init, per session ("Use depth 5 for everything"), or per step ("Run review-security at depth 5").
- Depth 4 or 5 — set during `scaffold init` or override per step
- At least one additional CLI — Codex or Gemini (or both for triple-model review)
- Valid authentication — Scaffold checks before every dispatch and tells you if credentials need refreshing (sketched below)
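Those pre-dispatch auth probes could look roughly like this, using the probe commands shown in the `.mmr.yaml` example earlier (the wrapper functions are hypothetical):

```typescript
import { spawnSync } from "node:child_process";

// Exit code 0 from `codex login status` means the token is still valid.
function codexAuthenticated(): boolean {
  return spawnSync("codex", ["login", "status"]).status === 0;
}

// NO_BROWSER=true keeps Gemini from popping an OAuth browser mid-run.
function geminiAuthenticated(): boolean {
  const probe = spawnSync("gemini", ["-p", "respond with ok"], {
    env: { ...process.env, NO_BROWSER: "true" },
    timeout: 5_000,
  });
  return probe.status === 0;
}
```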
Not every project needs all 60 steps. Choose a methodology when you run scaffold init:
deep — All steps enabled. Comprehensive analysis of every angle — domain modeling, ADRs, security review, traceability matrix, the works. At depth 4-5, review steps dispatch to Codex/Gemini CLIs for multi-model validation. Best for complex systems, team projects, or when you want thorough documentation.

mvp — Only 7 critical steps: `create-prd`, `review-prd`, `user-stories`, `review-user-stories`, `tdd`, `implementation-plan`, and `implementation-playbook`. Minimal ceremony — get to code fast. Best for prototypes, hackathons, or solo projects.

custom — You choose which steps to enable and set a default depth (1-5). You can also override depth per step. Best when you know which parts of the pipeline matter for your project.
You can change methodology mid-pipeline with scaffold init --methodology <preset>. Scaffold preserves your completed work and adjusts what's remaining.
| Command | What It Does |
|---|---|
| `scaffold init` | Initialize `.scaffold/` state, then auto-build hidden adapter artifacts |
| `scaffold run <step>` | Execute a pipeline step (assembles and outputs the full prompt) |
| `scaffold build` | Generate hidden adapter output under `.scaffold/generated/` and update the managed `.gitignore` block |
| `scaffold adopt` | Bootstrap state from existing artifacts (brownfield projects) |
| `scaffold skip <step> [<step2>...]` | Skip one or more steps with a reason |
| `scaffold complete <step>` | Mark a step as completed (for steps executed outside `scaffold run`) |
| `scaffold reset <step>` | Reset a step back to pending |
| `scaffold status [--compact]` | Show pipeline progress (`--compact` shows only remaining work). Warns if generated adapter output is stale. |
| `scaffold next` | List next unblocked step(s) |
| `scaffold check <step>` | Check if a conditional step applies to this project |
| `scaffold validate` | Validate meta-prompts, config, state, and dependency graph |
| `scaffold list` | List all steps with status |
| `scaffold info <step>` | Show full metadata for a step |
| `scaffold version` | Show Scaffold version |
| `scaffold update` | Update to the latest version |
| `scaffold dashboard` | Open a visual progress dashboard in your browser |
| `scaffold decisions` | Show all logged decisions |
| `scaffold knowledge` | Manage project-local knowledge base overrides |
| `scaffold skill install` | Install scaffold skills into the current project (automatic — rarely needed manually) |
| `scaffold skill list` | Show available skills and installation status |
| `scaffold skill remove` | Remove scaffold skills from the current project |
```bash
# Initialize a new project with deep methodology
scaffold init

# Run a specific step
scaffold run create-prd

# See what's next
scaffold next

# Check full pipeline status
scaffold status

# See only remaining work
scaffold status --compact

# Skip multiple steps at once
scaffold skip design-system add-e2e-testing --reason "backend-only project"

# Check if a step applies before running it
scaffold check add-e2e-testing
# → Applicable: yes | Platform: web | Brownfield: no | Mode: fresh

scaffold check automated-pr-review
# → Applicable: yes | GitHub remote: yes | Available CLIs: codex, gemini | Recommended: local-cli (dual-model)

scaffold check ai-memory-setup
# → Rules: no | MCP server: none | Hooks: none | Mode: fresh

# Re-run a completed step in update mode
scaffold reset review-prd --force
scaffold run review-prd

# Open the visual dashboard
scaffold dashboard
```

Scaffold ships with 60 domain expertise entries organized in seven categories:
- core/ (25 entries) — eval craft, testing strategy, domain modeling, API design, database design, system architecture, ADR craft, security best practices, operations, task decomposition, user stories, UX specification, design system tokens, user story innovation, AI memory management, coding conventions, tech stack selection, project structure patterns, task tracking, CLAUDE.md patterns, multi-model review dispatch, review step template, dev environment, git workflow patterns, automated review tooling
- product/ (3 entries) — PRD craft, PRD innovation, gap analysis
- review/ (13 entries) — review methodology (shared), plus domain-specific review passes for PRD, user stories, domain modeling, ADRs, architecture, API design, database design, UX specification, testing, security, operations, implementation tasks
- validation/ (7 entries) — critical path analysis, cross-phase consistency, scope management, traceability, implementability, decision completeness, dependency validation
- finalization/ (3 entries) — implementation playbook, developer onboarding, apply-fixes-and-freeze
- execution/ (4 entries) — TDD execution loop, task claiming strategy, worktree management, enhancement workflow
- tools/ (3 entries) — release management, version strategy, session analysis
Each pipeline step declares which knowledge entries it needs in its frontmatter. The assembly engine injects them automatically. Knowledge files with a `## Deep Guidance` section are optimized for the CLI — only the deep guidance content is loaded into the assembled prompt, skipping the summary to avoid redundancy with the prompt text.
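As an illustrative sketch of the shape (field names follow the descriptions in this document; `docs/prd.md` and the exact schema are hypothetical), a step's frontmatter might look like:

```yaml
# content/pipeline/create-prd.md — frontmatter sketch, not the authoritative schema
---
dependencies: []        # steps that must complete before this one
outputs:
  - docs/prd.md         # hypothetical artifact path this step produces
knowledge-base:         # entries the assembly engine injects at runtime
  - prd-craft
  - prd-innovation
---
```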
Teams can create project-specific knowledge entries in `.scaffold/knowledge/` that layer over the global entries:

```bash
scaffold knowledge update testing-strategy "We use Playwright for all E2E tests, Jest for unit tests"
scaffold knowledge list                    # See all entries (global + local)
scaffold knowledge show testing-strategy   # View effective content
scaffold knowledge reset testing-strategy  # Remove override, revert to global
```

Local overrides are committable — the whole team shares enriched, project-specific guidance.
Once your project is scaffolded and you're building features, two categories of commands are available:
These are stateless pipeline steps — they appear in `scaffold next` once Phase 14 is complete and can be run repeatedly:
| Command | When to Use |
|---|---|
| `scaffold run single-agent-start` | Start the autonomous implementation loop — Claude picks up tasks and builds. |
| `scaffold run single-agent-resume` | Resume where you left off after closing Claude Code. |
| `scaffold run multi-agent-start` | Start parallel implementation with multiple agents in worktrees. |
| `scaffold run multi-agent-resume` | Resume parallel agent work after a break. |
| `scaffold run quick-task` | Create a focused task for a bug fix, refactor, or small improvement. |
| `scaffold run new-enhancement` | Add a new feature to an already-scaffolded project. Updates the PRD, creates new user stories, and sets up tasks with dependencies. |
These are orthogonal to the pipeline — usable at any time, not tied to pipeline state. They're defined in `content/tools/` with `category: tool` frontmatter. Run `scaffold list --section tools` for a complete listing (or `--verbose` for argument hints, `--format json` for machine-readable output):
| Command | When to Use |
|---|---|
| `scaffold run version-bump` | Mark a milestone with a version number without the full release ceremony. |
| `scaffold run release` | Run your project's release ceremony — changelog plus whatever release artifacts that project defines. Supports `--dry-run`, `current`, and `rollback`. |
| `scaffold run version` | Show the current Scaffold version. |
| `scaffold run update` | Update Scaffold to the latest version. |
| `scaffold run dashboard` | Open a visual progress dashboard in your browser. |
| `scaffold run prompt-pipeline` | Print the full pipeline reference table. |
| `scaffold run review-code` | Run all 3 code review channels on local code before commit or push. |
| `scaffold run review-pr` | Run all 3 code review channels (Codex CLI, Gemini CLI, Superpowers) on a PR. |
| `scaffold run post-implementation-review` | Full 3-channel codebase review after an AI agent completes all tasks — checks requirements coverage, security, architecture alignment, and more. |
| `scaffold run session-analyzer` | Analyze Claude Code session logs for patterns and insights. |
Use `scaffold run review-code` before commit or push when you want a local gate on the current delivery candidate. Use `scaffold run review-pr` after a GitHub PR exists.
Run any of these via the CLI or ask the scaffold runner skill in Claude Code or Gemini.
**`scaffold run version-bump`**

Bumps the version number and updates the changelog, but doesn't create tags, push, or run the formal release ceremony. Think of it as a checkpoint.

**`scaffold run release`**

The AI analyzes your commits since the last release, suggests whether this is a major, minor, or patch version bump, and walks you through:

- Running your project's tests
- Updating the version number in your project files
- Generating a changelog entry
- Executing the release artifacts your project defines

Depending on the target project, that may include a Git tag, hosted release, package publish, deployment, registry update, or another project-specific release step. `scaffold run release` is intentionally generic; it follows the target project's own documented workflow rather than assuming npm or GitHub for every project.

Options: `--dry-run` to preview, `minor`/`major`/`patch` to specify the bump, `current` to release an already-bumped version, `rollback` to undo.
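Putting the options together, a plausible set of invocations (the argument placement shown here is an assumption; the option names are as listed above):

```bash
scaffold run release --dry-run   # preview what the release would do
scaffold run release minor       # override the suggested bump
scaffold run release current     # release an already-bumped version
scaffold run release rollback    # undo the last release
```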
| Term | What It Means |
|---|---|
| Assembly engine | The runtime system that constructs full 7-section prompts from meta-prompts, knowledge entries, project context, and methodology settings. |
| CLAUDE.md | A configuration file in your project root that tells Claude Code how to work in your project. |
| Depth | A 1-5 scale controlling how thorough each step's analysis is, from MVP-focused (1) to exhaustive (5). |
| Frontmatter | The YAML metadata block at the top of meta-prompt files, declaring dependencies, outputs, knowledge entries, and other configuration. |
| Knowledge base | 60 domain expertise entries that get injected into prompts. Can be extended with project-local overrides. |
| MCP | Model Context Protocol. A way for Claude to use external tools like a headless browser. |
| Meta-prompt | A short intent declaration in content/pipeline/ that gets assembled into a full prompt at runtime. |
| mmr | Multi-Model Review CLI (@zigrivers/mmr). Standalone tool for async multi-model code review dispatch, reconciliation, and severity gating. |
| Methodology | A preset (deep, mvp, custom) controlling which steps run and at what depth. |
| Multi-model review | Independent validation from Codex/Gemini CLIs at depth 4-5, catching blind spots a single model misses. |
| PRD | Product Requirements Document. The foundation for everything Scaffold builds. |
| Runner skill | Auto-activated Claude Code/Gemini skill that surfaces decision points before executing pipeline steps. |
| Worktrees | A git feature for multiple working copies. Scaffold uses these for parallel agent execution. |
**I ran a command and nothing happened.**
Make sure Scaffold is installed — run `scaffold version` in your terminal.

**Which steps can I skip?**
Use `scaffold skip <step> --reason "..."` to skip any step. You can skip multiple steps at once: `scaffold skip design-system add-e2e-testing --reason "backend-only"`. The mvp preset only enables 7 critical steps by default. With the custom preset, you choose exactly which steps to run.

**Can I go back and re-run a step?**
Yes. Use `scaffold reset <step> --force` to reset it to pending, then `scaffold run <step>`. When re-running a completed step, Scaffold uses update mode — it loads the existing artifact and generates improvements rather than starting from scratch.

**Do I need to run every step in one sitting?**
No. Pipeline state is persisted in `.scaffold/state.json`. Run `scaffold status` when you come back to see where you left off, or `scaffold next` for what's unblocked.

**What if Claude asks me a question I don't know the answer to?**
Say you're not sure. Claude suggests reasonable defaults and explains the trade-offs. You can revisit decisions later.

**Can I use this for an existing project?**
Yes. Run `scaffold init` — the project detector will identify it as brownfield and suggest the deep methodology. Use `scaffold adopt` to bootstrap state from existing artifacts.

**How do I customize the knowledge base for my project?**
Use `scaffold knowledge update <name>` to create a project-local override in `.scaffold/knowledge/`. It layers over the global entry and is committable for team sharing.

**How do I check if an optional step applies to my project?**
Run `scaffold check <step>`. For example, `scaffold check add-e2e-testing` detects whether your project has a web or mobile frontend. `scaffold check automated-pr-review` checks for a GitHub remote and available review CLIs.
**Codex CLI fails with "stdin is not a terminal"**
Use `codex exec "prompt"` (headless mode), not bare `codex "prompt"` (interactive TUI). The multi-model-dispatch skill documents the correct invocation patterns.

**Codex CLI fails with "Not inside a trusted directory"**
Add the `--skip-git-repo-check` flag: `codex exec --skip-git-repo-check -s read-only --ephemeral "prompt"`. This is required when the project hasn't initialized git yet.

**Gemini CLI hangs on "Opening authentication page" or returns empty output**
Gemini's child process relaunch shows a consent prompt that hangs in non-TTY shells. All scaffold Gemini invocations now include `NO_BROWSER=true` to suppress this. If you're invoking Gemini manually, prepend `NO_BROWSER=true gemini -p "..."`. If auth tokens have actually expired, run `! gemini -p "hello"` to re-authenticate interactively. For CI/headless: set the `GEMINI_API_KEY` env var instead of OAuth.

**Codex CLI auth expired ("refresh token", "sign in again")**
Run `! codex login` to re-authenticate interactively. For CI/headless: set the `CODEX_API_KEY` env var. Check auth status with `codex login status`.

**How does Scaffold invoke Codex/Gemini under the hood?**
Scaffold handles CLI invocation automatically — you never need to type these commands. If you're debugging or curious, here are the headless invocation patterns:

```bash
# Codex (headless mode — use "exec", NOT bare "codex")
codex exec --skip-git-repo-check -s read-only --ephemeral "Review this artifact..." 2>/dev/null

# Gemini (headless mode — use "-p" flag; NO_BROWSER prevents the consent prompt hang)
NO_BROWSER=true gemini -p "Review this artifact..." --output-format json --approval-mode yolo 2>/dev/null
```

These are documented in detail in the multi-model-dispatch skill.
**`mmr review` dispatches but no channels return results**
Check auth: `mmr config test`. If channels show auth failures, re-authenticate with the recovery command shown. If channels are installed but the review hangs, check the per-channel timeout in `.mmr.yaml` — some models take 3-5 minutes for large diffs. Increase the timeout to 360-600 seconds for large PRs.

**`mmr results` says "gate failed" but I disagree with the findings**
Use `mmr results <job-id> --format text` to see the full reconciled findings with source attribution and confidence scores. Single-source findings with "unique" agreement are less certain than "consensus" findings. Override the threshold for a specific review: `mmr review --pr 47 --fix-threshold P1` (only gate on P0 and P1).

**How do I add a new AI model CLI to mmr?**
Add a channel definition to `.mmr.yaml` with the command, auth check, and output parser. No code changes needed. See the mmr Configuration section for the full schema.
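As a rough sketch of what such a channel definition could look like (every field name below is illustrative; defer to the mmr Configuration section for the real schema):

```yaml
# .mmr.yaml — hypothetical channel sketch, not the authoritative schema
channels:
  my-model:
    command: "mymodel review --json"     # how mmr invokes the CLI (assumed field)
    auth_check: "mymodel login status"   # how mmr verifies credentials (assumed field)
    parser: json                         # output parser to apply (assumed field)
    timeout: 360                         # seconds; raise for large diffs (see above)
```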
**I upgraded and my pipeline shows old step names**
Run `scaffold status` — the state manager automatically migrates old step names (e.g., `add-playwright` → `add-e2e-testing`, `multi-model-review` → `automated-pr-review`) and removes retired steps.
The project is a TypeScript CLI (`@zigrivers/scaffold`) built with yargs, targeting ES2022/Node16 ESM.

```
src/
├── cli/commands/     # 19 CLI command implementations
├── cli/middleware/   # Project root detection, output mode resolution
├── cli/output/       # Output strategies (interactive, json, auto)
├── core/assembly/    # Assembly engine — meta-prompt → full prompt
├── core/adapters/    # Platform adapters (Claude Code, Gemini, Codex, Universal)
├── core/dependency/  # DAG builder, topological sort, eligibility
├── core/knowledge/   # Knowledge update assembler
├── state/            # State manager, lock manager, decision logger
├── config/           # Config loading, migration, schema validation
├── project/          # Project detector, CLAUDE.md/GEMINI.md managers, adoption
├── wizard/           # Init wizard (interactive + --auto)
├── validation/       # Config, state, frontmatter validators
├── types/            # TypeScript types and enums
├── utils/            # FS helpers, errors, levenshtein
└── dashboard/        # HTML dashboard generator
```
- **Assembly engine** (`src/core/assembly/engine.ts`) — Pure orchestrator with no I/O. Constructs 7-section prompts from meta-prompt + knowledge + context + methodology + instructions + depth guidance.
- **State manager** (`src/state/state-manager.ts`) — Atomic writes via tmp + `fs.renameSync()` (see the sketch after this list). Tracks step status, in-progress records, and next-eligible cache. Includes a migration system for step renames and retired steps.
- **Dependency graph** (`src/core/dependency/`) — Kahn's algorithm topological sort with phase-aware ordering and cycle detection.
- **Platform adapters** (`src/core/adapters/`) — 3-step lifecycle (initialize → generateStepWrapper → finalize) producing `.scaffold/generated/claude-code/commands/`, `.scaffold/generated/codex/AGENTS.md`, `.scaffold/generated/universal/prompts/README.md`, and Gemini project-local files under `.agents/skills/`, `GEMINI.md`, and `.gemini/commands/scaffold/`.
- **Project detector** (`src/project/detector.ts`) — Scans for file system signals to classify projects as greenfield, brownfield, or v1-migration.
- **Check command** (`src/cli/commands/check.ts`) — Applicability detection for conditional steps (platform detection, GitHub remote detection, CLI availability).
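The atomic-write pattern mentioned for the state manager is a standard Node.js idiom; a minimal self-contained sketch (not Scaffold's actual code; the function name and state shape are invented for illustration):

```typescript
import { writeFileSync, renameSync } from "node:fs";

// Write the new state to a temp file in the same directory, then rename it
// over the target. rename() replaces the destination atomically on POSIX
// filesystems, so a concurrent reader never sees a half-written state.json.
function writeStateAtomically(statePath: string, state: unknown): void {
  const tmpPath = `${statePath}.tmp`;
  writeFileSync(tmpPath, JSON.stringify(state, null, 2), "utf8");
  renameSync(tmpPath, statePath);
}

// Hypothetical usage and state shape:
writeStateAtomically(".scaffold/state.json", {
  steps: { "create-prd": "completed" },
});
```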
All build inputs live under `content/`:

```
content/
├── pipeline/      # 60 meta-prompts organized by 16 phases (phases 0-15, including build)
├── tools/         # 10 tool meta-prompts (stateless, category: tool)
├── knowledge/     # 61 domain expertise entries (core, product, review, validation, finalization, execution, tools)
├── methodology/   # 3 YAML presets (deep, mvp, custom)
└── skills/        # Skill templates with {{markers}} for multi-platform resolution (includes mmr)
```
`@zigrivers/mmr` lives in `packages/mmr/` as an independent workspace package:

```
packages/mmr/
├── src/
│   ├── commands/    # review, status, results, config, jobs (yargs)
│   ├── config/      # Zod schema, 4-layer config loader, builtin channel presets
│   ├── core/        # job-store, auth, prompt assembly, parser, reconciler, dispatcher
│   └── formatters/  # json, text, markdown output formatters
├── templates/       # Immutable core review prompt (severity defs, output format)
└── tests/           # 60 tests across 11 files
```
Generated output (gitignored):

```
skills/   # Resolved skills (built from content/skills/ templates)
dist/     # Compiled TypeScript output
```
- Vitest for unit and E2E tests (73 test files, 997 tests, 90% coverage)
- Performance benchmarks — assembly p95 < 500ms, state I/O p95 < 100ms, graph build p95 < 2s
- Shell script tests via bats (70 tests covering dashboard, worktree, frontmatter, install/uninstall)
- Meta-evals — 39 cross-system consistency checks validating pipeline ↔ command ↔ knowledge integrity
- Coverage thresholds — CI enforces 84/80/88/84 minimums (statements/branches/functions/lines)
- Run: `npm test` (unit + E2E), `npm run test:perf` (performance), `make check` (bash gates), `make check-all` (full CI gate)
- Meta-prompt content lives in `content/pipeline/` — edit the relevant `.md` file
- If you changed adapter behavior, run `scaffold build` in a test project and inspect the generated artifacts under `.scaffold/generated/` plus the Gemini project-local outputs (`.agents/skills/`, `GEMINI.md`, `.gemini/commands/scaffold/`)
- Run `make check-all` (lint + type-check + test + evals) before submitting
- Knowledge entries live in `content/knowledge/` — follow the existing frontmatter schema
- ADRs documenting architectural decisions are in `docs/architecture/adrs/`
- Run `make hooks` to install git hooks (ShellCheck, frontmatter validation)
MIT