Local-first, inspectable prompt chaining for deliberate multi-step workflows. Pipelines are YAML files in pipelines/, and every run writes a fully inspectable artifact directory under runs/.
- Compose multi-stage prompt workflows as simple YAML pipelines
- Keep runs reproducible and auditable with on-disk artifacts
- Support local models via Ollama with optional OpenAI stages
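To make the idea concrete, a single-stage pipeline might look roughly like the sketch below. The field names (`stages`, `id`, `model`, `prompt`) are assumptions for illustration, not necessarily the repo's actual schema — see `pipelines/` for real examples.

```yaml
# Hypothetical sketch of a single-stage pipeline.
# Field names are illustrative, not taken from the repo's schema.
stages:
  - id: summarize
    model: qwen3:8b          # must already be pulled in Ollama
    prompt: |
      Write a short overview of {{topic}}.
```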
Requirements:
- Python 3
- Ollama running at `http://localhost:11434`
- Models referenced in your pipelines already pulled in Ollama (e.g., `qwen3:8b`)
PromptChain can optionally use the OpenAI API. This is opt-in and does not change the local-first default.
Requirements:
- `OPENAI_API_KEY` set in the environment (or `.env`; see `.env.example`)
- A pipeline or stage configured with `provider: openai`
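As a sketch of what a per-stage override might look like: only the `provider: openai` key is documented above; the other field names and the OpenAI model name are assumptions for illustration.

```yaml
# Hypothetical: mixing a local Ollama stage with an OpenAI stage.
# Only `provider: openai` is documented; the rest is illustrative.
stages:
  - id: draft
    model: qwen3:8b        # local Ollama model (local-first default)
    prompt: |
      Draft three ideas about {{topic}}.
  - id: polish
    provider: openai       # opt-in OpenAI usage for this stage only
    model: gpt-4o-mini     # assumed model name for illustration
    prompt: |
      Refine the draft above into one paragraph.
```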
Setup:

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Single stage:

```sh
python -m promptchain.cli run --pipeline pipelines/single.yaml --topic chess
```

Sequential chain:

```sh
python -m promptchain.cli run --pipeline pipelines/three_step.yaml --topic chess
```

Fan-out map stage:

```sh
python -m promptchain.cli run --pipeline pipelines/fanout_personas_jtbd.yaml --topic chess
```

JSON → downstream stage:

```sh
python -m promptchain.cli run --pipeline pipelines/json_then_use.yaml
```

Per-stage file inputs:

```sh
python -m promptchain.cli run --pipeline pipelines/file_inputs.yaml
```

Publish example:

```sh
python -m promptchain.cli run --pipeline pipelines/publish_example.yaml --topic chess
```

Every pipeline in `pipelines/` has a matching sample script in `scripts/` named `run_<pipeline>.zsh`. These scripts run the pipeline with a small set of inputs and validate that core artifacts were produced.
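A fan-out map stage runs one prompt once per input item. The sketch below shows what such a stage might look like; field names like `type: map`, `items`, and `{{item}}` are assumptions for illustration, not the repo's schema.

```yaml
# Hypothetical map stage: fan one prompt out over a list of inputs.
stages:
  - id: personas
    type: map              # assumed marker for a fan-out stage
    items:                 # one model call per item
      - beginner
      - club player
      - grandmaster
    model: qwen3:8b
    prompt: |
      Describe how a {{item}} approaches {{topic}}.
```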
Run a couple of examples:

```sh
scripts/run_single.zsh
scripts/run_three_step.zsh
scripts/run_fanout_personas_jtbd.zsh
```

OpenAI examples require `OPENAI_API_KEY` (copy `.env.example` to `.env` and set it, or export the variable):

```sh
scripts/run_openai_two_step.zsh
scripts/run_openai_concurrent_map.zsh
scripts/run_openai_batch_map.zsh
```

Each run creates a directory under `runs/<run_id>/` with:
- `run.json`: run metadata
- `logs/`: raw model outputs
- `stages/<stage_id>/`: per-stage outputs and artifacts
- `output/`: published deliverables (if any)
See docs/README.md for detailed usage, resume workflows, publishing behavior, and prompt context references.