diff --git a/README.md b/README.md
index 01acfe6..9656f37 100644
--- a/README.md
+++ b/README.md
@@ -2,28 +2,20 @@
A type-safe, structured testing library for Jinja templates.
-Stop writing brittle substring assertions. Test your templates with structure, validation, and confidence.
-
## Installation
```bash
-uv add jinjatest
-```
-
-With YAML support:
-```bash
-uv add jinjatest[yaml]
+uv add jinjatest  # or: uv add "jinjatest[yaml]"
```
-## Why jinjatest?
-
-Testing Jinja templates typically means rendering and asserting on raw strings. This is brittle (whitespace, ordering, punctuation) and doesn't scale. **jinjatest** provides:
+## Features
- **Type-safe context validation** with Pydantic
- **Structured output parsing** (JSON, YAML, XML, markdown, fenced code blocks)
- **Test instrumentation** with anchors and traces
- **Pytest integration** with fixtures and snapshots
- **StrictUndefined by default** - missing variables fail loudly
+- **Template coverage** - branch coverage reporting for your Jinja templates
## Quick Start
@@ -51,177 +43,100 @@ def test_welcome_pro_user():
### Structured Output (JSON)
-The most robust way to test templates - assert on structure, not strings:
-
```jinja2
-{# prompts/config.j2 #}
-{% set config = {
- "model": model_name,
- "temperature": temperature,
- "features": features | default([])
-} %}
+{% set config = {"model": model_name, "temperature": temperature} %}
{{ config | tojson }}
```
```python
def test_config_output():
- rendered = spec.render({
- "model_name": "gpt-4",
- "temperature": 0.7,
- "features": ["streaming", "tools"]
- })
-
+ rendered = spec.render({"model_name": "gpt-4", "temperature": 0.7})
config = rendered.as_json()
assert config["model"] == "gpt-4"
assert config["temperature"] == 0.7
- assert "streaming" in config["features"]
```
### Structured Output (XML)
-Parse XML output, including fragments with multiple root elements:
+Supports fragments with multiple root elements:
```jinja2
-{# prompts/tool_calls.j2 #}
-  <query>{{ query }}</query>
+<tool name="search"><query>{{ query }}</query></tool>
+
 {% if include_filter %}
-  <criteria>{{ filter_criteria }}</criteria>
+<tool name="filter"><criteria>{{ criteria }}</criteria></tool>
{% endif %}
```
```python
def test_xml_tool_calls():
- rendered = spec.render({
- "query": "python tutorials",
- "include_filter": True,
- "filter_criteria": "beginner"
- })
-
- # Parse as fragments (multiple roots allowed)
+ rendered = spec.render({"query": "python tutorials", "include_filter": True, "criteria": "beginner"})
tools = rendered.as_xml() # Returns list[XMLElement]
- assert len(tools) == 2
assert tools[0].attrib["name"] == "search"
assert tools[0].find("query").text == "python tutorials"
-
- # For single-root XML, use strict=True
- # rendered.as_xml(strict=True) # Returns XMLElement or raises
+ # Use strict=True for single-root XML
```
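Fragment parsing can be pictured as wrapping the output in a synthetic root before handing it to a standard XML parser. A minimal sketch of the idea using only the standard library (not jinjatest's actual implementation; the `jt-root` wrapper name is made up for illustration):

```python
import xml.etree.ElementTree as ET

def parse_fragments(text: str) -> list[ET.Element]:
    """Parse XML text that may contain multiple root elements."""
    # Wrap in a synthetic root so ElementTree accepts sibling roots
    root = ET.fromstring(f"<jt-root>{text}</jt-root>")
    return list(root)

tools = parse_fragments(
    '<tool name="search"><query>python tutorials</query></tool>'
    '<tool name="filter"><criteria>beginner</criteria></tool>'
)
assert tools[0].attrib["name"] == "search"
assert tools[0].find("query").text == "python tutorials"
```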
### Fenced Code Blocks
-Extract and parse code blocks from markdown-style output:
-
-```jinja2
-{# prompts/assistant.j2 #}
-Here's the configuration:
-
-```json
-{"setting": "{{ setting_name }}", "value": {{ setting_value }}}
-```
-
-And here's an alternative:
-
-```json
-{"setting": "{{ setting_name }}", "value": {{ setting_value * 2 }}}
-```
-```
+Extract code blocks from markdown-style output:
```python
def test_fenced_json_blocks():
rendered = spec.render({"setting_name": "timeout", "setting_value": 30})
-
- # Extract all ```json blocks
- configs = rendered.as_json_blocks()
- assert len(configs) == 2
+ configs = rendered.as_json_blocks() # Extracts all ```json blocks
assert configs[0]["value"] == 30
- assert configs[1]["value"] == 60
-
-# Also available:
-# rendered.as_yaml_blocks() # Extract ```yaml blocks
-# rendered.as_xml_blocks() # Extract ```xml blocks
+ # Also: as_yaml_blocks(), as_xml_blocks()
```
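Conceptually, block extraction is a scan for fenced regions followed by a parse of each body. A rough stdlib sketch of what `as_json_blocks()` does (illustrative only, not the library's implementation):

```python
import json
import re

FENCE = "`" * 3  # build the fence string to avoid literal fences in this example

def json_blocks(text: str) -> list[dict]:
    """Extract and parse every fenced json block in the text."""
    pattern = re.compile(FENCE + r"json\n(.*?)" + FENCE, re.DOTALL)
    return [json.loads(block) for block in pattern.findall(text)]

output = f'{FENCE}json\n{{"value": 30}}\n{FENCE}\n{FENCE}json\n{{"value": 60}}\n{FENCE}\n'
configs = json_blocks(output)
assert configs[0]["value"] == 30
```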
### Section Testing with Anchors
-Test specific sections without fragile delimiters:
-
```jinja2
-{# prompts/chat.j2 #}
{#jt:anchor:system#}
-System rules:
-- Be helpful
-- Be concise
+System rules: Be helpful, be concise.
{#jt:anchor:user#}
User: {{ user_name }}
Request: {{ request }}
-{#jt:anchor:context#}
{% if context_items %}
-Context:
-{% for item in context_items %}
-- {{ item }}
-{% endfor %}
+{#jt:anchor:context#}
+Context: {% for item in context_items %}- {{ item }}{% endfor %}
{% else %}
{#jt:trace:no_context#}
-No additional context.
{% endif %}
```
```python
def test_sections():
- rendered = spec.render({
- "user_name": "Ada",
- "request": "Help me code",
- "context_items": ["doc1", "doc2"],
- })
-
- # Test sections in isolation
+ rendered = spec.render({"user_name": "Ada", "request": "Help", "context_items": ["doc1"]})
assert rendered.section("user").contains("Ada")
assert rendered.section("system").not_contains("Ada")
- assert rendered.section("context").contains("doc1")
def test_branch_coverage():
- rendered = spec.render({
- "user_name": "Ada",
- "request": "Help",
- "context_items": [], # Empty - triggers no_context branch
- })
-
- # Verify which branches were taken
- assert rendered.has_trace("no_context")
+ rendered = spec.render({"user_name": "Ada", "request": "Help", "context_items": []})
+ assert rendered.has_trace("no_context") # Verify branch was taken
```
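Under the hood, anchors amount to splitting the rendered text on named markers. A simplified sketch of the idea (the post-transform marker syntax `<<jt:anchor:NAME>>` here is hypothetical, purely for illustration):

```python
import re

def split_sections(text: str) -> dict[str, str]:
    """Split text into named sections delimited by anchor markers."""
    # re.split with one capture group yields [before, name1, body1, name2, body2, ...]
    parts = re.split(r"<<jt:anchor:(\w+)>>", text)
    return dict(zip(parts[1::2], parts[2::2]))

text = "<<jt:anchor:system>>Be concise.<<jt:anchor:user>>User: Ada"
sections = split_sections(text)
assert "Ada" in sections["user"]
assert "Ada" not in sections["system"]
```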
### Macros as Functions
-Test macros like regular Python functions:
-
```jinja2
{% macro build_prompt(user_input, context=None) %}
-{% set parts = [] %}
-{% do parts.append("You are a helpful assistant.") %}
-{% do parts.append("User: " ~ user_input) %}
-{% if context %}
-{% do parts.append("Context: " ~ context) %}
-{% endif %}
-{{ parts | join("\n") }}
+You are a helpful assistant.
+User: {{ user_input }}
+{% if context %}Context: {{ context }}{% endif %}
{% endmacro %}
```
```python
def test_prompt_builder():
build_prompt = spec.macro("build_prompt")
-
- result = build_prompt("Hello")
- assert "User: Hello" in result
- assert "Context:" not in result
-
- result = build_prompt("Hello", context="Background info")
- assert "Context: Background info" in result
+ assert "User: Hello" in build_prompt("Hello")
+ assert "Context: Info" in build_prompt("Hello", context="Info")
```
## API Reference
@@ -229,221 +144,132 @@ def test_prompt_builder():
### TemplateSpec
```python
-# From file
-spec = TemplateSpec.from_file(
- "template.j2",
- context_model=MyModel, # Optional Pydantic model
- template_dir="templates/", # Optional base directory
- strict_undefined=True, # Default: True
- test_mode=True, # Enable instrumentation
- use_comment_markers=True, # Transform {#jt:...#} comments (default: True)
-)
-
-# From string
-spec = TemplateSpec.from_string(
- "Hello {{ name }}!",
- context_model=MyModel,
- use_comment_markers=True, # Transform {#jt:...#} comments (default: True)
-)
-
-# Render
+spec = TemplateSpec.from_file("template.j2", context_model=MyModel)
+spec = TemplateSpec.from_string("Hello {{ name }}!", context_model=MyModel)
rendered = spec.render({"name": "World"})
-rendered = spec.render(MyModel(name="World"))
-
-# Access macro
my_macro = spec.macro("macro_name")
-result = my_macro("arg1", "arg2")
```
+Options: `template_dir`, `strict_undefined=True`, `test_mode=True`, `use_comment_markers=True`
+
### RenderedPrompt
+**Properties:** `text`, `normalized`, `clean_text`, `lines`, `normalized_lines`, `trace_events`
+
+**Parsing:**
+```python
+rendered.as_json() # Parse as JSON (allow_comments=True for // comments)
+rendered.as_yaml() # Parse as YAML (requires pyyaml)
+rendered.as_xml(strict=False) # Parse as XML (strict=True for single root)
+rendered.as_json_blocks() # Extract ```json blocks
+rendered.as_yaml_blocks() # Extract ```yaml blocks
+rendered.as_xml_blocks() # Extract ```xml blocks
+rendered.as_markdown_sections() # Parse markdown headings
+```
+
+**Sections & Traces:**
+```python
+rendered.section("name") # Get section by anchor
+rendered.has_section("name") # Check section exists
+rendered.has_trace("event") # Check trace was recorded
+rendered.trace_count("event") # Count trace occurrences
+```
+
+**Query helpers:**
```python
-rendered.text # Raw rendered text
-rendered.normalized # Whitespace-normalized text
-rendered.clean_text # Text with anchor markers removed
-rendered.lines # List of lines
-rendered.normalized_lines # List of normalized lines
-
-# Parsing - Full Document
-rendered.as_json() # Parse as JSON
-rendered.as_json(allow_comments=True) # Parse JSON with // and /* */ comments
-rendered.as_yaml() # Parse as YAML (requires pyyaml)
-rendered.as_xml(strict=False) # Parse as XML (strict=True for single root)
-rendered.as_markdown_sections() # Parse markdown headings
-rendered.markdown_section("title") # Find markdown section by title
-
-# Parsing - Fenced Code Blocks
-rendered.as_json_blocks() # Extract all ```json blocks
-rendered.as_json_blocks(allow_comments=True) # With comment support
-rendered.as_yaml_blocks() # Extract all ```yaml blocks
-rendered.as_xml_blocks() # Extract all ```xml blocks
-
-# Sections (with instrumentation)
-rendered.section("name") # Get section by anchor name
-rendered.sections() # Get all sections
-rendered.has_section("name") # Check if section exists
-
-# Traces
-rendered.has_trace("event") # Check if trace was recorded
-rendered.trace_count("event") # Count occurrences of trace event
-rendered.trace_events # List of all trace events
-
-# Query helpers
-rendered.contains("text") # Check substring exists
-rendered.not_contains("text") # Check substring doesn't exist
-rendered.contains_line("text") # Check if any line contains text
-rendered.has_line("exact line") # Check if exact line exists
-rendered.matches(r"pattern") # Check regex matches
-rendered.find_all(r"pattern") # Find all regex matches
+rendered.contains("text") # Check substring
+rendered.not_contains("text") # Check absence
+rendered.matches(r"pattern") # Regex match
+rendered.find_all(r"pattern") # Find all matches
```
### PromptAsserts
```python
-a = PromptAsserts(rendered)
-a = PromptAsserts(rendered).normalized() # Use normalized text
-
-# Chainable assertions
-a.contains("text")
-a.not_contains("text")
-a.contains_line("partial line match")
-a.has_exact_line("exact line")
-a.regex(r"pattern")
-a.not_regex(r"pattern")
-a.equals("exact text")
-a.equals_json({"key": "value"})
-a.line_count(5)
-a.line_count_between(3, 10)
-
-# Trace assertions
-a.has_trace("event")
-a.not_has_trace("event")
-
-# Snapshots
-a.snapshot("snapshot_name", update=False)
+a = PromptAsserts(rendered).normalized()
+a.contains("text").not_contains("bad").regex(r"pattern")
+a.has_trace("event").snapshot("name")
```
### Instrumentation
-In templates, use comment-based markers to define sections and trace events:
-
```jinja2
-{#jt:anchor:section_name#} {# Mark section start #}
-{#jt:trace:event_name#} {# Record trace event #}
+{#jt:anchor:section_name#}
+{#jt:trace:event_name#}
```
-Comment markers are automatically transformed when `test_mode=True`. This allows jinjatest to be a dev-only dependency since the comments are valid Jinja syntax that render as empty strings in production.
-
-You can also use a pre-configured Jinja environment with `TemplateSpec`:
+Markers are automatically transformed when `test_mode=True`. The comments are valid Jinja that renders as empty strings in production, so jinjatest can remain a dev-only dependency.
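The transform can be pictured as rewriting marker comments into calls on a recorder object. A hypothetical regex sketch (the `_jt` recorder name is invented here; the real transform is internal to jinjatest):

```python
import re

def instrument(source: str) -> str:
    """Rewrite {#jt:trace:NAME#} comments into calls on a hypothetical
    _jt recorder; left untouched, the comments render as empty strings."""
    return re.sub(r"\{#jt:trace:(\w+)#\}", r"{{ _jt.trace('\1') }}", source)

out = instrument("{% else %}\n{#jt:trace:no_context#}\n{% endif %}")
assert "{{ _jt.trace('no_context') }}" in out
```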
+**Custom Jinja environment:**
```python
-from jinja2 import Environment, FileSystemLoader
-from jinjatest import TemplateSpec
-
-# Use your own Jinja environment
env = Environment(loader=FileSystemLoader("templates/"))
env.globals["my_filter"] = lambda x: x.upper()
-
-# TemplateSpec handles instrumentation automatically
spec = TemplateSpec.from_file("my_template.j2", env=env)
-rendered = spec.render({"name": "World"})
-
-# Check traces after rendering
-if rendered.has_trace("some_event"):
- print("Event was triggered")
```
## Pytest Integration
-jinjatest provides pytest fixtures automatically:
-
```python
def test_with_fixtures(template_from_string, jinja_env):
spec = template_from_string("Hello {{ name }}!")
- rendered = spec.render({"name": "World"})
- assert rendered.text == "Hello World!"
+ assert spec.render({"name": "World"}).text == "Hello World!"
def test_with_snapshots(snapshot_manager, template_from_string):
- spec = template_from_string("Hello {{ name }}!")
- rendered = spec.render({"name": "World"})
+ rendered = template_from_string("Hello {{ name }}!").render({"name": "World"})
snapshot_manager.compare_or_update("greeting", rendered.text)
```
-Update snapshots:
-```bash
-pytest --update-snapshots
-```
+Update snapshots: `pytest --update-snapshots`
## Advanced Configuration
-### Custom Environment
-
```python
-from jinjatest import create_environment, TemplateSpec
-
env = create_environment(
template_paths=["templates/", "shared/"],
mock_templates={"header.j2": "Mock Header"},
- strict_undefined=True,
- enable_do_extension=True,
- sandboxed=False,
filters={"my_filter": lambda x: x.upper()},
globals={"version": "1.0"},
)
-
spec = TemplateSpec.from_file("template.j2", env=env)
-```
-
-### Variable Validation (CI Guardrails)
-
-```python
-spec = TemplateSpec.from_file("template.j2")
-
-# Fail if template uses unexpected variables
-spec.assert_variables_subset_of({"user_name", "plan", "items"})
+spec.assert_variables_subset_of({"user_name", "plan", "items"}) # CI guardrails
```
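The variable guardrail can be reproduced with Jinja's own introspection API; a minimal sketch using `jinja2.meta.find_undeclared_variables` (an assumption about the mechanism, not necessarily how jinjatest implements it):

```python
from jinja2 import Environment, meta

def assert_variables_subset_of(source: str, allowed: set[str]) -> None:
    """Fail if the template references variables outside the allowed set."""
    found = meta.find_undeclared_variables(Environment().parse(source))
    unexpected = found - allowed
    assert not unexpected, f"unexpected template variables: {unexpected}"

assert_variables_subset_of("Hello {{ user_name }} ({{ plan }})",
                           {"user_name", "plan", "items"})
```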
## Template Coverage
-Track branch coverage for your Jinja templates to ensure all conditional paths are tested.
-
-### Quick Start
+jinjatest tracks branch coverage by instrumenting your templates at render time. When you use `TemplateSpec`, it automatically discovers all conditional branches (`if`, `elif`, `else`, `for` loops, macros, etc.) and records which paths are executed during tests. This lets you identify untested template logic without modifying your templates.
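A naive picture of branch discovery: scan the template source for decision points. The real instrumentation works on the parsed template, so this regex count is illustrative only:

```python
import re

def count_branches(source: str) -> int:
    """Naively count if/elif/else arms and for bodies as branches."""
    return len(re.findall(r"\{%-?\s*(?:if|elif|else|for)\b", source))

template = "{% if pro %}Pro{% elif trial %}Trial{% else %}Free{% endif %}"
assert count_branches(template) == 3
```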
```bash
-pytest --jt-cov
+pytest --jt-cov --jt-cov-fail-under=80 --jt-cov-report=term
```
-### CLI Options
+```
+==================== Jinja Template Coverage ====================
+Name                          Branches  Covered  Missing  Cover
+------------------------------------------------------------------
+templates/welcome.j2                 4        3        1    75%
+templates/email/confirm.j2           6        6        0   100%
+templates/components/nav.j2          8        5        3    62%
+------------------------------------------------------------------
+TOTAL                               18       14        4    78%
+```
| Option | Description |
|--------|-------------|
| `--jt-cov` | Enable template coverage |
| `--jt-cov-fail-under=N` | Fail if coverage below N% |
-| `--jt-cov-report=TYPE` | Report type: `term`, `term-missing`, `term-verbose`, `html`, `json`, `xml` |
-| `--jt-cov-html=DIR` | HTML report directory |
-| `--jt-cov-json=FILE` | JSON report file |
-| `--jt-cov-xml=FILE` | JUnit XML report file |
-| `--jt-cov-exclude=PATTERN` | Glob pattern to exclude templates |
-
-### Configuration (pyproject.toml)
+| `--jt-cov-report=TYPE` | `term`, `term-missing`, `html`, `json`, `xml` |
+| `--jt-cov-exclude=PATTERN` | Exclude templates by glob |
+**pyproject.toml:**
```toml
[tool.jinjatest.coverage]
enabled = true
fail_under = 80
report = ["term", "html"]
-html_dir = "jt-htmlcov"
-exclude_patterns = ["**/vendor/**", "*.partial.j2"]
+exclude_patterns = ["**/vendor/**"]
```
-### What's Tracked
-
-- `{% if %}` / `{% elif %}` / `{% else %}` branches
-- `{% for %}` loops (body and else)
-- `{% macro %}` definitions
-- `{% block %}` definitions (template inheritance)
-- `{% include %}` statements
-- Ternary expressions (`{{ x if cond else y }}`)
+**Tracked:** `if`/`elif`/`else`, `for` loops, `macro`, `block`, `include`, ternary expressions
## License