Add Jira quarterly-initiative-report skill to pf-workflow plugin #48
Conversation
This skill generates comprehensive quarterly Jira status reports with:

- Progress tracking across epics with completion metrics
- RAG (Red/Amber/Green) status assessment
- Cross-project duplicate link analysis (critical for multi-team initiatives)
- Blocker identification and risk assessment
- Q+1 priority recommendations based on incomplete work
- Complete epic reference table with clickable Jira links

Key Features:

- Hybrid MCP/REST API support for maximum compatibility
- Handles cross-project work via duplicate links (AAP, MTV, CONSOLE, SAT, etc.)
- Prevents the "invisible work" problem by checking ALL epics for linked work
- Tool-agnostic: works in Claude Code, Cursor, and future AI tools
- Uses standard tools: curl and jq (no special dependencies)

Tested with:

- PatternFly Q1 2026 initiative (35 epics, 549 issues)
- Cross-project work spanning 6 different Jira projects
- Both direct children and linked epic scenarios

Benefits:

- Automates tedious manual report generation
- Ensures complete visibility of cross-project work
- Provides data-driven status assessments
- Saves hours per quarterly report

File: plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Lines: 320 (concise, under 500-line guideline)
Standards: Meets all ai-helpers repository requirements
@coderabbitai full review

✅ Actions performed

Full review triggered.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews, or the checkboxes below for quick actions:
📝 Walkthrough

Adds a new documentation-only skill.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
🚥 Pre-merge checks: ✅ 3 passed

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (1)
30-30: Replace tool-specific setup wording with tool-agnostic phrasing.

Line 30 calls out specific products ("Claude Code", "Cursor"). Reword this to generic assistant/tool settings language to stay fully tool-agnostic.
As per coding guidelines in CONTRIBUTING-SKILLS.md:116-129, skills must be tool-agnostic and avoid referencing a specific tool.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` at line 30, Update the wording in SKILL.md under the "Option 1: In AI tool settings" section to remove product names ("Claude Code", "Cursor") and replace them with tool-agnostic phrasing such as "your assistant or tool settings (e.g., settings.json or config file)"; locate the header "Option 1: In AI tool settings" and the sentence that currently lists “Claude Code settings.json, Cursor config” and reword it to a generic instruction about adjusting assistant/tool settings per CONTRIBUTING-SKILLS.md guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@SKILL-TEST-RESULTS.md`:
- Line 4: Replace the personal local path shown in the "Skill Location:" entry
(the string starting with "/Users/jawright/...") with a generic placeholder
(e.g., "/path/to/skill/skill.md" or "{SKILL_PATH}") and update the related
compliance assertion around the compliance claim (the statement referenced near
line 154) so it accurately reflects that only generic placeholders are used;
ensure the "Skill Location" header and the compliance line both use the same
non-identifying placeholder format.
---
Nitpick comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Line 30: Update the wording in SKILL.md under the "Option 1: In AI tool
settings" section to remove product names ("Claude Code", "Cursor") and replace
them with tool-agnostic phrasing such as "your assistant or tool settings (e.g.,
settings.json or config file)"; locate the header "Option 1: In AI tool
settings" and the sentence that currently lists “Claude Code settings.json,
Cursor config” and reword it to a generic instruction about adjusting
assistant/tool settings per CONTRIBUTING-SKILLS.md guidelines.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c905ec7b-302e-4dcd-a76e-021a8c109e16
📒 Files selected for processing (2)
SKILL-TEST-RESULTS.md
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
# Quarterly Initiative Report Skill - Test Results

**Test Date:** March 28, 2026
**Skill Location:** `/Users/jawright/.claude/skills/quarterly-initiative-report-test/skill.md`
Remove personal path details or update the compliance claim.
Line 4 includes a user-identifying local path (/Users/jawright/...), which conflicts with the Line 154 assertion that only generic placeholders are used and no personal information is present.
As per coding guidelines in CONTRIBUTING-SKILLS.md:116-129, skills/docs should meet repository standards, including safe and generic documentation patterns.
Also applies to: 154-154
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@SKILL-TEST-RESULTS.md` at line 4, Replace the personal local path shown in
the "Skill Location:" entry (the string starting with "/Users/jawright/...")
with a generic placeholder (e.g., "/path/to/skill/skill.md" or "{SKILL_PATH}")
and update the related compliance assertion around the compliance claim (the
statement referenced near line 154) so it accurately reflects that only generic
placeholders are used; ensure the "Skill Location" header and the compliance
line both use the same non-identifying placeholder format.
- Remove specific shell rc file references (~/.zshrc, ~/.bashrc)
- Replace personal skill path with generic skill name
- Addresses feedback from PR review
Actionable comments posted: 3
🧹 Nitpick comments (2)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (1)
30-31: Use tool-neutral wording in setup instructions.

Calling out "Claude Code settings.json, Cursor config" is tool-specific. Prefer neutral phrasing so the skill text remains portable.
Suggested fix

```diff
-**Option 1: In AI tool settings** (Claude Code settings.json, Cursor config):
+**Option 1: In your AI assistant environment configuration**:
```

As per coding guidelines in CONTRIBUTING-SKILLS.md, "Skills must be tool-agnostic … Avoid referencing a specific tool."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines 30-31, Replace the tool-specific phrase "Claude Code settings.json, Cursor config" under the "Option 1: In AI tool settings" section in SKILL.md with a tool-neutral instruction (e.g., "your AI/code tool's settings or configuration file") so the guidance is portable; update the surrounding text to avoid naming any specific products and ensure it follows the CONTRIBUTING-SKILLS.md rule that "Skills must be tool-agnostic."

SKILL-TEST-RESULTS.md (1)
156-156: Use "Markdown" capitalization for consistency.

Consider updating "Proper markdown formatting" → "Proper Markdown formatting".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@SKILL-TEST-RESULTS.md` at line 156, Update the table cell text that currently reads "Proper markdown formatting" to use consistent capitalization "Proper Markdown formatting"; locate the string "Proper markdown formatting" in the SKILL-TEST-RESULTS.md table row and change only the word "markdown" to "Markdown" so the header/content uses Title-case for "Markdown".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 284-290: Update the two fenced code blocks that currently lack
language tags by adding a language identifier (use "text") to the opening fence
for the snippets containing "Epic PF-3227: Ansible Nexus Migration (Closed)" and
"Epic PF-3408: Ansible Q1 Features (In Progress)"; make the same change for the
other occurrence mentioned (lines 293-301) so all three code fences start with
```text to satisfy MD040 and improve rendering.
- Around line 146-147: Update the JQL used in the curl call in SKILL.md so the
query is scoped to the target project by adding a project filter (e.g., include
project=PROJECT) alongside the existing label and status/priority filters;
locate the JQL string containing labels="LABEL" AND (status=Blocked OR
priority=Highest) in the SKILL.md example and modify it to include
project=PROJECT so the exported query only returns issues for the selected
project.
In `@SKILL-TEST-RESULTS.md`:
- Line 150: The "Tool-agnostic language" compliance claim is too strong for the
quarterly-initiative-report skill; either update the SKILL-TEST-RESULTS.md table
row for "Tool-agnostic language" to reflect the current status (e.g., change ✅
PASS to ❌ FAIL or add a note) or remove tool-specific references from the
quarterly-initiative-report skill text so it truly is tool-agnostic; locate the
skill by name "quarterly-initiative-report" and edit the SKILL.md content to
eliminate any Claude/Cursor-specific wording, or update the
SKILL-TEST-RESULTS.md row to accurately describe the existing wording.
---
Nitpick comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-31: Replace the tool-specific phrase "Claude Code
settings.json, Cursor config" under the "Option 1: In AI tool settings" section
in SKILL.md with a tool-neutral instruction (e.g., "your AI/code tool's settings
or configuration file") so the guidance is portable; update the surrounding text
to avoid naming any specific products and ensure it follows the
CONTRIBUTING-SKILLS.md rule that "Skills must be tool-agnostic."
In `@SKILL-TEST-RESULTS.md`:
- Line 156: Update the table cell text that currently reads "Proper markdown
formatting" to use consistent capitalization "Proper Markdown formatting";
locate the string "Proper markdown formatting" in the SKILL-TEST-RESULTS.md
table row and change only the word "markdown" to "Markdown" so the
header/content uses Title-case for "Markdown".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e057f4c1-f305-418a-90a6-651086a706d9
📒 Files selected for processing (2)
SKILL-TEST-RESULTS.md
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
```
Epic PF-3227: Ansible Nexus Migration (Closed)
Direct children: 0 issues
Linked via duplicates:
- AAP-58793: 16 issues (16 done, 100%)
Combined: 16 issues, 100% complete ✅
```
Add language identifiers to fenced code blocks.
Two fenced examples are missing language tags, which triggers MD040 and reduces readability in renderers.
Suggested fix

````diff
-```
+```text
 Epic PF-3227: Ansible Nexus Migration (Closed)
 Direct children: 0 issues
 Linked via duplicates:
 - AAP-58793: 16 issues (16 done, 100%)
 Combined: 16 issues, 100% complete ✅

-```
+```text
 Epic PF-3408: Ansible Q1 Features (In Progress)
 Direct children: 0 issues
 Linked via duplicates:
 - AAP-60038: 63 issues (55 done, 87%)
 - AAP-57961: 18 issues (18 done, 100%)
 - AAP-59349: 56 issues (22 done, 39%)
 Combined: 137 issues, 69% complete
````
Also applies to: 293-301
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 284-284: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
284 - 290, Update the two fenced code blocks that currently lack language tags
by adding a language identifier (use "text") to the opening fence for the
snippets containing "Epic PF-3227: Ansible Nexus Migration (Closed)" and "Epic
PF-3408: Ansible Q1 Features (In Progress)"; make the same change for the other
occurrence mentioned (lines 293-301) so all three code fences start with ```text
to satisfy MD040 and improve rendering.
Test results file is specific to local test run and should not be committed to repository root. Addresses PR review feedback.
Addresses code review feedback to scope the blocker JQL query to the specific project, preventing unrelated issues from other projects from being included in the blocker search results. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-39: The README currently names specific tools ("Claude Code
settings.json, Cursor config") in the Option 1 heading and JSON comments; make
this tool-agnostic by removing those tool names and replacing them with generic
phrasing like "AI tool settings" or "assistant configuration" in the SKILL.md
Option 1 section, and keep the JSON example but strip any tool-specific labels
so it reads as generic environment-variable configuration; update the heading
"Option 1: In AI tool settings" and any nearby explanatory text to avoid
mentioning Claude, Cursor, or other vendors.
- Around line 64-148: Replace the prescriptive curl/JQL/jq command blocks under
"Step 1: Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather
Complete Metrics" with outcome-driven descriptions: state the goal (e.g., "Fetch
all epics with the given label", "For each epic, retrieve direct child issues
and compute total, done, in_progress, todo and completion_pct", "Detect
duplicate/linked cross-project issues and include their child work in totals"),
remove the exact command examples and pipelines, and keep the "Step 3: Calculate
Aggregate Metrics" and "Step 4: Identify Blockers" sections as high-level
objectives (e.g., how to aggregate totals and what counts to report) so the
skill describes WHAT to accomplish rather than HOW; target the specific
headings/blocks named in the file to locate and replace the detailed
implementation sections.
- Around line 22-27: The env var names in SKILL.md are inconsistent with other
pf-workflow skills; replace ATLASSIAN_EMAIL and ATLASSIAN_API_TOKEN with the
established JIRA_USER_EMAIL and JIRA_API_TOKEN names and update every reference
in this file (examples, description, and any shell/templated usages like
$ATLASSIAN_EMAIL/$ATLASSIAN_API_TOKEN) to $JIRA_USER_EMAIL and $JIRA_API_TOKEN
so the naming aligns with the duplicate-epic skill and avoids duplicate env
vars.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9bfcd1e1-fe16-4c1b-9e6e-ff6d3f9772cd
📒 Files selected for processing (1)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
### Step 1: Fetch All Epics with the Label

```bash
# Search for all epics/initiatives with the label
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
  -H "Accept: application/json" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jql":"project=PROJECT AND labels=\"LABEL\" AND type IN (Epic, Initiative)","fields":["key","summary","status","assignee","duedate","issuetype","labels"],"maxResults":1000}' \
  "$JIRA_SITE_URL/rest/api/3/search/jql"
```

### Step 2: For Each Epic, Gather Complete Metrics

**Process for EVERY epic (including closed):**

1. **Fetch direct sub-issues:**

   ```bash
   curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
     -H "Accept: application/json" \
     -X POST \
     -H "Content-Type: application/json" \
     -d '{"jql":"parent=EPIC-KEY","fields":["key","summary","status","priority"],"maxResults":1000}' \
     "$JIRA_SITE_URL/rest/api/3/search/jql" | \
   jq '{
     total: (.issues | length),
     done: ([.issues[] | select(.fields.status.statusCategory.key == "done")] | length),
     in_progress: ([.issues[] | select(.fields.status.statusCategory.key == "indeterminate")] | length),
     todo: ([.issues[] | select(.fields.status.statusCategory.key == "new")] | length),
     completion_pct: (if (.issues | length) > 0 then (([.issues[] | select(.fields.status.statusCategory.key == "done")] | length) * 100 / (.issues | length) | floor) else 0 end)
   }'
   ```

2. **Check for duplicate links (CRITICAL for all epics):**

   ```bash
   # Check EVERY epic for cross-project duplicate links
   curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
     -H "Accept: application/json" \
     "$JIRA_SITE_URL/rest/api/3/issue/EPIC-KEY?fields=issuelinks" | \
   jq '{
     key: .key,
     duplicates: [.fields.issuelinks[] | select(.type.name == "Duplicate") | {
       linked_issue: (if .outwardIssue then .outwardIssue.key else .inwardIssue.key end),
       linked_type: (if .outwardIssue then .outwardIssue.fields.issuetype.name else .inwardIssue.fields.issuetype.name end)
     }]
   }'
   ```

3. **For each linked epic, fetch its child issues:**

   ```bash
   curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
     -H "Accept: application/json" \
     -X POST \
     -H "Content-Type: application/json" \
     -d '{"jql":"parent=LINKED-EPIC-KEY","fields":["key","summary","status"],"maxResults":1000}' \
     "$JIRA_SITE_URL/rest/api/3/search/jql" | \
   jq '{
     total: (.issues | length),
     done: ([.issues[] | select(.fields.status.statusCategory.key == "done")] | length),
     in_progress: ([.issues[] | select(.fields.status.statusCategory.key == "indeterminate")] | length),
     todo: ([.issues[] | select(.fields.status.statusCategory.key == "new")] | length)
   }'
   ```

**IMPORTANT:** Combine direct children + linked epic children for total metrics. Many cross-project initiatives track significant work via duplicate links (e.g., AAP, MTV, CONSOLE, SAT projects).

### Step 3: Calculate Aggregate Metrics

- **Total Issues:** Sum all direct + linked issues across all epics
- **Overall Completion:** (Total Done / Total Issues) × 100
- **Epic Counts:** Closed, In Progress, New
- **Cross-Project Work:** Issues tracked via duplicate links

### Step 4: Identify Blockers

```bash
# Find high-priority or blocked issues
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
  -H "Accept: application/json" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jql":"project=PROJECT AND labels=\"LABEL\" AND (status=Blocked OR priority=Highest)","fields":["key","summary","status","priority","assignee"],"maxResults":100}' \
  "$JIRA_SITE_URL/rest/api/3/search/jql"
```
🛠️ Refactor suggestion | 🟠 Major
Refactor to describe outcomes rather than implementation steps.
The skill provides very detailed curl commands, exact JQL syntax, and specific jq pipelines. This level of prescriptive implementation violates the "describe outcomes, not implementation" principle—the AI already knows how to make API calls and parse JSON.
Consider refactoring to state what to accomplish at each step:
- "Fetch all epics with the label from the Jira project"
- "For each epic, retrieve direct child issues and calculate completion percentage based on status categories"
- "Check each epic for duplicate links and aggregate linked work"
The current approach (320 lines of detailed commands) reduces the skill's flexibility and makes it harder to maintain. Let the AI determine the best way to interact with Jira's REST API.
As per coding guidelines, "Describe outcomes, not implementation in skills — tell the AI what to accomplish, not how to do it. The AI already knows how to use git, gh, grep, etc."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
64 - 148, Replace the prescriptive curl/JQL/jq command blocks under "Step 1:
Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather Complete
Metrics" with outcome-driven descriptions: state the goal (e.g., "Fetch all
epics with the given label", "For each epic, retrieve direct child issues and
compute total, done, in_progress, todo and completion_pct", "Detect
duplicate/linked cross-project issues and include their child work in totals"),
remove the exact command examples and pipelines, and keep the "Step 3: Calculate
Aggregate Metrics" and "Step 4: Identify Blockers" sections as high-level
objectives (e.g., how to aggregate totals and what counts to report) so the
skill describes WHAT to accomplish rather than HOW; target the specific
headings/blocks named in the file to locate and replace the detailed
implementation sections.
I disagree that this is a major issue. I want consistent results with this skill with the JQL that is provided. AI may alter what is pulled from Jira or return partial results (like only pull the first 50 items).
@janwright73, that's a fair and technically sound point. Explicit maxResults: 1000 in the API calls is genuinely important for correctness — an AI left to its own discretion could easily default to paginated results of 50 or 100 and produce incomplete metrics for large initiatives (your 35-epic, 549-issue test case illustrates exactly why this matters). The same applies to the specific JQL syntax ensuring the right issue types and status categories are targeted. I'll withdraw the major concern.
( ˙-˙ )🐇
✏️ Learnings added
Learnt from: janwright73
Repo: patternfly/ai-helpers PR: 48
File: plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md:64-148
Timestamp: 2026-04-04T14:01:05.331Z
Learning: In `plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`, explicit curl/JQL/jq command blocks are intentionally kept (rather than replaced with outcome-only descriptions) to ensure consistent, complete results — specifically to enforce `maxResults: 1000` and prevent AI tools from defaulting to partial/paginated results (e.g., 50 items). The skill author (janwright73) considers this a valid override of the "describe outcomes not implementation" guideline for accuracy-critical Jira API queries.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING-SKILLS.md:0-0
Timestamp: 2026-04-01T15:43:57.523Z
Learning: Applies to **/skills/*/SKILL.md : Describe outcomes, not implementation in skills — tell the AI what to accomplish, not how to do it. The AI already knows how to use git, gh, grep, etc.
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING-SKILLS.md:0-0
Timestamp: 2026-04-01T15:43:57.523Z
Learning: Applies to **/skills/*/SKILL.md : Skills must be under 500 lines; shorter is better. Review SKILL.md to ensure instructions are clear enough that any AI tool would produce the same quality output.
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2026-04-01T15:43:43.873Z
Learning: Applies to plugins/*/skills/*/SKILL.md : Add a `SKILL.md` file with instructions in plain markdown for each skill
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-01T15:44:03.368Z
Learning: Applies to plugins/*/skills/**/*.md plugins/*/agents/**/*.md : Skills/agents that are PatternFly-specific use the `pf-` prefix (e.g., `pf-unit-test-generator`). Generic skills do not.
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2026-04-01T15:43:43.873Z
Learning: Applies to {plugins/*/skills/*/SKILL.md,plugins/*/agents/*.md} : Include clear descriptions in all frontmatter
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING-SKILLS.md:0-0
Timestamp: 2026-04-01T15:43:57.523Z
Learning: When contributing, fork and clone the `patternfly/ai-helpers` repo, copy the SKILL.md into the appropriate plugin directory, and open a pull request.
…port

Updates environment variable names to match the duplicate-epic skill for consistency across the pf-workflow plugin:

- ATLASSIAN_EMAIL → JIRA_USER_EMAIL
- ATLASSIAN_API_TOKEN → JIRA_API_TOKEN

All curl commands and documentation updated to use the new names.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
♻️ Duplicate comments (2)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (2)
30-39: ⚠️ Potential issue | 🟡 Minor

Remove tool-specific product names from setup instructions.

This section still names specific tools ("Claude Code", "Cursor"), which breaks the skill's tool-agnostic requirement.
As per coding guidelines, “Skills must be tool-agnostic — they run in both Claude Code and Cursor. Avoid referencing a specific tool.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines 30 - 39, The section titled "Option 1: In AI tool settings" in SKILL.md currently mentions specific products ("Claude Code", "Cursor"); remove those product names and replace with tool-agnostic language (e.g., "AI tool settings" or "your AI client settings") while keeping the JSON env example intact; update the heading and inline text so it refers generically to configuring environment variables (JIRA_USER_EMAIL, JIRA_API_TOKEN, JIRA_SITE_URL) in your AI tool's settings without naming any vendors.
65-127: ⚠️ Potential issue | 🟠 Major

Refactor workflow steps to outcome-focused guidance instead of command-level implementation.

These steps are still implementation-prescriptive (curl/jq pipelines and exact API payloads). Please convert this section to describe required outcomes and acceptance criteria, not exact command mechanics.

As per coding guidelines, "Describe outcomes, not implementation in skills — tell the AI what to accomplish, not how to do it."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines 65 - 127, Replace the command-level, curl/jq implementation in "Step 1: Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather Complete Metrics" with outcome-focused requirements: state the expected outcomes (e.g., list of epics with label and their key/summary/status/assignee/duedate/issuetype/labels), the per-epic metrics to produce (total child issues, counts for done/in_progress/todo, completion_pct), a requirement to detect cross-project duplicate links and return linked issue keys/types, and for any linked epics return their child-issue aggregates; convert each of the three sub-steps (fetch direct sub-issues, check for duplicate links, for each linked epic fetch its child issues) into acceptance criteria (what fields and summary metrics must be returned) and remove any explicit curl/jq snippets or API payload examples so the skill describes outcomes not implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-39: The section titled "Option 1: In AI tool settings" in
SKILL.md currently mentions specific products ("Claude Code", "Cursor"); remove
those product names and replace with tool-agnostic language (e.g., "AI tool
settings" or "your AI client settings") while keeping the JSON env example
intact; update the heading and inline text so it refers generically to
configuring environment variables (JIRA_USER_EMAIL, JIRA_API_TOKEN,
JIRA_SITE_URL) in your AI tool's settings without naming any vendors.
- Around line 65-127: Replace the command-level, curl/jq implementation in "Step
1: Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather Complete
Metrics" with outcome-focused requirements: state the expected outcomes (e.g.,
list of epics with label and their
key/summary/status/assignee/duedate/issuetype/labels), the per-epic metrics to
produce (total child issues, counts for done/in_progress/todo, completion_pct),
a requirement to detect cross-project duplicate links and return linked issue
keys/types, and for any linked epics return their child-issue aggregates;
convert each of the three sub-steps (fetch direct sub-issues, check for
duplicate links, for each linked epic fetch its child issues) into acceptance
criteria (what fields and summary metrics must be returned) and remove any
explicit curl/jq snippets or API payload examples so the skill describes
outcomes not implementation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7365c96a-f12f-4e19-85f7-9690db816f7c
📒 Files selected for processing (1)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Reverts the unnecessary change to JIRA_USER_EMAIL/JIRA_API_TOKEN. Uses ATLASSIAN_EMAIL and ATLASSIAN_API_TOKEN which match the actual environment configuration and work correctly. Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 140-147: The JIRA search curl in SKILL.md uses maxResults=100
which can truncate blockers; update the query to paginate or remove the hard cap
by implementing JIRA pagination using startAt and maxResults in repeated
requests (or set maxResults to the API maximum) so the JQL call that posts to
"$JIRA_SITE_URL/rest/api/3/search/jql" returns all matching issues instead of
only the first 100; ensure the change updates the curl invocation that currently
includes the "-d '{...\"maxResults\":100...}'" payload to either loop with
startAt increments or use an appropriate maxResults value consistent with other
scripts.
- Around line 152-156: The RAG criteria under the "For each epic:" section are
overlapping and non-deterministic; update the wording to specify an explicit
evaluation order (e.g., check Red conditions first, then Amber, then Green) so
each epic is evaluated with precedence Red → Amber → Green, and revise the three
bullet rules to state that the first matching condition determines the status;
reference the "For each epic:" RAG rule block and ensure examples/edge cases
(e.g., ≥75% with critical blockers) follow the new precedence.
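The pagination fix suggested above (repeated requests with `startAt` increments) can be sketched as a small driver loop. The `fetch_page` callable here is a hypothetical stand-in for the real Jira search request, so the loop logic can be shown without network access:

```python
# Sketch of the startAt/maxResults pagination loop the review suggests,
# so a blocker query is never truncated at the first page.

def fetch_all(fetch_page, page_size=100):
    """Collect every issue by paging until the reported total is reached."""
    issues, start_at = [], 0
    while True:
        page = fetch_page(start_at=start_at, max_results=page_size)
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        if start_at >= page["total"] or not page["issues"]:
            break  # all pages consumed (or an empty page, as a safety stop)
    return issues

# Fake backend with 250 matching issues to exercise the loop offline.
DATA = [{"key": f"PF-{i}"} for i in range(250)]

def fake_page(start_at, max_results):
    return {"issues": DATA[start_at:start_at + max_results], "total": len(DATA)}

print(len(fetch_all(fake_page)))  # all 250 issues, not just the first 100
```

In the skill's bash context the same effect comes from looping curl calls while incrementing `startAt`; the empty-page guard protects against a backend that reports an inflated `total`.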
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 110cad19-9136-4ed5-b33a-6cecb1d81152
📒 Files selected for processing (1)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
1. Increase blocker query maxResults from 100 to 1000
   - Prevents under-reporting of risks and blockers
   - Ensures comprehensive blocker identification
2. Reorder RAG status evaluation from Red→Amber→Green
   - Previous order (Green→Amber→Red) was ambiguous
   - New order evaluates most critical conditions first
   - "First match wins" logic makes evaluation deterministic
   - Prevents misclassification of epics with overlapping criteria

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
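The Red→Amber→Green "first match wins" ordering from that commit can be sketched as a short evaluation function. The specific thresholds and condition names below are illustrative assumptions, not the skill's actual criteria:

```python
# First-match-wins RAG evaluation in Red -> Amber -> Green order, as the
# commit describes. Thresholds are illustrative; SKILL.md holds the real rules.

def rag_status(completion_pct, has_critical_blockers, past_due):
    if has_critical_blockers or past_due:  # Red conditions checked first
        return "Red"
    if completion_pct < 75:                # then Amber
        return "Amber"
    return "Green"                         # Green only if nothing above matched

# The edge case raised in review: an epic >=75% complete but critically
# blocked is Red, because Red conditions are evaluated before Green.
print(rag_status(90, True, False))   # Red
print(rag_status(90, False, False))  # Green
```

The point of the ordering is exactly that edge case: with overlapping criteria, evaluating the most severe conditions first makes the result deterministic.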
jpuzz0 left a comment:
Looks good! If someone ends up using both this and the duplicate-epic skill here, they'd need two sets of Jira credentials since the env var names are different (ATLASSIAN_EMAIL vs JIRA_USER_EMAIL). Not a blocker - just something to align down the road if it comes up.