Add Jira quarterly-initiative-report skill to pf-workflow plugin #48

Merged
jpuzz0 merged 7 commits into patternfly:main from janwright73:add-quarterly-initiative-report-skill
Apr 6, 2026

Conversation

Contributor

@janwright73 janwright73 commented Mar 28, 2026

This skill generates comprehensive quarterly Jira status reports with:

  • Progress tracking across epics with completion metrics
  • RAG (Red/Amber/Green) status assessment
  • Cross-project duplicate link analysis (critical for multi-team initiatives)
  • Blocker identification and risk assessment
  • Q+1 priority recommendations based on incomplete work
  • Complete epic reference table with clickable Jira links
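
To illustrate the RAG assessment above, a minimal sketch of mapping a completion percentage to a status; the 80/50 thresholds here are illustrative assumptions, not the rules the skill itself uses:

```shell
# Map a completion percentage to a RAG status.
# NOTE: the 80/50 thresholds are hypothetical, chosen only for illustration.
rag_status() {
  pct=$1
  if [ "$pct" -ge 80 ]; then
    echo "Green"
  elif [ "$pct" -ge 50 ]; then
    echo "Amber"
  else
    echo "Red"
  fi
}

rag_status 87   # prints "Green"
```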

Key Features:

  • Hybrid MCP/REST API support for maximum compatibility
  • Handles cross-project work via duplicate links (AAP, MTV, CONSOLE, SAT, etc.)
  • Prevents "invisible work" problem by checking ALL epics for linked work
  • Tool-agnostic: works in Claude Code, Cursor, and future AI tools
  • Uses standard tools: curl and jq (no special dependencies)

Tested with:

  • PatternFly Q1 2026 initiative (35 epics, 549 issues)
  • Cross-project work spanning 6 different Jira projects
  • Both direct children and linked epic scenarios

Benefits:

  • Automates tedious manual report generation
  • Ensures complete visibility of cross-project work
  • Provides data-driven status assessments
  • Saves hours per quarterly report

File: plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Lines: 320 (concise, under the 500-line guideline)
Standards: Meets all ai-helpers repository requirements

Summary by CodeRabbit

  • New Features

    • Added a quarterly initiative report workflow that generates standardized Jira initiative status reports with per-epic completion metrics, linked-work aggregation, blocker detection, RAG status, and actionable recommendations.
  • Documentation

    • Added user-facing documentation covering invocation examples, report layout and filename conventions, and the methodology behind metrics and dashboards.

Collaborator

jpuzz0 commented Mar 31, 2026

@coderabbitai full review

Contributor

coderabbitai bot commented Mar 31, 2026

✅ Actions performed

Full review triggered.

Contributor

coderabbitai bot commented Mar 31, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.
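
For reference, a hedged sketch of what that setting might look like in `.coderabbit.yaml`; the nesting is inferred from the dotted setting name above and the value is an assumption, so consult CodeRabbit's configuration reference for the exact schema:

```yaml
# Hypothetical snippet; key nesting inferred from the dotted setting name,
# and the value 10 is only an example
reviews:
  auto_review:
    auto_pause_after_reviewed_commits: 10
```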

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a new documentation-only skill quarterly-initiative-report that describes a Jira-driven workflow to collect epics by project/label, compute direct and linked completion metrics, detect blockers and duplicates, assign RAG statuses, and generate a structured Markdown quarterly initiative report.

Changes

Cohort / File(s) Summary
New Skill Definition
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
New documentation file introducing the quarterly-initiative-report skill: lists required tools (curl, jq) and Jira env vars, credential verification, interactive inputs (project key, label), JQL queries for epics and blockers, per-epic direct and linked-epic metrics by Jira status category, duplicate-linked-epic handling, RAG assignment rules, report structure (executive summary, dashboard, detailed metrics, blockers/risks, recommendations, appendix), invocation examples, and output filename convention.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Suggested reviewers

  • jpuzz0
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly and specifically describes the main change: adding a new Jira quarterly-initiative-report skill to the pf-workflow plugin, which matches the documentation file addition in the changeset.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (1)

30-30: Replace tool-specific setup wording with tool-agnostic phrasing.

Line 30 calls out specific products (“Claude Code”, “Cursor”). Reword this to generic assistant/tool settings language to stay fully tool-agnostic.

As per coding guidelines in CONTRIBUTING-SKILLS.md:116-129, skills must be tool-agnostic and avoid referencing a specific tool.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` at line 30,
Update the wording in SKILL.md under the "Option 1: In AI tool settings" section
to remove product names ("Claude Code", "Cursor") and replace them with
tool-agnostic phrasing such as "your assistant or tool settings (e.g.,
settings.json or config file)"; locate the header "Option 1: In AI tool
settings" and the sentence that currently lists “Claude Code settings.json,
Cursor config” and reword it to a generic instruction about adjusting
assistant/tool settings per CONTRIBUTING-SKILLS.md guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@SKILL-TEST-RESULTS.md`:
- Line 4: Replace the personal local path shown in the "Skill Location:" entry
(the string starting with "/Users/jawright/...") with a generic placeholder
(e.g., "/path/to/skill/skill.md" or "{SKILL_PATH}") and update the related
compliance assertion around the compliance claim (the statement referenced near
line 154) so it accurately reflects that only generic placeholders are used;
ensure the "Skill Location" header and the compliance line both use the same
non-identifying placeholder format.

---

Nitpick comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Line 30: Update the wording in SKILL.md under the "Option 1: In AI tool
settings" section to remove product names ("Claude Code", "Cursor") and replace
them with tool-agnostic phrasing such as "your assistant or tool settings (e.g.,
settings.json or config file)"; locate the header "Option 1: In AI tool
settings" and the sentence that currently lists “Claude Code settings.json,
Cursor config” and reword it to a generic instruction about adjusting
assistant/tool settings per CONTRIBUTING-SKILLS.md guidelines.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c905ec7b-302e-4dcd-a76e-021a8c109e16

📥 Commits

Reviewing files that changed from the base of the PR and between 37aa4b4 and d1c76a1.

📒 Files selected for processing (2)
  • SKILL-TEST-RESULTS.md
  • plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md

Comment thread SKILL-TEST-RESULTS.md Outdated
# Quarterly Initiative Report Skill - Test Results

**Test Date:** March 28, 2026
**Skill Location:** `/Users/jawright/.claude/skills/quarterly-initiative-report-test/skill.md`
Contributor


⚠️ Potential issue | 🟠 Major

Remove personal path details or update the compliance claim.

Line 4 includes a user-identifying local path (/Users/jawright/...), which conflicts with the Line 154 assertion that only generic placeholders are used and no personal information is present.

As per coding guidelines in CONTRIBUTING-SKILLS.md:116-129, skills/docs should meet repository standards, including safe and generic documentation patterns.

Also applies to: 154-154

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@SKILL-TEST-RESULTS.md` at line 4, Replace the personal local path shown in
the "Skill Location:" entry (the string starting with "/Users/jawright/...")
with a generic placeholder (e.g., "/path/to/skill/skill.md" or "{SKILL_PATH}")
and update the related compliance assertion around the compliance claim (the
statement referenced near line 154) so it accurately reflects that only generic
placeholders are used; ensure the "Skill Location" header and the compliance
line both use the same non-identifying placeholder format.

- Remove specific shell rc file references (~/.zshrc, ~/.bashrc)
- Replace personal skill path with generic skill name
- Addresses feedback from PR review
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (1)

30-31: Use tool-neutral wording in setup instructions.

Calling out “Claude Code settings.json, Cursor config” is tool-specific. Prefer neutral phrasing so the skill text remains portable.

Suggested fix
-**Option 1: In AI tool settings** (Claude Code settings.json, Cursor config):
+**Option 1: In your AI assistant environment configuration**:

As per coding guidelines in CONTRIBUTING-SKILLS.md, “Skills must be tool-agnostic … Avoid referencing a specific tool.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
30 - 31, Replace the tool-specific phrase "Claude Code settings.json, Cursor
config" under the "Option 1: In AI tool settings" section in SKILL.md with a
tool-neutral instruction (e.g., "your AI/code tool's settings or configuration
file") so the guidance is portable; update the surrounding text to avoid naming
any specific products and ensure it follows the CONTRIBUTING-SKILLS.md rule that
"Skills must be tool-agnostic."
SKILL-TEST-RESULTS.md (1)

156-156: Use “Markdown” capitalization for consistency.

Consider updating “Proper markdown formatting” → “Proper Markdown formatting”.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@SKILL-TEST-RESULTS.md` at line 156, Update the table cell text that currently
reads "Proper markdown formatting" to use consistent capitalization "Proper
Markdown formatting"; locate the string "Proper markdown formatting" in the
SKILL-TEST-RESULTS.md table row and change only the word "markdown" to
"Markdown" so the header/content uses Title-case for "Markdown".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 284-290: Update the two fenced code blocks that currently lack
language tags by adding a language identifier (use "text") to the opening fence
for the snippets containing "Epic PF-3227: Ansible Nexus Migration (Closed)" and
"Epic PF-3408: Ansible Q1 Features (In Progress)"; make the same change for the
other occurrence mentioned (lines 293-301) so all three code fences start with
```text to satisfy MD040 and improve rendering.
- Around line 146-147: Update the JQL used in the curl call in SKILL.md so the
query is scoped to the target project by adding a project filter (e.g., include
project=PROJECT) alongside the existing label and status/priority filters;
locate the JQL string containing labels="LABEL" AND (status=Blocked OR
priority=Highest) in the SKILL.md example and modify it to include
project=PROJECT so the exported query only returns issues for the selected
project.

In `@SKILL-TEST-RESULTS.md`:
- Line 150: The "Tool-agnostic language" compliance claim is too strong for the
quarterly-initiative-report skill; either update the SKILL-TEST-RESULTS.md table
row for "Tool-agnostic language" to reflect the current status (e.g., change ✅
PASS to ❌ FAIL or add a note) or remove tool-specific references from the
quarterly-initiative-report skill text so it truly is tool-agnostic; locate the
skill by name "quarterly-initiative-report" and edit the SKILL.md content to
eliminate any Claude/Cursor-specific wording, or update the
SKILL-TEST-RESULTS.md row to accurately describe the existing wording.

---

Nitpick comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-31: Replace the tool-specific phrase "Claude Code
settings.json, Cursor config" under the "Option 1: In AI tool settings" section
in SKILL.md with a tool-neutral instruction (e.g., "your AI/code tool's settings
or configuration file") so the guidance is portable; update the surrounding text
to avoid naming any specific products and ensure it follows the
CONTRIBUTING-SKILLS.md rule that "Skills must be tool-agnostic."

In `@SKILL-TEST-RESULTS.md`:
- Line 156: Update the table cell text that currently reads "Proper markdown
formatting" to use consistent capitalization "Proper Markdown formatting";
locate the string "Proper markdown formatting" in the SKILL-TEST-RESULTS.md
table row and change only the word "markdown" to "Markdown" so the
header/content uses Title-case for "Markdown".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e057f4c1-f305-418a-90a6-651086a706d9

📥 Commits

Reviewing files that changed from the base of the PR and between d1c76a1 and 98efebc.

📒 Files selected for processing (2)
  • SKILL-TEST-RESULTS.md
  • plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md

Comment thread plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md Outdated
Comment on lines +284 to +290
```
Epic PF-3227: Ansible Nexus Migration (Closed)
Direct children: 0 issues
Linked via duplicates:
- AAP-58793: 16 issues (16 done, 100%)
Combined: 16 issues, 100% complete ✅
```
Contributor


⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks.

Two fenced examples are missing language tags, which triggers MD040 and reduces readability in renderers.

Suggested fix
-```
+```text
 Epic PF-3227: Ansible Nexus Migration (Closed)
   Direct children: 0 issues
   Linked via duplicates:
     - AAP-58793: 16 issues (16 done, 100%)
   Combined: 16 issues, 100% complete ✅

-```
+```text
 Epic PF-3408: Ansible Q1 Features (In Progress)
   Direct children: 0 issues
   Linked via duplicates:
     - AAP-60038: 63 issues (55 done, 87%)
     - AAP-57961: 18 issues (18 done, 100%)
     - AAP-59349: 56 issues (22 done, 39%)
   Combined: 137 issues, 69% complete

Also applies to: 293-301

🧰 Tools
🪛 markdownlint-cli2 (0.22.0)

[warning] 284-284: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
284 - 290, Update the two fenced code blocks that currently lack language tags
by adding a language identifier (use "text") to the opening fence for the
snippets containing "Epic PF-3227: Ansible Nexus Migration (Closed)" and "Epic
PF-3408: Ansible Q1 Features (In Progress)"; make the same change for the other
occurrence mentioned (lines 293-301) so all three code fences start with ```text
to satisfy MD040 and improve rendering.

Comment thread SKILL-TEST-RESULTS.md Outdated
Comment thread SKILL-TEST-RESULTS.md Outdated
Jan Wright and others added 2 commits April 1, 2026 15:14
Test results file is specific to local test run and should not
be committed to repository root.

Addresses PR review feedback.
Addresses code review feedback to scope the blocker JQL query to the
specific project, preventing unrelated issues from other projects from
being included in the blocker search results.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-39: The README currently names specific tools ("Claude Code
settings.json, Cursor config") in the Option 1 heading and JSON comments; make
this tool-agnostic by removing those tool names and replacing them with generic
phrasing like "AI tool settings" or "assistant configuration" in the SKILL.md
Option 1 section, and keep the JSON example but strip any tool-specific labels
so it reads as generic environment-variable configuration; update the heading
"Option 1: In AI tool settings" and any nearby explanatory text to avoid
mentioning Claude, Cursor, or other vendors.
- Around line 64-148: Replace the prescriptive curl/JQL/jq command blocks under
"Step 1: Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather
Complete Metrics" with outcome-driven descriptions: state the goal (e.g., "Fetch
all epics with the given label", "For each epic, retrieve direct child issues
and compute total, done, in_progress, todo and completion_pct", "Detect
duplicate/linked cross-project issues and include their child work in totals"),
remove the exact command examples and pipelines, and keep the "Step 3: Calculate
Aggregate Metrics" and "Step 4: Identify Blockers" sections as high-level
objectives (e.g., how to aggregate totals and what counts to report) so the
skill describes WHAT to accomplish rather than HOW; target the specific
headings/blocks named in the file to locate and replace the detailed
implementation sections.
- Around line 22-27: The env var names in SKILL.md are inconsistent with other
pf-workflow skills; replace ATLASSIAN_EMAIL and ATLASSIAN_API_TOKEN with the
established JIRA_USER_EMAIL and JIRA_API_TOKEN names and update every reference
in this file (examples, description, and any shell/templated usages like
$ATLASSIAN_EMAIL/$ATLASSIAN_API_TOKEN) to $JIRA_USER_EMAIL and $JIRA_API_TOKEN
so the naming aligns with the duplicate-epic skill and avoids duplicate env
vars.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9bfcd1e1-fe16-4c1b-9e6e-ff6d3f9772cd

📥 Commits

Reviewing files that changed from the base of the PR and between 98efebc and dca2b51.

📒 Files selected for processing (1)
  • plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md

Comment thread plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Comment thread plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Comment on lines +64 to +148

### Step 1: Fetch All Epics with the Label

```bash
# Search for all epics/initiatives with the label
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
-H "Accept: application/json" \
-X POST \
-H "Content-Type: application/json" \
-d '{"jql":"project=PROJECT AND labels=\"LABEL\" AND type IN (Epic, Initiative)","fields":["key","summary","status","assignee","duedate","issuetype","labels"],"maxResults":1000}' \
"$JIRA_SITE_URL/rest/api/3/search/jql"
```

### Step 2: For Each Epic, Gather Complete Metrics

**Process for EVERY epic (including closed):**

1. **Fetch direct sub-issues:**
```bash
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
-H "Accept: application/json" \
-X POST \
-H "Content-Type: application/json" \
-d '{"jql":"parent=EPIC-KEY","fields":["key","summary","status","priority"],"maxResults":1000}' \
"$JIRA_SITE_URL/rest/api/3/search/jql" | \
jq '{
total: (.issues | length),
done: ([.issues[] | select(.fields.status.statusCategory.key == "done")] | length),
in_progress: ([.issues[] | select(.fields.status.statusCategory.key == "indeterminate")] | length),
todo: ([.issues[] | select(.fields.status.statusCategory.key == "new")] | length),
completion_pct: (if (.issues | length) > 0 then (([.issues[] | select(.fields.status.statusCategory.key == "done")] | length) * 100 / (.issues | length) | floor) else 0 end)
}'
```

2. **Check for duplicate links (CRITICAL for all epics):**
```bash
# Check EVERY epic for cross-project duplicate links
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
-H "Accept: application/json" \
"$JIRA_SITE_URL/rest/api/3/issue/EPIC-KEY?fields=issuelinks" | \
jq '{
key: .key,
duplicates: [.fields.issuelinks[] | select(.type.name == "Duplicate") | {
linked_issue: (if .outwardIssue then .outwardIssue.key else .inwardIssue.key end),
linked_type: (if .outwardIssue then .outwardIssue.fields.issuetype.name else .inwardIssue.fields.issuetype.name end)
}]
}'
```

3. **For each linked epic, fetch its child issues:**
```bash
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
-H "Accept: application/json" \
-X POST \
-H "Content-Type: application/json" \
-d '{"jql":"parent=LINKED-EPIC-KEY","fields":["key","summary","status"],"maxResults":1000}' \
"$JIRA_SITE_URL/rest/api/3/search/jql" | \
jq '{
total: (.issues | length),
done: ([.issues[] | select(.fields.status.statusCategory.key == "done")] | length),
in_progress: ([.issues[] | select(.fields.status.statusCategory.key == "indeterminate")] | length),
todo: ([.issues[] | select(.fields.status.statusCategory.key == "new")] | length)
}'
```

**IMPORTANT:** Combine direct children + linked epic children for total metrics. Many cross-project initiatives track significant work via duplicate links (e.g., AAP, MTV, CONSOLE, SAT projects).

### Step 3: Calculate Aggregate Metrics

- **Total Issues:** Sum all direct + linked issues across all epics
- **Overall Completion:** (Total Done / Total Issues) × 100
- **Epic Counts:** Closed, In Progress, New
- **Cross-Project Work:** Issues tracked via duplicate links
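
The aggregation above can be sketched with jq over per-epic metrics gathered in Step 2; the epic keys and counts below are sample values, not real data:

```shell
# Sum per-epic totals and compute the overall completion percentage
# (sample metrics, not real Jira data)
epics='[{"key":"PF-3227","total":16,"done":16},{"key":"PF-3408","total":137,"done":95}]'

echo "$epics" | jq '
  (map(.total) | add) as $total
  | (map(.done) | add) as $done
  | {total_issues: $total,
     total_done: $done,
     overall_completion_pct: (($done * 100 / $total) | floor)}'
# overall_completion_pct evaluates to 72 for this sample (111 done of 153)
```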

### Step 4: Identify Blockers

```bash
# Find high-priority or blocked issues
curl -s -u "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
-H "Accept: application/json" \
-X POST \
-H "Content-Type: application/json" \
-d '{"jql":"project=PROJECT AND labels=\"LABEL\" AND (status=Blocked OR priority=Highest)","fields":["key","summary","status","priority","assignee"],"maxResults":100}' \
"$JIRA_SITE_URL/rest/api/3/search/jql"
```
Contributor

@coderabbitai coderabbitai bot Apr 2, 2026


🛠️ Refactor suggestion | 🟠 Major

Refactor to describe outcomes rather than implementation steps.

The skill provides very detailed curl commands, exact JQL syntax, and specific jq pipelines. This level of prescriptive implementation violates the "describe outcomes, not implementation" principle—the AI already knows how to make API calls and parse JSON.

Consider refactoring to state what to accomplish at each step:

  • "Fetch all epics with the label from the Jira project"
  • "For each epic, retrieve direct child issues and calculate completion percentage based on status categories"
  • "Check each epic for duplicate links and aggregate linked work"

The current approach (320 lines of detailed commands) reduces the skill's flexibility and makes it harder to maintain. Let the AI determine the best way to interact with Jira's REST API.

As per coding guidelines, "Describe outcomes, not implementation in skills — tell the AI what to accomplish, not how to do it. The AI already knows how to use git, gh, grep, etc."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
64 - 148, Replace the prescriptive curl/JQL/jq command blocks under "Step 1:
Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather Complete
Metrics" with outcome-driven descriptions: state the goal (e.g., "Fetch all
epics with the given label", "For each epic, retrieve direct child issues and
compute total, done, in_progress, todo and completion_pct", "Detect
duplicate/linked cross-project issues and include their child work in totals"),
remove the exact command examples and pipelines, and keep the "Step 3: Calculate
Aggregate Metrics" and "Step 4: Identify Blockers" sections as high-level
objectives (e.g., how to aggregate totals and what counts to report) so the
skill describes WHAT to accomplish rather than HOW; target the specific
headings/blocks named in the file to locate and replace the detailed
implementation sections.

Contributor Author


I disagree that this is a major issue. I want this skill to produce consistent results using the JQL that is provided; left to its own devices, an AI may alter what is pulled from Jira or return partial results (e.g., only the first 50 items).

Contributor


@janwright73, that's a fair and technically sound point. Explicit maxResults: 1000 in the API calls is genuinely important for correctness — an AI left to its own discretion could easily default to paginated results of 50 or 100 and produce incomplete metrics for large initiatives (your 35-epic, 549-issue test case illustrates exactly why this matters). The same applies to the specific JQL syntax ensuring the right issue types and status categories are targeted. I'll withdraw the major concern.
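
As a quick illustration of that truncation risk, a search response can be checked against the `total` count Jira reports; the response shape below is a simplified sample, not a real API payload:

```shell
# Simplified sample of a truncated search response: Jira matched 549 issues,
# but this page carries only two of them
response='{"total":549,"issues":[{"key":"PF-1"},{"key":"PF-2"}]}'

echo "$response" | jq -r '
  if (.issues | length) < .total
  then "WARNING: partial results (\(.issues | length)/\(.total)) - raise maxResults or paginate"
  else "complete results" end'
# prints: WARNING: partial results (2/549) - raise maxResults or paginate
```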

( ˙-˙ )🐇


✏️ Learnings added
Learnt from: janwright73
Repo: patternfly/ai-helpers PR: 48
File: plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md:64-148
Timestamp: 2026-04-04T14:01:05.331Z
Learning: In `plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`, explicit curl/JQL/jq command blocks are intentionally kept (rather than replaced with outcome-only descriptions) to ensure consistent, complete results — specifically to enforce `maxResults: 1000` and prevent AI tools from defaulting to partial/paginated results (e.g., 50 items). The skill author (janwright73) considers this a valid override of the "describe outcomes not implementation" guideline for accuracy-critical Jira API queries.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING-SKILLS.md:0-0
Timestamp: 2026-04-01T15:43:57.523Z
Learning: Applies to **/skills/*/SKILL.md : Describe outcomes, not implementation in skills — tell the AI what to accomplish, not how to do it. The AI already knows how to use git, gh, grep, etc.

Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING-SKILLS.md:0-0
Timestamp: 2026-04-01T15:43:57.523Z
Learning: Applies to **/skills/*/SKILL.md : Skills must be under 500 lines; shorter is better. Review SKILL.md to ensure instructions are clear enough that any AI tool would produce the same quality output.

Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2026-04-01T15:43:43.873Z
Learning: Applies to plugins/*/skills/*/SKILL.md : Add a `SKILL.md` file with instructions in plain markdown for each skill

Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-04-01T15:44:03.368Z
Learning: Applies to plugins/*/skills/**/*.md plugins/*/agents/**/*.md : Skills/agents that are PatternFly-specific use the `pf-` prefix (e.g., `pf-unit-test-generator`). Generic skills do not.

Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING.md:0-0
Timestamp: 2026-04-01T15:43:43.873Z
Learning: Applies to {plugins/*/skills/*/SKILL.md,plugins/*/agents/*.md} : Include clear descriptions in all frontmatter

Learnt from: CR
Repo: patternfly/ai-helpers PR: 0
File: CONTRIBUTING-SKILLS.md:0-0
Timestamp: 2026-04-01T15:43:57.523Z
Learning: When contributing, fork and clone the `patternfly/ai-helpers` repo, copy the SKILL.md into the appropriate plugin directory, and open a pull request.

…port

Updates environment variable names to match the duplicate-epic skill
for consistency across the pf-workflow plugin:
- ATLASSIAN_EMAIL → JIRA_USER_EMAIL
- ATLASSIAN_API_TOKEN → JIRA_API_TOKEN

All curl commands and documentation updated to use the new names.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md (2)

30-39: ⚠️ Potential issue | 🟡 Minor

Remove tool-specific product names from setup instructions.

This section still names specific tools (“Claude Code”, “Cursor”), which breaks the skill’s tool-agnostic requirement.

As per coding guidelines, “Skills must be tool-agnostic — they run in both Claude Code and Cursor. Avoid referencing a specific tool.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
30 - 39, The section titled "Option 1: In AI tool settings" in SKILL.md
currently mentions specific products ("Claude Code", "Cursor"); remove those
product names and replace with tool-agnostic language (e.g., "AI tool settings"
or "your AI client settings") while keeping the JSON env example intact; update
the heading and inline text so it refers generically to configuring environment
variables (JIRA_USER_EMAIL, JIRA_API_TOKEN, JIRA_SITE_URL) in your AI tool's
settings without naming any vendors.

65-127: ⚠️ Potential issue | 🟠 Major

Refactor workflow steps to outcome-focused guidance instead of command-level implementation.

These steps are still implementation-prescriptive (curl/jq pipelines and exact API payloads). Please convert this section to describe required outcomes and acceptance criteria, not exact command mechanics.

As per coding guidelines, “Describe outcomes, not implementation in skills — tell the AI what to accomplish, not how to do it.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md` around lines
65 - 127, Replace the command-level, curl/jq implementation in "Step 1: Fetch
All Epics with the Label" and "Step 2: For Each Epic, Gather Complete Metrics"
with outcome-focused requirements: state the expected outcomes (e.g., list of
epics with label and their
key/summary/status/assignee/duedate/issuetype/labels), the per-epic metrics to
produce (total child issues, counts for done/in_progress/todo, completion_pct),
a requirement to detect cross-project duplicate links and return linked issue
keys/types, and for any linked epics return their child-issue aggregates;
convert each of the three sub-steps (fetch direct sub-issues, check for
duplicate links, for each linked epic fetch its child issues) into acceptance
criteria (what fields and summary metrics must be returned) and remove any
explicit curl/jq snippets or API payload examples so the skill describes
outcomes not implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 30-39: The section titled "Option 1: In AI tool settings" in
SKILL.md currently mentions specific products ("Claude Code", "Cursor"); remove
those product names and replace with tool-agnostic language (e.g., "AI tool
settings" or "your AI client settings") while keeping the JSON env example
intact; update the heading and inline text so it refers generically to
configuring environment variables (JIRA_USER_EMAIL, JIRA_API_TOKEN,
JIRA_SITE_URL) in your AI tool's settings without naming any vendors.
- Around line 65-127: Replace the command-level, curl/jq implementation in "Step
1: Fetch All Epics with the Label" and "Step 2: For Each Epic, Gather Complete
Metrics" with outcome-focused requirements: state the expected outcomes (e.g.,
list of epics with label and their
key/summary/status/assignee/duedate/issuetype/labels), the per-epic metrics to
produce (total child issues, counts for done/in_progress/todo, completion_pct),
a requirement to detect cross-project duplicate links and return linked issue
keys/types, and for any linked epics return their child-issue aggregates;
convert each of the three sub-steps (fetch direct sub-issues, check for
duplicate links, for each linked epic fetch its child issues) into acceptance
criteria (what fields and summary metrics must be returned) and remove any
explicit curl/jq snippets or API payload examples so the skill describes
outcomes not implementation.
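The cross-project duplicate-link detection the prompt above asks for can be sketched with the same `jq` tooling the skill already relies on. The `issuelinks` shape below mirrors a typical Jira REST issue payload, but treat the field names as assumptions to verify against your instance; the issue keys are hypothetical sample data:

```shell
#!/bin/sh
# Canned Jira issue payload: one Duplicate link (cross-project) and one
# Blocks link that should be ignored by the filter.
issue='{
  "key": "PF-100",
  "fields": {
    "issuelinks": [
      {"type": {"name": "Duplicate"},
       "outwardIssue": {"key": "AAP-42",
                        "fields": {"issuetype": {"name": "Epic"}}}},
      {"type": {"name": "Blocks"},
       "outwardIssue": {"key": "PF-7",
                        "fields": {"issuetype": {"name": "Task"}}}}
    ]
  }
}'

# Keep only Duplicate links; a link stores its peer in either outwardIssue
# or inwardIssue depending on direction, so fall back between the two.
printf '%s' "$issue" | jq -r '
  .fields.issuelinks[]
  | select(.type.name == "Duplicate")
  | (.outwardIssue // .inwardIssue)
  | "\(.key) (\(.fields.issuetype.name))"'
```

Running this prints `AAP-42 (Epic)` and skips the Blocks link, which is the "linked epic keys/types" outcome the acceptance criteria describe.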

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7365c96a-f12f-4e19-85f7-9690db816f7c

📥 Commits

Reviewing files that changed from the base of the PR and between dca2b51 and c50e28a.

📒 Files selected for processing (1)
  • plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md

Reverts the unnecessary change to JIRA_USER_EMAIL/JIRA_API_TOKEN.
Uses ATLASSIAN_EMAIL and ATLASSIAN_API_TOKEN which match the actual
environment configuration and work correctly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md`:
- Around line 140-147: The JIRA search curl in SKILL.md uses maxResults=100
which can truncate blockers; update the query to paginate or remove the hard cap
by implementing JIRA pagination using startAt and maxResults in repeated
requests (or set maxResults to the API maximum) so the JQL call that posts to
"$JIRA_SITE_URL/rest/api/3/search/jql" returns all matching issues instead of
only the first 100; ensure the change updates the curl invocation that currently
includes the "-d '{...\"maxResults\":100...}'" payload to either loop with
startAt increments or use an appropriate maxResults value consistent with other
scripts.
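The `startAt` loop the comment describes can be sketched in shell. `fetch_page` below is a stub standing in for the real `curl` POST to the search endpoint (it serves three canned pages of a 249-issue result), so only the pagination and aggregation logic is real; swap the stub for the actual request in the skill:

```shell
#!/bin/sh
# Stub for the curl call to "$JIRA_SITE_URL/rest/api/3/search/jql".
# Emits a JSON page shaped like a Jira search response: 249 matching
# issues served in pages of at most 100.
fetch_page() {
  start_at=$1
  total=249
  page_size=100
  remaining=$(( total - start_at ))
  if [ "$remaining" -gt "$page_size" ]; then remaining=$page_size; fi
  jq -n --argjson n "$remaining" --argjson t "$total" \
    '{total: $t, issues: [range($n) | {key: ("BLOCK-" + tostring)}]}'
}

# Pagination loop: accumulate issues until startAt reaches the reported
# total (or a page comes back empty, as a safety stop).
start_at=0
all_issues='[]'
while :; do
  page=$(fetch_page "$start_at")
  count=$(printf '%s' "$page" | jq '.issues | length')
  all_issues=$(jq -n --argjson a "$all_issues" --argjson p "$page" \
    '$a + $p.issues')
  start_at=$(( start_at + count ))
  total=$(printf '%s' "$page" | jq '.total')
  if [ "$start_at" -ge "$total" ] || [ "$count" -eq 0 ]; then break; fi
done
echo "fetched $(printf '%s' "$all_issues" | jq 'length') of $total issues"
```

With the stub's 249 issues this prints `fetched 249 of 249 issues`, where the capped single request would have returned only the first 100.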
- Around line 152-156: The RAG criteria under the "For each epic:" section are
overlapping and non-deterministic; update the wording to specify an explicit
evaluation order (e.g., check Red conditions first, then Amber, then Green) so
each epic is evaluated with precedence Red → Amber → Green, and revise the three
bullet rules to state that the first matching condition determines the status;
reference the "For each epic:" RAG rule block and ensure examples/edge cases
(e.g., ≥75% with critical blockers) follow the new precedence.
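The Red → Amber → Green precedence with first-match-wins semantics can be sketched as a small shell function. The numeric thresholds here (<40% Red, <75% Amber) are illustrative assumptions, not the skill's actual criteria; the point is that Red conditions are checked first, so a blocked epic can never land in Green:

```shell
#!/bin/sh
# Deterministic RAG classifier: evaluate Red, then Amber, then Green.
# The first matching condition wins.
rag_status() {
  completion_pct=$1   # integer 0-100
  has_blocker=$2      # "yes" or "no"
  # Red first: a critical blocker overrides any completion percentage.
  if [ "$has_blocker" = "yes" ] || [ "$completion_pct" -lt 40 ]; then
    echo "Red"
  elif [ "$completion_pct" -lt 75 ]; then
    echo "Amber"
  else
    echo "Green"
  fi
}

rag_status 80 yes   # the edge case above: >=75% but blocked -> Red
rag_status 80 no    # -> Green
rag_status 50 no    # -> Amber
```

Under the old Green-first ordering, the first call would have matched Green's ≥75% rule before the blocker was ever considered.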

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 110cad19-9136-4ed5-b33a-6cecb1d81152

📥 Commits

Reviewing files that changed from the base of the PR and between c50e28a and 49e0e75.

📒 Files selected for processing (1)
  • plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md

Comment thread plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md
Comment thread plugins/pf-workflow/skills/quarterly-initiative-report/SKILL.md Outdated
1. Increase blocker query maxResults from 100 to 1000
   - Prevents under-reporting of risks and blockers
   - Ensures comprehensive blocker identification

2. Reorder RAG status evaluation from Red→Amber→Green
   - Previous order (Green→Amber→Red) was ambiguous
   - New order evaluates most critical conditions first
   - "First match wins" logic makes evaluation deterministic
   - Prevents misclassification of epics with overlapping criteria

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Collaborator

@jpuzz0 jpuzz0 left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Looks good! If someone ends up using both this and the duplicate-epic skill here, they'd need two sets of Jira credentials since the env var names are different (ATLASSIAN_EMAIL vs JIRA_USER_EMAIL). Not a blocker - just something to align down the road if it comes up.

@jpuzz0 jpuzz0 merged commit 4cccec8 into patternfly:main Apr 6, 2026
1 check passed
@janwright73 janwright73 deleted the add-quarterly-initiative-report-skill branch April 6, 2026 13:55