Merged
15 commits
- 998e11f — Upgrade ai-driven-analysis-guide.md to v4.0 with 2026-04-03 quality a… (Copilot, Apr 3, 2026)
- 5243dc1 — Upgrade ai-driven-analysis-guide.md to v4.0 with comprehensive qualit… (Copilot, Apr 3, 2026)
- 25b634a — Upgrade SHARED_PROMPT_PATTERNS.md with v4.0 AI content generation pro… (Copilot, Apr 3, 2026)
- 0ce998d — Add Step 3c AI Content Quality Enforcement to all 7 news workflow files (Copilot, Apr 3, 2026)
- 7bc284e — Fix table alignment consistency in policy domain mapping table (Copilot, Apr 3, 2026)
- 91f0719 — Address PR review: fix header date scope, clarify interpellations ste… (Copilot, Apr 3, 2026)
- f26cb91 — Address review: fix analysis path variable, standardize chart-data cl… (Copilot, Apr 3, 2026)
- 810f9b2 — Merge branch 'main' into copilot/improve-ai-driven-analysis-guides (pethers, Apr 3, 2026)
- 3727117 — Address review round 3: add ANALYSIS_SUBFOLDER mapping, fix reference… (Copilot, Apr 3, 2026)
- 71187dd — Address review round 4: align visualization with data-chart-config co… (Copilot, Apr 3, 2026)
- ca90ccd — Address review round 5: align visualization examples with Chart.js co… (Copilot, Apr 3, 2026)
- 72daf75 — Address review round 6: fix visualization types table (SWOT/Risk→Char… (Copilot, Apr 3, 2026)
- 9d06eaa — Address review round 7: add government-propositions→propositions mapp… (Copilot, Apr 3, 2026)
- c6398e5 — Address review round 8: clarify data-chart-config requires explicit c… (Copilot, Apr 3, 2026)
- 5b83912 — fix: replace hard-coded date/event in week-ahead Step 3c with dynamic… (Copilot, Apr 3, 2026)
327 changes: 283 additions & 44 deletions .github/workflows/SHARED_PROMPT_PATTERNS.md

Large diffs are not rendered by default.

23 changes: 23 additions & 0 deletions .github/workflows/news-article-generator.md
@@ -631,6 +631,29 @@ For **non-deep-inspection** article types only, if the script fails, generate ar

**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description.

## Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: This is the multi-type article generator. Apply content quality enforcement for ALL article types. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0.

**1. Read pre-computed analysis** — For the current `${REQUESTED_TYPE}`, read ALL analysis files from `analysis/daily/${ARTICLE_DATE}/${ANALYSIS_SUBFOLDER}/`. If synthesis reports "0 documents analyzed", use MCP tools to fetch data directly (see ai-driven-analysis-guide.md §Empty Analysis Fallback).

**2. Scan for BANNED content patterns** — Search each generated article for these exact strings or equivalent boilerplate patterns and REPLACE them:
- Exact string: `"The political landscape remains fluid"` → Replace with specific winners/losers
- Exact string: `"No chamber debate data is available"` → Replace with analysis from document text or MCP debate data
- Pattern/prefix match: any `"Touches on ... policy."` boilerplate followed by generic domain text → Replace with unique per-document analysis
- Pattern/prefix match: any boilerplate starting with `"Analysis of "`, followed by a document count and `" documents covering"` → Replace with analytical lede
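
The scan above can be sketched in bash; this is a minimal sketch under assumptions: the article directory and the regex forms of the pattern/prefix matches are illustrative, not taken from the repository.

```bash
# Sketch of the banned-pattern scan. The directory argument and the
# regex spellings of the pattern/prefix matches are assumptions; adjust
# to the generator's actual output path and markup.
scan_banned() {
  local dir="$1" found=0 pattern
  local patterns=(
    "The political landscape remains fluid"
    "No chamber debate data is available"
    "Touches on [A-Za-z ]+ policy\."
    "Analysis of [0-9]+ documents covering"
  )
  for pattern in "${patterns[@]}"; do
    # -r recurse, -l list matching files, -E extended regex
    if grep -rlE "$pattern" "$dir" 2>/dev/null; then
      echo "BANNED pattern found: $pattern"
      found=1
    fi
  done
  return $found
}

# Example: non-zero exit means at least one article needs rewriting.
scan_banned "public/news" || echo "Replace flagged boilerplate before committing."
```

Any file path printed identifies an article that still carries stub content.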

**3. Enforce per-document unique "Why It Matters"** — Verify that NO two documents in the same article share identical "Why It Matters" text. If found, rewrite each with document-specific evidence.
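
The uniqueness check can be sketched with standard text tools; this assumes each "Why It Matters" blurb occupies the single line after its heading marker, which may not match the generator's actual HTML.

```bash
# Sketch: flag repeated "Why It Matters" text within one article.
# Assumes each blurb sits on the single line after a "Why It Matters"
# marker; the generated markup may differ.
dupes() {
  grep -A1 "Why It Matters" "$1" \
    | grep -v "Why It Matters" | grep -v '^--$' \
    | sort | uniq -d
}

# Any output line is analysis text shared by two or more documents:
dupes "article.html"
```

An empty result means every document carries its own analysis, which is what step 3 requires.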

**4. Enforce minimum analytical depth** — Every article MUST contain:
- Analytical lede naming actors and political significance
- Per-document analysis (not flat list of links)
- Winners & Losers with named parties and evidence (≥50 words)
- Key Takeaways with confidence labels (3-5 bullet points)
- Analysis references section with GitHub links

**5. Run self-quality check** — Score each article against the 5-dimension rubric from SHARED_PROMPT_PATTERNS.md §"Article Quality Self-Check". If any article scores below 7.0 composite, revise before committing.
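
As a sketch, the composite gate might look like the following, assuming equal weights across the five dimensions; the authoritative rubric and any weighting live in SHARED_PROMPT_PATTERNS.md.

```bash
# Sketch: gate on the 5-dimension composite score. Equal weighting is an
# assumption; SHARED_PROMPT_PATTERNS.md defines the actual rubric.
composite_ok() {
  # args: five dimension scores (0-10); exit 0 iff the mean is >= 7.0
  echo "$@" | awk '{ s = ($1 + $2 + $3 + $4 + $5) / 5; exit (s < 7.0) }'
}

composite_ok 8 7 9 6 8 && echo "commit" || echo "revise before committing"
```

Articles failing the gate go back through steps 2-4 before any commit.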

## Step 4: Translate & Validate

Check for untranslated Swedish content in non-Swedish articles:
24 changes: 24 additions & 0 deletions .github/workflows/news-committee-reports.md
@@ -681,6 +681,30 @@ npx tsx scripts/fix-article-navigation.ts

**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description.

### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis files and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0 §"AI Article Content Generation Protocol".

**1. Read pre-computed analysis** — Before modifying article content, read:
```bash
cat "analysis/daily/${ARTICLE_DATE}/committeeReports/synthesis-summary.md"
cat "analysis/daily/${ARTICLE_DATE}/committeeReports/swot-analysis.md"
cat "analysis/daily/${ARTICLE_DATE}/committeeReports/risk-assessment.md"
cat "analysis/daily/${ARTICLE_DATE}/committeeReports/stakeholder-perspectives.md"
```

**2. Replace script-generated lede** — Find and replace any `<p class="lede">Analysis of N documents covering...` with an AI-generated analytical lede naming the most significant committee report, key actors, and political significance.

**3. Replace boilerplate "Why It Matters"** — For EACH committee report entry, replace any `"Touches on {X} policy..."` boilerplate with document-specific analysis citing the committee code, policy measure, budget impact, and party positions.

**4. Replace generic "Winners & Losers"** — Find and replace `"The political landscape remains fluid..."` with specific winners/losers naming parties (M, S, SD, V, MP, C, L, KD) with evidence from vote records or committee decisions.

**5. Replace excuse-as-analysis** — Find and replace `"No chamber debate data is available..."` with either: (a) actual debate data from MCP `search_anforanden`, or (b) analysis of the committee report text itself.

**6. Add Key Takeaways** — If missing, add 3-5 bullet points with bold lead phrases, dok_id citations, and [HIGH/MEDIUM/LOW] confidence labels.

**7. Verify policy domain labels** — Ensure each committee report is classified by its committee code (FöU=Defence, JuU=Justice, SoU=Healthcare, etc.), NOT by keyword heuristics.
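
A lookup-table sketch of code-based classification: the FöU/JuU/SoU pairs come from this step, while FiU (Finance) and UU (Foreign Affairs) are standard Riksdag committee abbreviations added for illustration; verify every pair against the project's canonical mapping table.

```bash
# Sketch: classify by committee code, never by keyword heuristics.
# FiU and UU mappings are illustrative additions; confirm against the
# canonical policy domain mapping table.
policy_domain() {
  case "$1" in
    FöU) echo "Defence" ;;
    JuU) echo "Justice" ;;
    SoU) echo "Healthcare" ;;
    FiU) echo "Finance" ;;
    UU)  echo "Foreign Affairs" ;;
    *)   echo "UNMAPPED:$1" ;;   # fail loudly rather than guess a domain
  esac
}

policy_domain "FöU"
```

The `UNMAPPED` fallback is deliberate: an unknown code should surface for manual mapping, not fall back to keyword matching.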

### Step 4: Translate Swedish Content & Verify Analysis Quality
All Swedish API data MUST be translated. Check every article for `data-translate="true"` markers.

20 changes: 20 additions & 0 deletions .github/workflows/news-interpellations.md
@@ -692,6 +692,26 @@ For each interpellation found, cross-reference the minister's response to identi

**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description.

### Step 3d: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0.
>
> **Note:** This is Step 3**d** (not 3c) because interpellations has an additional Step 3b (Cross-Reference Minister Responses) and Step 3c (AI Title/Meta), shifting this enforcement step to 3d. All other workflows use Step 3c for this same enforcement.

> **Review comment (Copilot AI, Apr 3, 2026, on lines +695 to +697):** The PR description calls out "Step 3c" enforcement across workflows, but this workflow introduces the enforcement section as "Step 3d". If the numbering difference is intentional (because Step 3c is already used for title/meta here), align the label with the PR description (or adjust the PR description) to avoid confusion when following the instructions.

**1. Read pre-computed analysis** — Read synthesis, SWOT, risk analysis from `analysis/daily/${ARTICLE_DATE}/interpellations/`.

**2. Replace script-generated lede** — Replace any `"Analysis of N documents..."` with AI lede naming the most targeted minister, the filing party strategy, and the most significant interpellation topic.

**3. Replace boilerplate "Why It Matters"** — For EACH interpellation, write unique analysis citing the interpellation number, the specific question asked, the targeted minister's portfolio, and why this matters politically. BANNED: `"Touches on {X} policy..."` boilerplate.

**4. Replace generic "Winners & Losers"** — Replace `"The political landscape remains fluid..."` with specific accountability analysis: which ministers face the most pressure, which opposition parties demonstrate coordination, and minister response timeliness.

**5. Integrate minister response data** — Use cross-reference results from Step 3b (minister response speeches via MCP `search_anforanden`) to enrich the article with response summaries, accountability gaps, and policy commitments.

**6. Replace excuse-as-analysis** — Replace `"No chamber debate data..."` with analysis from the interpellation text itself or minister response speeches.

**7. Add interpellation coordination analysis** — Identify patterns: Are multiple interpellations targeting the same minister? The same policy area? Filed on the same day (suggesting coordination)?
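
Coordination patterns can be sketched over a simple TSV extract (minister, policy area, filing date per line); the TSV layout is an assumption, derive it from the analysis files or the MCP results.

```bash
# Sketch: surface coordination signals from a TSV of interpellations
# with columns minister<TAB>policy_area<TAB>filing_date. The TSV shape
# is an assumption; build it from analysis files or MCP output.
coordination() {
  echo "Ministers targeted more than once:"
  cut -f1 "$1" | sort | uniq -c | awk '$1 > 1'
  echo "Same-day filings (possible coordination):"
  cut -f3 "$1" | sort | uniq -c | awk '$1 > 1'
}
```

Repeated ministers suggest sustained accountability pressure; clustered filing dates suggest a coordinated opposition move.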

### Step 4: Translate, Validate & Verify Analysis Quality

Run validation and HTMLHint before creating PR:
18 changes: 18 additions & 0 deletions .github/workflows/news-motions.md
@@ -673,6 +673,24 @@ npx tsx scripts/fix-article-navigation.ts

**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description.

### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0.

**1. Read pre-computed analysis** — Read synthesis, SWOT, risk analysis from `analysis/daily/${ARTICLE_DATE}/motions/`.

**2. Replace script-generated lede** — Replace any `"Analysis of N documents..."` with AI lede naming the most significant opposition motion(s), filing party, and policy target.

**3. Replace boilerplate "Why It Matters"** — For EACH motion, write unique analysis citing motion number, specific policy proposal, party strategy, and target government policy. BANNED: `"Touches on {X} policy..."` boilerplate.

**4. Replace generic "Winners & Losers"** — Replace `"The political landscape remains fluid..."` with specific opposition strategy analysis: which parties filed, what government policies they target, likelihood of committee support.

**5. Verify document count consistency** — Ensure the number of motions in the title/lede matches the number detailed in the body. If title says "50 motions" but body details 10, either expand body or correct the title.
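
One way to sketch this consistency check in bash; the "N motions" title phrasing and the `motion-entry` class name are hypothetical markers, substitute whatever markup the generator actually emits.

```bash
# Sketch: compare the count claimed in the title with the number of
# motion entries in the body. 'motion-entry' is a hypothetical marker;
# use the generator's real per-document markup.
count_check() {
  local file="$1" claimed detailed
  claimed=$(grep -oE '[0-9]+ motions' "$file" | head -1 | grep -oE '[0-9]+')
  detailed=$(grep -c 'class="motion-entry"' "$file")
  if [ "${claimed:-0}" -ne "$detailed" ]; then
    echo "MISMATCH: title claims ${claimed:-?} motions, body details $detailed"
    return 1
  fi
}
```

On a mismatch, either expand the body or correct the title, per the step above.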

**6. Verify policy domain classification** — Each motion MUST be classified by its Riksdag committee assignment (utskott), NOT by keyword matching. BANNED: Classifying a food safety motion as "housing policy" based on co-location with housing motions.

**7. Add opposition strategy context** — Explain whether motions represent coordinated opposition campaign, individual MP initiative, or response to government proposition.

### Step 4: Translate, Validate & Verify Analysis Quality

Run validation and HTMLHint before creating PR:
18 changes: 18 additions & 0 deletions .github/workflows/news-propositions.md
@@ -661,6 +661,24 @@ npx tsx scripts/fix-article-navigation.ts

**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description.

### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: The AI MUST read pre-computed analysis and rewrite ALL script-generated stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0.

**1. Read pre-computed analysis** — Read synthesis, SWOT, risk, and stakeholder analysis from `analysis/daily/${ARTICLE_DATE}/propositions/`.

**2. Replace script-generated lede** — Replace any `"Analysis of N documents..."` placeholder with AI lede naming specific propositions, ministers, and political significance.

**3. Replace boilerplate "Why It Matters"** — For EACH proposition, write unique analysis citing the proposition number (e.g., Prop. 2025/26:235), specific policy changes, budget impact (SEK amounts), and affected populations. BANNED: `"Touches on {X} policy..."` boilerplate.

**4. Replace generic "Winners & Losers"** — Replace `"The political landscape remains fluid..."` with specific analysis naming government ministers who tabled the propositions and opposition parties likely to challenge them.

**5. Replace excuse-as-analysis** — Replace `"No chamber debate data..."` with analysis of the proposition text itself or debate data from MCP `search_anforanden`.

**6. Handle empty analysis** — If synthesis reports "0 documents analyzed", use MCP `get_propositioner(rm="2025/26")` directly. NEVER publish with "0 documents analyzed" as content.

**7. Add Strategic Context** — Explain whether propositions represent coordinated government offensive (pre-election legislative push) or routine business. Cross-reference with committee reports and motions from the same date.

### Step 4: Translate, Validate & Verify Analysis Quality

Run validation and HTMLHint before creating PR:
16 changes: 16 additions & 0 deletions .github/workflows/news-realtime-monitor.md
@@ -815,6 +815,22 @@ If the script genuinely fails after verifying MCP, generate articles manually ON

**4. Update all metadata** — Ensure `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>` all reflect the AI-generated title and description.

## Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: Breaking news articles MUST have the highest content quality. Read pre-computed analysis and rewrite ALL stub content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0.

**1. Read pre-computed analysis** — Read ALL analysis files from `analysis/daily/${ARTICLE_DATE}/realtime-${HHMM}/` including per-document analyses in `documents/`.

**2. Write intelligence-grade lede** — Breaking news ledes MUST name the specific development, key actor, quantified impact (SEK amounts, seat counts, affected populations), and urgency.

**3. Write unique "Why It Matters"** per document — Each document's analysis MUST be specific to that document's content. BANNED: any repeated `"Touches on {X} policy..."` boilerplate.

**4. Write substantive "Winners & Losers"** — Name specific parties, ministers, agencies, and sectors with evidence from the analysis. BANNED: `"The political landscape remains fluid..."`.

**5. Include Key Takeaways** — 3-5 bullet points with confidence labels and dok_id citations.

**6. Include visualization data** — For breaking news with voting data, include Chart.js vote distribution data. For defense/budget articles, include budget allocation data.
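
The vote-distribution payload might be sketched as below; the Chart.js bar-chart shape is generic, the party labels follow the order used elsewhere in these workflows, and the zeroed vote counts are placeholders to fill from the actual voting record.

```bash
# Sketch: emit a Chart.js vote-distribution config for embedding via the
# data-chart-config convention. Vote counts are placeholders; populate
# them from the real voting record before publishing.
out="$(mktemp)"
cat <<'JSON' > "$out"
{
  "type": "bar",
  "data": {
    "labels": ["S", "SD", "M", "V", "C", "KD", "MP", "L"],
    "datasets": [
      { "label": "Yes votes", "data": [0, 0, 0, 0, 0, 0, 0, 0] }
    ]
  },
  "options": {
    "plugins": {
      "title": { "display": true, "text": "Vote distribution by party" }
    }
  }
}
JSON
echo "Chart config written to $out"
```

Per the data-chart-config convention noted in the PR commits, the chart requires explicit config; never rely on library defaults.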

## Step 4: Validate & Translate

```bash
18 changes: 18 additions & 0 deletions .github/workflows/news-week-ahead.md
@@ -491,6 +491,24 @@ npx tsx scripts/fix-article-navigation.ts

**4. Update all metadata** — `<title>`, `<meta name="description">`, `<meta property="og:title">`, `<meta property="og:description">`, and `<h1>`.

### Step 3c: AI Content Quality Enforcement (v4.0 — MANDATORY)

> 🚨 **v4.0 CRITICAL**: Week-ahead articles require forward-looking intelligence. Read pre-computed analysis and generate prospective content. See `SHARED_PROMPT_PATTERNS.md` §"AI ARTICLE CONTENT GENERATION" and `ai-driven-analysis-guide.md` v4.0.

**1. Read pre-computed analysis** — Read analysis from `analysis/daily/${ARTICLE_DATE}/week-ahead/`. If synthesis reports "0 documents analyzed", use MCP `get_calendar_events` and `get_betankanden` to populate content directly.

**2. Generate forward-looking lede** — Week-ahead ledes MUST name specific upcoming events (committee votes, plenary debates, government announcements) with dates and significance. BANNED: empty or generic ledes.

**3. Generate committee schedule analysis** — For each scheduled committee report debate, explain: what the committee decided, which parties filed reservations, and what the expected plenary vote outcome is.

**4. Generate government agenda preview** — List upcoming government actions (propositions expected, ministerial meetings, EU engagements) with political significance context.

**5. Replace generic filler** — Remove `"The political landscape remains fluid..."` and replace with specific forward indicators derived from MCP data (e.g., `get_calendar_events`, `get_betankanden`). Each indicator MUST name a real upcoming event, committee, or deadline extracted from the data — e.g., "Watch: `<COMMITTEE>` scheduling `<TOPIC>` follow-up by `<DATE from calendar>`". Do NOT hard-code example dates or event names; always source them from the current week's MCP query results.

**6. Verify document count consistency** — Ensure report counts are consistent across title, lede, body, and key takeaways. Contradictory counts (17 vs 42 vs 16) are REJECTED.

**7. Handle Easter/recess periods** — When parliament is in recess, explain what legislation is pending for the return session and what government agencies are acting during recess.

### Step 4: Translate, Validate & Verify Analysis Quality

Run validation and HTMLHint before creating PR: