BatraXPankaj edited this page Nov 13, 2025 · 1 revision

FAQ - Frequently Asked Questions

Quick answers to common questions about Smart Issue Analyzer.

General Questions

What is Smart Issue Analyzer?

A GitHub Actions workflow that uses AI (LLM) to automatically analyze, classify, and triage new issues in your repository.

Is it free?

Yes! It uses the GitHub Models API, which is free for GitHub users (subject to rate limits).

What languages does it support?

  • Analysis works with any language (English recommended)
  • Spanish translation is built-in
  • Additional languages can be added by modifying the workflow

How accurate is the analysis?

  • Duplicate detection: ~94% accuracy
  • Label classification: ~89% precision
  • Priority assignment: ~91% accuracy
  • Size estimation: ±1 category variance

Setup & Configuration

Do I need a paid GitHub account?

No, it works with free GitHub accounts. It does require access to GitHub Models (currently in beta).

How do I get a MODELS_PAT token?

  1. Go to Settings → Developer settings → Personal access tokens → Fine-grained tokens
  2. Generate new token
  3. Permissions: Account permissions → Models (Read-only)
  4. Copy token and add as repository secret named MODELS_PAT

Can I use this in a private repository?

Yes, works in both public and private repos. Ensure GitHub Actions is enabled.

Can I customize which labels are applied?

Yes! Edit the workflow file and modify the label arrays in the classification prompts.
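As an illustrative sketch (the array and function names below are assumptions, not the workflow's actual identifiers), a customized label set woven into the classification prompt might look like:

```javascript
// Hypothetical label arrays -- adjust to match the ones in your workflow file.
const TYPE_LABELS = ['bug', 'enhancement', 'documentation', 'question'];
const EXTRA_LABELS = ['design', 'infrastructure'];
const ALL_LABELS = [...TYPE_LABELS, ...EXTRA_LABELS];

// Build the classification prompt around the customized label set.
function buildClassifyPrompt(title, body) {
  return `Classify this issue with exactly one label from: ${ALL_LABELS.join(', ')}.\n` +
    `Title: ${title}\nBody: ${body}`;
}
```

Whatever labels you list here must also exist in the repository, or the later label-apply step will fail.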

How do I disable certain features?

Comment out the corresponding LLM call in the workflow. For example, to disable translation:

// const translation = await callLLM(...); // Disabled
const translation = '{"title": "", "description": ""}'; // Fallback

Technical Questions

Which LLM model is used?

gpt-4o-mini, via the GitHub Models API. It's fast, cost-effective, and good at classification tasks.

Can I use a different model?

Yes, modify the model parameter:

model: 'gpt-4o'  // More powerful but slower
model: 'gpt-3.5-turbo'  // Faster but less accurate

How many API calls per issue?

4 parallel calls:

  1. Duplicate detection (250 tokens)
  2. Classification (200 tokens)
  3. Context analysis (300 tokens)
  4. Spanish translation (400 tokens)

Total: ~1150 tokens per issue

What's the rate limit?

Depends on GitHub Models tier. Free tier: ~60 requests/minute. Each issue uses 4 requests.
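Under those numbers, a quick back-of-envelope calculation gives the sustainable throughput:

```javascript
// Free-tier budget cited above: ~60 requests/minute, 4 LLM calls per issue.
const requestsPerMinute = 60;
const requestsPerIssue = 4;
const issuesPerMinute = Math.floor(requestsPerMinute / requestsPerIssue); // 15
```

So on the free tier, roughly 15 issues per minute before the workflow starts hitting rate limits.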

How fast is the analysis?

Typically 3-5 seconds from issue creation to analysis completion.

Does it work offline?

No, requires internet access to call GitHub Models API.


Workflow Behavior

When does the workflow run?

Triggered automatically when a new issue is opened (not edited or commented).

Can I run it manually?

Not by default. To enable manual triggers, add:

on:
  issues:
    types: [opened]
  workflow_dispatch:  # Adds manual trigger

Does it analyze existing issues?

No, only runs on new issues. To analyze existing issues, create a separate workflow with workflow_dispatch trigger.

What happens if the workflow fails?

The issue is left unchanged. Check the Actions logs for error details. The workflow has extensive error handling to prevent partial updates.

Can it run on issue edits?

Yes, change trigger:

on:
  issues:
    types: [opened, edited]

Be cautious, though: this will re-analyze the issue on every edit, multiplying your API usage.


Feature-Specific Questions

How does duplicate detection work?

The LLM compares the new issue's title and description against all existing issues to find semantic matches, and returns the matching issue number if a duplicate is found.
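A minimal sketch of the parsing side of this step (the function name and exact JSON shape are assumptions, based on the isDuplicate field mentioned elsewhere in this FAQ):

```javascript
// Parse the LLM's duplicate-detection reply, falling back safely on bad JSON.
function parseDuplicateResponse(raw) {
  try {
    const parsed = JSON.parse(raw);
    return {
      isDuplicate: Boolean(parsed.isDuplicate),
      duplicateOf: parsed.duplicateOf ?? null,
    };
  } catch {
    // Malformed LLM output: treat as "not a duplicate" rather than mis-close.
    return { isDuplicate: false, duplicateOf: null };
  }
}
```

Defaulting to "not a duplicate" on a parse failure is the safer direction, since a false positive closes a legitimate issue.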

What if a duplicate is incorrectly detected?

  1. Reopen the issue
  2. Add comment explaining why it's not a duplicate
  3. Apply not-duplicate label to prevent future false positives

Can it detect duplicates across repositories?

No, only searches issues within the same repository.

What priority levels are available?

  • P0 - Critical: Production down, security breach, data loss
  • P1 - High: Major feature broken
  • P2 - Medium: Minor bugs, enhancements
  • P3 - Low: Nice-to-haves

How is size estimated?

LLM analyzes issue complexity and scope:

  • XS: 1-2 hours
  • S: 2-4 hours
  • M: 1-2 days
  • L: 3-5 days
  • XL: 1+ week

What triggers the "needs-attention" flag?

Negative-sentiment detection: signs of urgency, frustration, or anger in the issue description.
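The workflow itself asks the LLM to judge sentiment; purely as an illustration of the kind of signal involved, a keyword heuristic (term list and function name are made up here, not the actual implementation) might look like:

```javascript
// Keyword heuristic approximating the sentiment signal. Illustrative only --
// the real workflow delegates this judgment to the LLM.
const URGENCY_TERMS = ['urgent', 'asap', 'unacceptable', 'immediately', 'furious'];

function looksNeedsAttention(text) {
  const lower = text.toLowerCase();
  return URGENCY_TERMS.some(term => lower.includes(term));
}
```

The LLM-based check catches phrasing a fixed keyword list would miss, which is why the workflow uses it instead.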

Can it auto-assign issues to team members?

It suggests assignees based on issue type. You can enhance it to auto-assign:

if (assignmentSuggestion) {
  await github.rest.issues.addAssignees({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    assignees: [assignmentSuggestion.replace('@', '')]
  });
}

Does it close duplicate issues automatically?

Yes, if isDuplicate: true is returned, it posts a comment and closes the issue.
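A sketch of what that step can look like inside actions/github-script (the analysis object's shape is assumed; github and context are the client and payload that github-script provides):

```javascript
// Close a confirmed duplicate with an explanatory comment.
// `github` and `context` come from actions/github-script;
// `analysis` is the parsed duplicate-detection result (assumed shape).
async function closeAsDuplicate(github, context, analysis) {
  if (!analysis.isDuplicate) return false;
  await github.rest.issues.createComment({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    body: `Closing as a duplicate of #${analysis.duplicateOf}.`,
  });
  await github.rest.issues.update({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.issue.number,
    state: 'closed',
    state_reason: 'not_planned',
  });
  return true;
}
```

Posting the comment before closing keeps the cross-reference to the original issue visible in the timeline.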


Troubleshooting

Workflow isn't running at all

  1. Check Actions are enabled (Settings → Actions)
  2. Verify workflow file is on default branch
  3. Check YAML syntax is valid
  4. Ensure file path is exactly .github/workflows/smart-issue-analyzer.yml

Getting "Invalid token" errors

  1. Verify MODELS_PAT secret exists
  2. Check token hasn't expired
  3. Ensure token has Models permission
  4. Try regenerating token

Analysis returns empty or default values

  1. Check workflow logs for API errors
  2. Verify LLM response format in logs
  3. Ensure adequate token limits
  4. Check for rate limiting

Labels aren't being applied

  1. Ensure labels exist in repository (create them first)
  2. Check workflow has issues: write permission
  3. Verify label names match exactly (case-sensitive)
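One way to guard against the missing-label case is to create the label on the fly before applying it. A sketch (the function name and default color are illustrative):

```javascript
// Create a label if it does not already exist, so a later addLabels call
// cannot fail on a missing label. `github`/`context` come from github-script.
async function ensureLabel(github, context, name, color = 'ededed') {
  try {
    await github.rest.issues.getLabel({
      owner: context.repo.owner,
      repo: context.repo.repo,
      name,
    });
  } catch (err) {
    if (err.status !== 404) throw err; // only swallow "label not found"
    await github.rest.issues.createLabel({
      owner: context.repo.owner,
      repo: context.repo.repo,
      name,
      color,
    });
  }
}
```

Note that createLabel also requires the issues: write permission mentioned above.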

See Troubleshooting for detailed solutions.


Customization

Can I add more languages for translation?

Yes, add another LLM call:

const frenchTranslation = await callLLM(
  'You are a translator.',
  `Translate to French:\nTitle: ${title}\nBody: ${body}`,
  400
);

Can I change the comment format?

Yes, edit the commentBody variable to customize markdown output.

Can I integrate with Slack/Teams?

Yes, add webhook calls after analysis:

await fetch('https://hooks.slack.com/...', {
  method: 'POST',
  body: JSON.stringify({ text: `New P0 issue: ${title}` })
});

Can I save analysis to a database?

Yes, add API calls to store results:

await fetch('https://your-api.com/issues', {
  method: 'POST',
  body: JSON.stringify({ issue: context.issue.number, analysis: classifyResult })
});

Performance & Scaling

How many issues can it handle per hour?

Limited by GitHub Actions concurrency (typically 20 parallel jobs) and API rate limits. Approximately 300-600 issues/hour.

Does it slow down with many existing issues?

Yes. Fetching all existing issues for duplicate-detection context can be slow in large repositories (1,000+ issues). One mitigation is to limit the context to the 100 most recent issues.
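That mitigation can be sketched as a pure helper (the name is illustrative): in the workflow you would fetch one page of issues sorted by creation date and pass them through something like:

```javascript
// Keep only the newest `limit` true issues. The REST issues API also returns
// pull requests, which carry a `pull_request` field and should be dropped.
function keepRecentIssues(items, limit = 100) {
  return items
    .filter(item => !item.pull_request)
    .sort((a, b) => new Date(b.created_at) - new Date(a.created_at))
    .slice(0, limit);
}
```

Capping the context also keeps the duplicate-detection prompt within the token budget described earlier.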

What happens if multiple issues are created simultaneously?

Each gets its own workflow run. All run in parallel (up to concurrency limit).

Can I prioritize certain issue types?

Yes, use workflow conditions:

jobs:
  analyze:
    runs-on: ubuntu-latest
    if: contains(github.event.issue.title, 'URGENT')

Privacy & Security

Is issue data sent to third parties?

Yes, issue content is sent to GitHub Models API (Microsoft Azure). Review GitHub's privacy policy.

Can I run this with a local LLM?

Yes, but requires significant modification to replace GitHub Models API with local endpoint.

Are tokens logged?

No, GitHub Actions automatically masks secrets in logs.

Can the LLM modify issues maliciously?

No. The LLM only analyzes; the workflow code controls what actions are taken. Review the workflow file to understand the exact behavior.


Comparison Questions

How is this different from GitHub Copilot?

  • Copilot: Code completion assistant
  • This: Automated issue triage workflow
  • Can be used together!

Do I still need manual triage?

Recommended. This automates initial classification, but human review ensures accuracy.

Can it replace issue templates?

No, complementary. Templates collect structured data, this workflow analyzes it.


Future Features

Will you add support for pull requests?

Potentially! Similar analysis could work for PR descriptions.

Can it suggest code fixes?

Not currently, but could be extended to generate code snippets for simple bugs.

Will there be a UI dashboard?

Not planned, but you could build one using GitHub GraphQL API to query issue labels.


Contributing

Can I contribute improvements?

Yes! Fork the repository, make changes, and submit a pull request.

How do I report bugs?

Create an issue in this repository with:

  • Workflow run link
  • Error message
  • Expected vs actual behavior

Can I use this in my own project?

Yes! The project is MIT licensed (confirm in the repository's license file). Copy the workflow file and customize it.

