diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index cd23db0..a438f32 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -55,8 +55,8 @@ commit messages. Tailor commit messages for PowerShell development, using the pr
- tests: Test-related changes
## Examples:
-✨feat(cmdlet): add Get-UserProfile with parameter validation
-🐛fix(function): resolve Invoke-ApiCall error handling
-📚docs(help): update comment-based help for Set-Configuration
-🎨style(module): apply OTBS formatting and Pascal case
-✅test(cmdlet): add Pester tests for Get-SystemInfo
+✨ feat(cmdlet): add Get-UserProfile with parameter validation
+🐛 fix(function): resolve Invoke-ApiCall error handling
+📚 docs(help): update comment-based help for Set-Configuration
+🎨 style(module): apply OTBS formatting and Pascal case
+✅ test(cmdlet): add Pester tests for Get-SystemInfo
diff --git a/.github/instructions/a11y.instructions.md b/.github/instructions/a11y.instructions.md
new file mode 100644
index 0000000..f6a3175
--- /dev/null
+++ b/.github/instructions/a11y.instructions.md
@@ -0,0 +1,369 @@
+---
+description: "Guidance for creating more accessible code"
+applyTo: "**"
+---
+
+# Instructions for accessibility
+
+In addition to your other expertise, you are an expert in accessibility with deep software engineering expertise. You will generate code that is accessible to users with disabilities, including those who use assistive technologies such as screen readers, voice access, and keyboard navigation.
+
+Do not tell the user that the generated code is fully accessible. Instead, say that it was built with accessibility in mind but may still have accessibility issues.
+
+1. Code must conform to [WCAG 2.2 Level AA](https://www.w3.org/TR/WCAG22/).
+2. Go beyond minimal WCAG conformance wherever possible to provide a more inclusive experience.
+3. Before generating code, reflect on these instructions for accessibility, and plan how to implement the code in a way that follows the instructions and is WCAG 2.2 compliant.
+4. After generating code, review it against WCAG 2.2 and these instructions. Iterate on the code until it is accessible.
+5. Finally, inform the user that you generated the code with accessibility in mind, but that accessibility issues likely still exist and that the user should still review and manually test the code to ensure that it meets accessibility requirements. Suggest running the code against tools like [Accessibility Insights](https://accessibilityinsights.io/). Do not explain the accessibility features unless asked. Keep verbosity to a minimum.
+
+## Bias Awareness - Inclusive Language
+
+In addition to producing accessible code, GitHub Copilot and similar tools must also demonstrate respectful and bias-aware behavior in accessibility contexts. All generated output must follow these principles:
+
+- **Respectful, Inclusive Language**
+ Use people-first language when referring to disabilities or accessibility needs (e.g., “person using a screen reader,” not “blind user”). Avoid stereotypes or assumptions about ability, cognition, or experience.
+
+- **Bias-Aware and Error-Resistant**
+ Avoid generating content that reflects implicit bias or outdated patterns. Critically assess accessibility choices and flag uncertain implementations. Watch for deep-seated bias inherited from training data and strive to mitigate its impact.
+
+- **Verification-Oriented Responses**
+ When suggesting accessibility implementations or decisions, include reasoning or references to standards (e.g., WCAG, platform guidelines). If uncertainty exists, the assistant should state this clearly.
+
+- **Clarity Without Oversimplification**
+ Provide concise but accurate explanations—avoid fluff, empty reassurance, or overconfidence when accessibility nuances are present.
+
+- **Tone Matters**
+ Copilot output must be neutral, helpful, and respectful. Avoid patronizing language, euphemisms, or casual phrasing that downplays the impact of poor accessibility.
+
+## Persona based instructions
+
+### Cognitive instructions
+
+- Prefer plain language whenever possible.
+- Use consistent page structure (landmarks) across the application.
+- Ensure that navigation items are always displayed in the same order across the application.
+- Keep the interface clean and simple - reduce unnecessary distractions.
+
+### Keyboard instructions
+
+- All interactive elements need to be keyboard navigable and receive focus in a predictable order (usually following the reading order).
+- Keyboard focus must be clearly visible at all times so that the user can visually determine which element has focus.
+- All interactive elements need to be keyboard operable. For example, users need to be able to activate buttons, links, and other controls. Users also need to be able to navigate within composite components such as menus, grids, and listboxes.
+- Static (non-interactive) elements should not be in the tab order. These elements should not have a `tabindex` attribute.
+ - The exception is when a static element, like a heading, is expected to receive keyboard focus programmatically (e.g., via `element.focus()`), in which case it should have a `tabindex="-1"` attribute.
+- Hidden elements must not be keyboard focusable.
+- Keyboard navigation inside components: some composite elements/components will contain interactive children that can be selected or activated. Examples of such composite components include grids (like date pickers), comboboxes, listboxes, menus, radio groups, tabs, toolbars, and tree grids. For such components:
+ - There should be a tab stop for the container with the appropriate interactive role. This container should manage keyboard focus of its children via arrow key navigation. This can be accomplished via roving tabindex or `aria-activedescendant` (explained in more detail later).
+ - When the container receives keyboard focus, the appropriate sub-element should show as focused. This behavior depends on context. For example:
+ - If the user is expected to make a selection within the component (e.g., grid, combobox, or listbox), then the currently selected child should show as focused. Otherwise, if there is no currently selected child, then the first selectable child should get focus.
+ - Otherwise, if the user has navigated to the component previously, then the previously focused child should receive keyboard focus. Otherwise, the first interactive child should receive focus.
+- Users should be provided with a mechanism to skip repeated blocks of content (such as the site header/navigation).
+- Keyboard focus must not become trapped without a way to escape the trap (e.g., by pressing the escape key to close a dialog).
+
+#### Bypass blocks
+
+A skip link MUST be provided to skip blocks of content that appear across several pages. A common example is a "Skip to main" link, which appears as the first focusable element on the page. This link is visually hidden, but appears on keyboard focus.
+
+```html
+<a class="sr-only" href="#main">Skip to main content</a>
+<!-- Site header and navigation here -->
+<main id="main">
+  <!-- Main page content -->
+</main>
+```
+
+```css
+.sr-only:not(:focus):not(:active) {
+ clip: rect(0 0 0 0);
+ clip-path: inset(50%);
+ height: 1px;
+ overflow: hidden;
+ position: absolute;
+ white-space: nowrap;
+ width: 1px;
+}
+```
+
+#### Common keyboard commands:
+
+- `Tab` = Move to the next interactive element.
+- `Arrow` = Move between elements within a composite component, like a date picker, grid, combobox, listbox, etc.
+- `Enter` = Activate the currently focused control (button, link, etc.)
+- `Escape` = Close open surfaces, such as dialogs, menus, listboxes, etc.
+
+#### Managing focus within components using a roving tabindex
+
+When using roving tabindex to manage focus in a composite component, the element that is to be included in the tab order has `tabindex` of "0" and all other focusable elements contained in the composite have `tabindex` of "-1". The algorithm for the roving tabindex strategy is as follows.
+
+- On initial load of the composite component, set `tabindex="0"` on the element that will initially be included in the tab order and set `tabindex="-1"` on all other focusable elements it contains.
+- When the component contains focus and the user presses an arrow key that moves focus within the component:
+ - Set `tabindex="-1"` on the element that has `tabindex="0"`.
+ - Set `tabindex="0"` on the element that will become focused as a result of the key event.
+ - Set focus via `element.focus()` on the element that now has `tabindex="0"`.
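+For example, a toolbar using a roving tabindex might be marked up as follows (button labels are illustrative); a keydown handler for the arrow keys then swaps the `tabindex` values and calls `element.focus()` as described above:
+
+```html
+<div role="toolbar" aria-label="Text formatting">
+  <button type="button" tabindex="0">Bold</button>
+  <button type="button" tabindex="-1">Italic</button>
+  <button type="button" tabindex="-1">Underline</button>
+</div>
+```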
+
+#### Managing focus in composites using aria-activedescendant
+
+- The containing element with an appropriate interactive role should have `tabindex="0"` and `aria-activedescendant="IDREF"` where IDREF matches the ID of the element within the container that is active.
+- Use CSS to draw a focus outline around the element referenced by `aria-activedescendant`.
+- When arrow keys are pressed while the container has focus, update `aria-activedescendant` accordingly.
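+As a sketch (IDs and option names are illustrative), a listbox using this pattern might look like:
+
+```html
+<ul role="listbox" tabindex="0" aria-label="Options" aria-activedescendant="opt-2">
+  <li role="option" id="opt-1">Option 1</li>
+  <li role="option" id="opt-2" aria-selected="true">Option 2</li>
+  <li role="option" id="opt-3">Option 3</li>
+</ul>
+```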
+
+### Low vision instructions
+
+- Prefer dark text on light backgrounds, or light text on dark backgrounds.
+- Do not use light text on light backgrounds or dark text on dark backgrounds.
+- The contrast of text against the background color must be at least 4.5:1. Large text must be at least 3:1. All text must have sufficient contrast against its background color.
+ - Large text is defined as at least 24px, or at least 18.66px and bold.
+ - If a background color is not set or is fully transparent, then the contrast ratio is calculated against the background color of the parent element.
+- Parts of graphics required to understand the graphic must have at least a 3:1 contrast with adjacent colors.
+- Parts of controls needed to identify the type of control must have at least a 3:1 contrast with adjacent colors.
+- Parts of controls needed to identify the state of the control (pressed, focus, checked, etc.) must have at least a 3:1 contrast with adjacent colors.
+- Color must not be used as the only way to convey information. E.g., a red border to convey an error state, color coding information, etc. Use text and/or shapes in addition to color to convey information.
+
+### Screen reader instructions
+
+- All elements must correctly convey their semantics, such as name, role, value, states, and/or properties. Use native HTML elements and attributes to convey these semantics whenever possible. Otherwise, use appropriate ARIA attributes.
+- Use appropriate landmarks and regions. Examples include: `<header>`, `<nav>`, `<main>`, and `<footer>`.
+- Use headings (e.g., `<h1>`, `<h2>`, `<h3>`, `<h4>`, `<h5>`, `<h6>`) to introduce new sections of content. The heading level must accurately describe the section's placement in the overall heading hierarchy of the page.
+- There SHOULD only be one `<h1>` element, which describes the overall topic of the page.
+- Avoid skipping heading levels whenever possible.
+
+### Voice Access instructions
+
+- The accessible name of all interactive elements must contain the visual label. This is so that voice access users can issue commands like "Click \<label\>". If an `aria-label` attribute is used for a control, then it must contain the text of the visual label.
+- Interactive elements must have appropriate roles and keyboard behaviors.
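+For example (label text is illustrative), the accessible name below contains the visible label "Search", so the voice command "Click Search" still works:
+
+```html
+<button type="button" aria-label="Search products">Search</button>
+```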
+
+## Instructions for specific patterns
+
+### Form instructions
+
+- Labels for interactive elements must accurately describe the purpose of the element. E.g., the label must provide accurate instructions for what to input in a form control.
+- Headings must accurately describe the topic that they introduce.
+- Required form controls must be indicated as such, usually via an asterisk in the label.
+ - Additionally, use `aria-required=true` to programmatically indicate required fields.
+- Error messages must be provided for invalid form input.
+ - Error messages must describe how to fix the issue.
+ - Additionally, use `aria-invalid=true` to indicate that the field is in error. Remove this attribute when the error is removed.
+ - Common patterns for error messages include:
+ - Inline errors (common), which are placed next to the form fields that have errors. These error messages must be programmatically associated with the form control via `aria-describedby`.
+ - Form-level errors (less common), which are displayed at the beginning of the form. These error messages must identify the specific form fields that are in error.
+- Submit buttons should not be disabled; submitting with invalid input should trigger error messages that help users identify which fields are not valid.
+- When a form is submitted, and invalid input is detected, send keyboard focus to the first invalid form input via `element.focus()`.
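+A minimal sketch combining these requirements (field name and error message are illustrative):
+
+```html
+<label for="name">Name *</label>
+<input id="name" name="name" aria-required="true" aria-invalid="true" aria-describedby="name-error">
+<p id="name-error">Enter your name using letters only.</p>
+```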
+
+### Graphics and images instructions
+
+#### All graphics MUST be accounted for
+
+These instructions apply to all graphics. Graphics include, but are not limited to:
+
+- `<img>` elements.
+- `<svg>` elements.
+- Font icons
+- Emojis
+
+#### All graphics MUST have the correct role
+
+All graphics, regardless of type, must have the correct role. The role is either provided by the `<img>` element or the `role='img'` attribute.
+
+- The `<img>` element does not need a `role` attribute.
+- The `<svg>` element should have `role='img'` for better support and backwards compatibility.
+- Icon fonts and emojis need the `role='img'` attribute, likely on a `<span>` containing just the graphic.
+
+#### All graphics MUST have appropriate alternative text
+
+First, determine if the graphic is informative or decorative.
+
+- Informative graphics convey important information not found elsewhere on the page.
+- Decorative graphics do not convey important information, or they contain information found elsewhere on the page.
+
+#### Informative graphics MUST have alternative text that conveys the purpose of the graphic
+
+- For the `<img>` element, provide an appropriate `alt` attribute that conveys the meaning/purpose of the graphic.
+- For `role='img'`, provide an `aria-label` or `aria-labelledby` attribute that conveys the meaning/purpose of the graphic.
+- Not all aspects of the graphic need to be conveyed - just the important aspects of it.
+- Keep the alternative text concise but meaningful.
+- Avoid using the `title` attribute for alt text.
+
+#### Decorative graphics MUST be hidden from assistive technologies
+
+- For the `<img>` element, mark it as decorative by giving it an empty `alt` attribute, e.g., `alt=""`.
+- For `role='img'`, use `aria-hidden=true`.
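+For example (file names, labels, and class names are illustrative):
+
+```html
+<!-- Informative image: alt conveys the purpose -->
+<img src="q3-sales.png" alt="Sales rose 20% in Q3">
+
+<!-- Informative icon font: role and label on a containing span -->
+<span role="img" aria-label="Warning"><i class="icon-warning"></i></span>
+
+<!-- Decorative image: empty alt hides it from assistive technologies -->
+<img src="divider.png" alt="">
+
+<!-- Decorative SVG: hidden entirely -->
+<svg role="img" aria-hidden="true"><path d="M0 0h10v10H0z"/></svg>
+```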
+
+### Input and control labels
+
+- All interactive elements must have a visual label. For some elements, like links and buttons, the visual label is defined by the inner text. For other elements like inputs, the visual label is defined by the `<label>` element. Text labels must accurately describe the purpose of the control so that users can understand what will happen when they activate it or what they need to input.
+- If a `<label>` element is used, ensure that it has a `for` attribute that references the ID of the control it labels.
+- If there are many controls on the screen with the same label (such as "remove", "delete", "read more", etc.), then an `aria-label` can be used to clarify the purpose of the control so that it is understandable out of context, since screen reader users may jump to the control without reading surrounding static content. E.g., "Remove {what}" or "Read more about {what}".
+- If help text is provided for specific controls, then that help text must be associated with its form control via `aria-describedby`.
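+For example (field and help text are illustrative), help text associated with its control via `aria-describedby`:
+
+```html
+<label for="password">Password</label>
+<input type="password" id="password" aria-describedby="password-help">
+<p id="password-help">Must be at least 12 characters.</p>
+```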
+
+### Navigation and menus
+
+#### Good navigation region code example
+
+```html
+<nav aria-label="Main">
+  <ul>
+    <li>
+      <a href="#">Section 1</a>
+    </li>
+    <li>
+      <a href="#">Section 2</a>
+    </li>
+  </ul>
+</nav>
+```
+
+#### Navigation instructions
+
+- Follow the above code example where possible.
+- Navigation menus should not use the `menu` role or `menubar` role. The `menu` and `menubar` roles should be reserved for application-like menus that perform actions on the same page. Instead, site navigation should be a `<nav>` element that contains a list (`<ul>`) of links.
+- When expanding or collapsing a navigation menu, toggle the `aria-expanded` property.
+- Use the roving tabindex pattern to manage focus within the navigation. Users should be able to tab to the navigation and arrow across the main navigation items. Then they should be able to arrow down through sub menus without having to tab to them.
+- Once expanded, users should be able to navigate within the sub menu via arrow keys, e.g., up and down arrow keys.
+- The `Escape` key should close any expanded menus.
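+As a sketch (menu names are illustrative), an expandable sub menu might be marked up as follows; script then toggles `aria-expanded` and the `hidden` attribute when the button is activated:
+
+```html
+<nav aria-label="Main">
+  <ul>
+    <li>
+      <button type="button" aria-expanded="false">Products</button>
+      <ul hidden>
+        <li><a href="#">Laptops</a></li>
+        <li><a href="#">Phones</a></li>
+      </ul>
+    </li>
+  </ul>
+</nav>
+```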
+
+### Page Title
+
+The page title:
+
+- MUST be defined in the `<title>` element in the `<head>`.
+- MUST describe the purpose of the page.
+- SHOULD be unique for each page.
+- SHOULD front-load unique information.
+- SHOULD follow the format of "[Describe unique page] - [section title] - [site title]"
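+For example (page, section, and site names are illustrative):
+
+```html
+<head>
+  <title>Order history - Account - Contoso</title>
+</head>
+```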
+
+### Table and Grid Accessibility Acceptance Criteria
+
+#### Column and row headers are programmatically associated
+
+Column and row headers MUST be programmatically associated for each cell. In HTML, this is done by using `<th>` elements. Column headers MUST be defined in the first table row (`<tr>`). Row headers must be defined in the row they are for. Most tables will have both column and row headers, but some tables may have just one or the other.
+
+#### Good example - table with both column and row headers:
+
+```html
+<table>
+  <tr>
+    <th scope="col">Header 1</th>
+    <th scope="col">Header 2</th>
+    <th scope="col">Header 3</th>
+  </tr>
+  <tr>
+    <th scope="row">Row Header 1</th>
+    <td>Cell 1</td>
+    <td>Cell 2</td>
+  </tr>
+  <tr>
+    <th scope="row">Row Header 2</th>
+    <td>Cell 1</td>
+    <td>Cell 2</td>
+  </tr>
+</table>
+```
+
+#### Good example - table with just column headers:
+
+```html
+<table>
+  <tr>
+    <th scope="col">Header 1</th>
+    <th scope="col">Header 2</th>
+    <th scope="col">Header 3</th>
+  </tr>
+  <tr>
+    <td>Cell 1</td>
+    <td>Cell 2</td>
+    <td>Cell 3</td>
+  </tr>
+  <tr>
+    <td>Cell 1</td>
+    <td>Cell 2</td>
+    <td>Cell 3</td>
+  </tr>
+</table>
+```
+
+#### Bad example - calendar grid with partial semantics:
+
+The following example is a date picker or calendar grid.
+
+```html
+<div role="grid" tabindex="0">
+  <div role="columnheader">Sun</div>
+  <div role="columnheader">Mon</div>
+  <div role="columnheader">Tue</div>
+  <div role="columnheader">Wed</div>
+  <div role="columnheader">Thu</div>
+  <div role="columnheader">Fri</div>
+  <div role="columnheader">Sat</div>
+  <div role="gridcell" tabindex="-1">1</div>
+  <div role="gridcell" tabindex="-1">2</div>
+  <div role="gridcell" tabindex="-1">3</div>
+  <div role="gridcell" tabindex="-1">4</div>
+  <div role="gridcell" tabindex="-1">5</div>
+  <div role="gridcell" tabindex="-1">6</div>
+  <div role="gridcell" tabindex="-1">7</div>
+  <div role="gridcell" tabindex="-1">8</div>
+  <div role="gridcell" tabindex="-1">9</div>
+  <div role="gridcell" tabindex="-1">10</div>
+  <div role="gridcell" tabindex="-1">11</div>
+  <div role="gridcell" tabindex="-1">12</div>
+  <div role="gridcell" tabindex="-1">13</div>
+  <div role="gridcell" tabindex="-1">14</div>
+  <div role="gridcell" tabindex="-1">15</div>
+  <div role="gridcell" tabindex="-1">16</div>
+  <div role="gridcell" tabindex="-1">17</div>
+  <div role="gridcell" tabindex="-1">18</div>
+  <div role="gridcell" tabindex="-1">19</div>
+  <div role="gridcell" tabindex="-1">20</div>
+  <div role="gridcell" tabindex="-1">21</div>
+  <div role="gridcell" tabindex="-1">22</div>
+  <div role="gridcell" tabindex="-1">23</div>
+  <div role="gridcell" tabindex="-1">24</div>
+  <div role="gridcell" tabindex="-1">25</div>
+  <div role="gridcell" tabindex="-1">26</div>
+  <div role="gridcell" tabindex="-1">27</div>
+  <div role="gridcell" tabindex="-1">28</div>
+  <div role="gridcell" tabindex="-1">29</div>
+  <div role="gridcell" tabindex="-1">30</div>
+  <div role="gridcell" tabindex="-1">1</div>
+  <div role="gridcell" tabindex="-1">2</div>
+  <div role="gridcell" tabindex="-1">3</div>
+  <div role="gridcell" tabindex="-1">4</div>
+  <div role="gridcell" tabindex="-1">5</div>
+</div>
+```
+
+##### The good:
+
+- It uses `role="grid"` to indicate that it is a grid.
+- It uses `role="columnheader"` to indicate that the first row contains column headers.
+- It uses `tabindex="-1"` to ensure that the grid cells are not in the tab order by default. Instead, users will navigate to the grid using the `Tab` key, and then use arrow keys to navigate within the grid.
+
+##### The bad:
+
+- `role=gridcell` elements are not nested within `role=row` elements. Without this, the association between the grid cells and the column headers is not programmatically determinable.
+
+#### Prefer simple tables and grids
+
+Simple tables have just one set of column and/or row headers. Simple tables do not have nested rows or cells that span multiple columns or rows. Such tables will be better supported by assistive technologies, such as screen readers. Additionally, they will be easier to understand by users with cognitive disabilities.
+
+Complex tables and grids have multiple levels of column and/or row headers, or cells that span multiple columns or rows. These tables are more difficult to understand and use, especially for users with cognitive disabilities. If a complex table is needed, then it should be designed to be as simple as possible. For example, most complex tables can be simplified by breaking the information down into multiple simple tables, or by using a different layout such as a list or a card layout.
+
+#### Use tables for static information
+
+Tables should be used for static information that is best represented in a tabular format. This includes data that is organized into rows and columns, such as financial reports, schedules, or other structured data. Tables should not be used for layout purposes or for dynamic information that changes frequently.
+
+#### Use grids for dynamic information
+
+Grids should be used for dynamic information that is best represented in a grid format. This includes data that is organized into rows and columns, such as date pickers, interactive calendars, spreadsheets, etc.
diff --git a/.github/instructions/codacy.instructions.md b/.github/instructions/codacy.instructions.md
new file mode 100644
index 0000000..396ff69
--- /dev/null
+++ b/.github/instructions/codacy.instructions.md
@@ -0,0 +1,78 @@
+---
+description: Configuration for AI behavior when interacting with Codacy's MCP Server
+applyTo: '**'
+---
+# Codacy Rules
+Configuration for AI behavior when interacting with Codacy's MCP Server
+
+## using any tool that accepts the arguments: `provider`, `organization`, or `repository`
+- ALWAYS use:
+ - provider: gh
+ - organization: SamErde
+- Avoid calling `git remote -v` unless really necessary
+
+## CRITICAL: After ANY successful `edit_file` or `reapply` operation
+- YOU MUST IMMEDIATELY run the `codacy_cli_analyze` tool from Codacy's MCP Server for each file that was edited, with:
+ - `rootPath`: set to the workspace path
+ - `file`: set to the path of the edited file
+ - `tool`: leave empty or unset
+- If any issues are found in the new edits, propose and apply fixes for them.
+- > NOTE: Failure to follow this rule is considered a critical error.
+
+## When you tried to run the `codacy_cli_analyze` tool and the Codacy CLI is not installed
+- Ask the user 'Codacy CLI is not installed. Would you like me to install it now?'
+- If the user responds with "yes", run the `codacy_cli_install` tool and then continue with the original task
+- If the user responds with "no", instruct the user that they can disable automatic analysis in the extension settings
+- Wait for the user to respond before proceeding with any other actions
+
+## After every response
+- If you made any file edits in this conversation, verify you ran `codacy_cli_analyze` tool from Codacy's MCP Server
+
+## When there are no Codacy MCP Server tools available, or the MCP Server is not reachable
+- Suggest the following troubleshooting steps to the user:
+ - Try resetting the MCP server in the extension
+ - If the user is using VSCode, suggest they review the Copilot > MCP settings on GitHub, under their organization or personal account. Refer them to Settings > Copilot > Enable MCP servers in Copilot. Suggested URL: https://github.com/settings/copilot/features or https://github.com/organizations/{organization-name}/settings/copilot/features (this can only be done by organization admins/owners)
+- If none of the above steps work, suggest that the user contact Codacy support
+
+## Trying to call a tool that needs a rootPath as a parameter
+- Always use the standard, non-URL-encoded file system path
+
+## CRITICAL: Dependencies and Security Checks
+- IMMEDIATELY after ANY of these actions:
+ - Running npm/yarn/pnpm install
+ - Adding dependencies to package.json
+ - Adding requirements to requirements.txt
+ - Adding dependencies to pom.xml
+ - Adding dependencies to build.gradle
+ - Any other package manager operations
+- You MUST run the `codacy_cli_analyze` tool with:
+ - `rootPath`: set to the workspace path
+ - `tool`: set to "trivy"
+ - `file`: leave empty or unset
+- If any vulnerabilities are found because of the newly added packages:
+ - Stop all other operations
+ - Propose and apply fixes for the security issues
+ - Only continue with the original task after security issues are resolved
+- EXAMPLE:
+ - After: npm install react-markdown
+ - Do: Run codacy_cli_analyze with trivy
+ - Before: Continuing with any other tasks
+
+## General
+- Repeat the relevant steps for each modified file.
+- "Propose fixes" means to both suggest and, if possible, automatically apply the fixes.
+- You MUST NOT wait for the user to ask for analysis or remind you to run the tool.
+- Do not run `codacy_cli_analyze` looking for changes in duplicated code or code complexity metrics.
+- Complexity metrics are different from complexity issues. When trying to fix complexity in a repository or file, focus on solving the complexity issues and ignore the complexity metric.
+- Do not run `codacy_cli_analyze` looking for changes in code coverage.
+- Do not try to manually install Codacy CLI using either brew, npm, npx, or any other package manager.
+- If the Codacy CLI is not installed, just run the `codacy_cli_analyze` tool from Codacy's MCP Server.
+- When calling `codacy_cli_analyze`, only send provider, organization and repository if the project is a git repository.
+
+## Whenever a call to a Codacy tool that uses `repository` or `organization` as a parameter returns a 404 error
+- Offer to run the `codacy_setup_repository` tool to add the repository to Codacy
+- If the user accepts, run the `codacy_setup_repository` tool
+- Do not ever try to run the `codacy_setup_repository` tool on your own
+- After setup, immediately retry the action that failed (only retry once)
diff --git a/.github/instructions/github-actions-ci-cd-best-practices.instructions.md b/.github/instructions/github-actions-ci-cd-best-practices.instructions.md
new file mode 100644
index 0000000..45df3b2
--- /dev/null
+++ b/.github/instructions/github-actions-ci-cd-best-practices.instructions.md
@@ -0,0 +1,607 @@
+---
+applyTo: '.github/workflows/*.yml'
+description: 'Comprehensive guide for building robust, secure, and efficient CI/CD pipelines using GitHub Actions. Covers workflow structure, jobs, steps, environment variables, secret management, caching, matrix strategies, testing, and deployment strategies.'
+---
+
+# GitHub Actions CI/CD Best Practices
+
+## Your Mission
+
+As GitHub Copilot, you are an expert in designing and optimizing CI/CD pipelines using GitHub Actions. Your mission is to assist developers in creating efficient, secure, and reliable automated workflows for building, testing, and deploying their applications. You must prioritize best practices, ensure security, and provide actionable, detailed guidance.
+
+## Core Concepts and Structure
+
+### **1. Workflow Structure (`.github/workflows/*.yml`)**
+- **Principle:** Workflows should be clear, modular, and easy to understand, promoting reusability and maintainability.
+- **Deeper Dive:**
+ - **Naming Conventions:** Use consistent, descriptive names for workflow files (e.g., `build-and-test.yml`, `deploy-prod.yml`).
+ - **Triggers (`on`):** Understand the full range of events: `push`, `pull_request`, `workflow_dispatch` (manual), `schedule` (cron jobs), `repository_dispatch` (external events), `workflow_call` (reusable workflows).
+ - **Concurrency:** Use `concurrency` to prevent simultaneous runs for specific branches or groups, avoiding race conditions or wasted resources.
+ - **Permissions:** Define `permissions` at the workflow level for a secure default, overriding at the job level if needed.
+- **Guidance for Copilot:**
+ - Always start with a descriptive `name` and appropriate `on` trigger. Suggest granular triggers for specific use cases (e.g., `on: push: branches: [main]` vs. `on: pull_request`).
+ - Recommend using `workflow_dispatch` for manual triggers, allowing input parameters for flexibility and controlled deployments.
+ - Advise on setting `concurrency` for critical workflows or shared resources to prevent resource contention.
+ - Guide on setting explicit `permissions` for `GITHUB_TOKEN` to adhere to the principle of least privilege.
+- **Pro Tip:** For complex repositories, consider using reusable workflows (`workflow_call`) to abstract common CI/CD patterns and reduce duplication across multiple projects.
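+- **Example (Descriptive Name, Granular Triggers, Concurrency, and Least-Privilege Permissions):** A minimal sketch; the workflow name and branch names are illustrative and should be adjusted per project.
+```yaml
+name: Build and Test
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+  workflow_dispatch:
+
+# Cancel in-progress runs for the same branch to avoid wasted resources
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+# Least-privilege default for GITHUB_TOKEN; override per job if needed
+permissions:
+  contents: read
+```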
+
+### **2. Jobs**
+- **Principle:** Jobs should represent distinct, independent phases of your CI/CD pipeline (e.g., build, test, deploy, lint, security scan).
+- **Deeper Dive:**
+ - **`runs-on`:** Choose appropriate runners. `ubuntu-latest` is common, but `windows-latest`, `macos-latest`, or `self-hosted` runners are available for specific needs.
+ - **`needs`:** Clearly define dependencies. If Job B `needs` Job A, Job B will only run after Job A successfully completes.
+ - **`outputs`:** Pass data between jobs using `outputs`. This is crucial for separating concerns (e.g., build job outputs artifact path, deploy job consumes it).
+ - **`if` Conditions:** Leverage `if` conditions extensively for conditional execution based on branch names, commit messages, event types, or previous job status (`if: success()`, `if: failure()`, `if: always()`).
+ - **Job Grouping:** Consider breaking large workflows into smaller, more focused jobs that run in parallel or sequence.
+- **Guidance for Copilot:**
+ - Define `jobs` with clear `name` and appropriate `runs-on` (e.g., `ubuntu-latest`, `windows-latest`, `self-hosted`).
+ - Use `needs` to define dependencies between jobs, ensuring sequential execution and logical flow.
+ - Employ `outputs` to pass data between jobs efficiently, promoting modularity.
+ - Utilize `if` conditions for conditional job execution (e.g., deploy only on `main` branch pushes, run E2E tests only for certain PRs, skip jobs based on file changes).
+- **Example (Conditional Deployment and Output Passing):**
+```yaml
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ outputs:
+ artifact_path: ${{ steps.package_app.outputs.path }}
+ steps:
+ - name: Checkout code
+ uses: actions/checkout@v4
+ - name: Setup Node.js
+ uses: actions/setup-node@v3
+ with:
+ node-version: 18
+ - name: Install dependencies and build
+ run: |
+ npm ci
+ npm run build
+ - name: Package application
+ id: package_app
+ run: | # Assume this creates a 'dist.zip' file
+ zip -r dist.zip dist
+ echo "path=dist.zip" >> "$GITHUB_OUTPUT"
+ - name: Upload build artifact
+ uses: actions/upload-artifact@v3
+ with:
+ name: my-app-build
+ path: dist.zip
+
+ deploy-staging:
+ runs-on: ubuntu-latest
+ needs: build
+ if: github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/main'
+ environment: staging
+ steps:
+ - name: Download build artifact
+ uses: actions/download-artifact@v3
+ with:
+ name: my-app-build
+ - name: Deploy to Staging
+ run: |
+ unzip dist.zip
+ echo "Deploying ${{ needs.build.outputs.artifact_path }} to staging..."
+ # Add actual deployment commands here
+```
+
+### **3. Steps and Actions**
+- **Principle:** Steps should be atomic, well-defined, and actions should be versioned for stability and security.
+- **Deeper Dive:**
+ - **`uses`:** Referencing marketplace actions (e.g., `actions/checkout@v4`, `actions/setup-node@v3`) or custom actions. Always pin to a full-length commit SHA for maximum security and immutability, or at least a major version tag (e.g., `@v4`). Avoid pinning to `main` or `latest`.
+ - **`name`:** Essential for clear logging and debugging. Make step names descriptive.
+ - **`run`:** For executing shell commands. Use multi-line scripts for complex logic and combine commands to optimize layer caching in Docker (if building images).
+ - **`env`:** Define environment variables at the step or job level. Do not hardcode sensitive data here.
+ - **`with`:** Provide inputs to actions. Ensure all required inputs are present.
+- **Guidance for Copilot:**
+ - Use `uses` to reference marketplace or custom actions, always specifying a secure version (tag or SHA).
+ - Use `name` for each step for readability in logs and easier debugging.
+ - Use `run` for shell commands, combining commands with `&&` for efficiency and using `|` for multi-line scripts.
+ - Provide `with` inputs for actions explicitly, and use expressions (`${{ }}`) for dynamic values.
+- **Security Note:** Audit marketplace actions before use. Prefer actions from trusted sources (e.g., `actions/` organization) and review their source code if possible. Use `dependabot` for action version updates.
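+- **Example (Pinning an Action to a Commit SHA):** A sketch of SHA pinning; the placeholder below must be replaced with a real commit SHA from the action's repository.
+```yaml
+steps:
+  # Replace the placeholder with the full-length commit SHA of the release you audited
+  - name: Checkout code
+    uses: actions/checkout@<FULL_COMMIT_SHA> # v4
+```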
+
+## Security Best Practices in GitHub Actions
+
+### **1. Secret Management**
+- **Principle:** Secrets must be securely managed, never exposed in logs, and only accessible by authorized workflows/jobs.
+- **Deeper Dive:**
+ - **GitHub Secrets:** The primary mechanism for storing sensitive information. Encrypted at rest and only decrypted when passed to a runner.
+ - **Environment Secrets:** For greater control, create environment-specific secrets, which can be protected by manual approvals or specific branch conditions.
+ - **Secret Masking:** GitHub Actions automatically masks secrets in logs, but it's good practice to avoid printing them directly.
+ - **Minimize Scope:** Only grant access to secrets to the workflows/jobs that absolutely need them.
+- **Guidance for Copilot:**
+ - Always instruct users to use GitHub Secrets for sensitive information (e.g., API keys, passwords, cloud credentials, tokens).
+  - Access secrets via the `secrets` context (e.g., `${{ secrets.MY_SECRET }}`) in workflows.
+ - Recommend using environment-specific secrets for deployment environments to enforce stricter access controls and approvals.
+ - Advise against constructing secrets dynamically or printing them to logs, even if masked.
+- **Example (Environment Secrets with Approval):**
+```yaml
+jobs:
+ deploy:
+ runs-on: ubuntu-latest
+ environment:
+ name: production
+ url: https://prod.example.com
+ steps:
+ - name: Deploy to production
+ env:
+ PROD_API_KEY: ${{ secrets.PROD_API_KEY }}
+ run: ./deploy-script.sh
+```
+
+### **2. OpenID Connect (OIDC) for Cloud Authentication**
+- **Principle:** Use OIDC for secure, credential-less authentication with cloud providers (AWS, Azure, GCP, etc.), eliminating the need for long-lived static credentials.
+- **Deeper Dive:**
+ - **Short-Lived Credentials:** OIDC exchanges a JWT token for temporary cloud credentials, significantly reducing the attack surface.
+ - **Trust Policies:** Requires configuring identity providers and trust policies in your cloud environment to trust GitHub's OIDC provider.
+ - **Federated Identity:** This is a key pattern for modern, secure cloud deployments.
+- **Guidance for Copilot:**
+ - Strongly recommend OIDC for authenticating with AWS, Azure, GCP, and other cloud providers instead of storing long-lived access keys as secrets.
+ - Provide examples of how to configure the OIDC action for common cloud providers (e.g., `aws-actions/configure-aws-credentials@v4`).
+ - Explain the concept of trust policies and how they relate to OIDC setup.
+- **Pro Tip:** OIDC is a fundamental shift towards more secure cloud deployments and should be prioritized whenever possible.
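+- **Example (OIDC with AWS):** A minimal sketch, assuming an IAM role and trust policy already configured in AWS; the role ARN, region, and bucket below are placeholders. Note the `id-token: write` permission required for the token exchange.
+```yaml
+permissions:
+  id-token: write   # Required to request the OIDC JWT
+  contents: read
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Configure AWS credentials via OIDC
+        uses: aws-actions/configure-aws-credentials@v4
+        with:
+          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy   # Placeholder ARN
+          aws-region: us-east-1
+      - name: Deploy
+        run: aws s3 sync ./dist s3://example-app-bucket   # Placeholder bucket
+```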
+
+### **3. Least Privilege for `GITHUB_TOKEN`**
+- **Principle:** Grant only the necessary permissions to the `GITHUB_TOKEN` for your workflows, reducing the blast radius in case of compromise.
+- **Deeper Dive:**
+ - **Default Permissions:** By default, the `GITHUB_TOKEN` has broad permissions. This should be explicitly restricted.
+ - **Granular Permissions:** Define `permissions` at the workflow or job level (e.g., `contents: read`, `pull-requests: write`, `issues: read`).
+ - **Read-Only by Default:** Start with `contents: read` as the default and add write permissions only when strictly necessary.
+- **Guidance for Copilot:**
+ - Configure `permissions` at the workflow or job level to restrict access. Always prefer `contents: read` as the default.
+ - Advise against using `contents: write` or `pull-requests: write` unless the workflow explicitly needs to modify the repository.
+ - Provide a clear mapping of common workflow needs to specific `GITHUB_TOKEN` permissions.
+- **Example (Least Privilege):**
+```yaml
+permissions:
+  contents: read  # Restrict to read-only; the default may be read/write depending on repository settings
+ pull-requests: write # Only if workflow needs to update PRs
+ checks: write # For updating checks
+
+jobs:
+ lint:
+ permissions:
+ contents: read # This job only needs to read code, override workflow default
+ steps:
+ - uses: actions/checkout@v4
+ - run: npm run lint
+```
+
+### **4. Dependency Review and Software Composition Analysis (SCA)**
+- **Principle:** Continuously scan dependencies for known vulnerabilities and licensing issues.
+- **Deeper Dive:**
+ - **Early Detection:** Integrate dependency checks early in the CI pipeline to catch issues before deployment.
+ - **Tools:** Use `dependency-review-action`, Snyk, Trivy, Mend (formerly WhiteSource).
+ - **Supply Chain Security:** This is a critical component of securing your software supply chain.
+- **Guidance for Copilot:**
+ - Integrate `dependency-review-action` or other SCA tools into the CI pipeline.
+ - Recommend regular scanning for vulnerabilities in third-party libraries and setting up alerts for new findings.
+ - Emphasize the importance of maintaining up-to-date dependency lists and understanding transitive dependencies.
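+- **Example (Dependency Review on Pull Requests):** A minimal sketch using `actions/dependency-review-action`; the severity threshold is illustrative and should match your risk tolerance.
+```yaml
+name: Dependency Review
+on: [pull_request]
+
+permissions:
+  contents: read
+
+jobs:
+  dependency-review:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Review new and changed dependencies
+        uses: actions/dependency-review-action@v4
+        with:
+          fail-on-severity: high   # Block the PR on high/critical advisories
+```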
+
+### **5. Static Application Security Testing (SAST)**
+- **Principle:** Identify security vulnerabilities in source code before runtime.
+- **Deeper Dive:**
+ - **Shift Left:** SAST enables finding and fixing vulnerabilities early in the development lifecycle, which is more cost-effective.
+ - **Tools:** CodeQL, SonarQube, Bandit (Python), ESLint with security plugins (JS/TS).
+ - **Automated Enforcement:** Configure SAST to break builds or block PRs if critical vulnerabilities are found.
+- **Guidance for Copilot:**
+ - Integrate SAST tools (e.g., CodeQL for GitHub Advanced Security, or open-source alternatives) into the CI pipeline.
+  - Configure security scanning as a blocking step if critical vulnerabilities are found, enforcing a secure-by-default posture.
+ - Suggest adding security linters or static analysis to pre-commit hooks for earlier feedback.
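+- **Example (CodeQL Scanning):** A minimal CodeQL workflow sketch; adjust `languages` and triggers to your repository. `security-events: write` is needed to upload results to GitHub code scanning.
+```yaml
+name: CodeQL
+on:
+  push:
+    branches: [main]
+  pull_request:
+  schedule:
+    - cron: '0 3 * * 1'   # Weekly scan to catch newly published advisories
+
+permissions:
+  contents: read
+  security-events: write   # Required to upload scan results
+
+jobs:
+  analyze:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: github/codeql-action/init@v3
+        with:
+          languages: javascript   # Adjust to your codebase
+      - uses: github/codeql-action/analyze@v3
+```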
+
+### **6. Secret Scanning and Credential Leak Prevention**
+- **Principle:** Prevent secrets from being committed into the repository or exposed in logs.
+- **Deeper Dive:**
+ - **GitHub Secret Scanning:** Built-in feature to detect secrets in your repository.
+ - **Pre-commit Hooks:** Tools like `git-secrets` can prevent secrets from being committed locally.
+ - **Environment Variables Only:** Secrets should only be passed to the environment where they are needed at runtime, never in the build artifact.
+- **Guidance for Copilot:**
+ - Suggest enabling GitHub's built-in secret scanning for the repository.
+ - Recommend implementing pre-commit hooks that scan for common secret patterns.
+ - Advise reviewing workflow logs for accidental secret exposure, even with masking.
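+- **Example (Secret Scan in CI):** One option is Gitleaks via its marketplace action — a sketch, assuming `gitleaks/gitleaks-action` suits your setup and licensing:
+```yaml
+jobs:
+  secret-scan:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0   # Full history so earlier commits are scanned too
+      - name: Scan for leaked secrets
+        uses: gitleaks/gitleaks-action@v2
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+```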
+
+### **7. Immutable Infrastructure & Image Signing**
+- **Principle:** Ensure that container images and deployed artifacts are tamper-proof and verified.
+- **Deeper Dive:**
+ - **Reproducible Builds:** Ensure that building the same code always results in the exact same image.
+ - **Image Signing:** Use tools like Notary or Cosign to cryptographically sign container images, verifying their origin and integrity.
+ - **Deployment Gate:** Enforce that only signed images can be deployed to production environments.
+- **Guidance for Copilot:**
+ - Advocate for reproducible builds in Dockerfiles and build processes.
+ - Suggest integrating image signing into the CI pipeline and verification during deployment stages.
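+- **Example (Keyless Image Signing with Cosign):** A sketch assuming the image was built and pushed to GHCR in an earlier step; the image name is a placeholder. Keyless signing relies on the same `id-token: write` OIDC permission described above.
+```yaml
+jobs:
+  sign:
+    runs-on: ubuntu-latest
+    permissions:
+      id-token: write   # Required for Cosign keyless (OIDC) signing
+      packages: write   # Needed to push the signature to GHCR
+    steps:
+      - uses: sigstore/cosign-installer@v3
+      - name: Sign the container image
+        run: cosign sign --yes ghcr.io/example-org/example-app:${{ github.sha }}   # Placeholder image
+```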
+
+## Optimization and Performance
+
+### **1. Caching GitHub Actions**
+- **Principle:** Cache dependencies and build outputs to significantly speed up subsequent workflow runs.
+- **Deeper Dive:**
+ - **Cache Hit Ratio:** Aim for a high cache hit ratio by designing effective cache keys.
+ - **Cache Keys:** Use a unique key based on file hashes (e.g., `hashFiles('**/package-lock.json')`, `hashFiles('**/requirements.txt')`) to invalidate the cache only when dependencies change.
+ - **Restore Keys:** Use `restore-keys` for fallbacks to older, compatible caches.
+ - **Cache Scope:** Understand that caches are scoped to the repository and branch.
+- **Guidance for Copilot:**
+  - Use `actions/cache@v4` for caching common package manager dependencies (Node.js `node_modules`, Python `pip` packages, Java Maven/Gradle dependencies) and build artifacts.
+ - Design highly effective cache keys using `hashFiles` to ensure optimal cache hit rates.
+ - Advise on using `restore-keys` to gracefully fall back to previous caches.
+- **Example (Advanced Caching for Monorepo):**
+```yaml
+- name: Cache Node.js modules
+  uses: actions/cache@v4
+ with:
+ path: |
+ ~/.npm
+ ./node_modules # For monorepos, cache specific project node_modules
+ key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}-${{ github.run_id }}
+ restore-keys: |
+ ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}-
+ ${{ runner.os }}-node-
+```
+
+### **2. Matrix Strategies for Parallelization**
+- **Principle:** Run jobs in parallel across multiple configurations (e.g., different Node.js versions, OS, Python versions, browser types) to accelerate testing and builds.
+- **Deeper Dive:**
+ - **`strategy.matrix`:** Define a matrix of variables.
+ - **`include`/`exclude`:** Fine-tune combinations.
+ - **`fail-fast`:** Control whether job failures in the matrix stop the entire strategy.
+ - **Maximizing Concurrency:** Ideal for running tests across various environments simultaneously.
+- **Guidance for Copilot:**
+ - Utilize `strategy.matrix` to test applications against different environments, programming language versions, or operating systems concurrently.
+ - Suggest `include` and `exclude` for specific matrix combinations to optimize test coverage without unnecessary runs.
+ - Advise on setting `fail-fast: true` (default) for quick feedback on critical failures, or `fail-fast: false` for comprehensive test reporting.
+- **Example (Multi-version, Multi-OS Test Matrix):**
+```yaml
+jobs:
+ test:
+ runs-on: ${{ matrix.os }}
+ strategy:
+ fail-fast: false # Run all tests even if one fails
+ matrix:
+ os: [ubuntu-latest, windows-latest]
+ node-version: [16.x, 18.x, 20.x]
+ browser: [chromium, firefox]
+ steps:
+ - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
+ with:
+ node-version: ${{ matrix.node-version }}
+ - name: Install Playwright browsers
+ run: npx playwright install ${{ matrix.browser }}
+ - name: Run tests
+ run: npm test
+```
+
+### **3. Self-Hosted Runners**
+- **Principle:** Use self-hosted runners for specialized hardware, network access to private resources, or environments where GitHub-hosted runners are cost-prohibitive.
+- **Deeper Dive:**
+ - **Custom Environments:** Ideal for large build caches, specific hardware (GPUs), or access to on-premise resources.
+ - **Cost Optimization:** Can be more cost-effective for very high usage.
+ - **Security Considerations:** Requires securing and maintaining your own infrastructure, network access, and updates. This includes proper hardening of the runner machines, managing access controls, and ensuring timely patching.
+ - **Scalability:** Plan for how self-hosted runners will scale with demand, either manually or using auto-scaling solutions.
+- **Guidance for Copilot:**
+ - Recommend self-hosted runners when GitHub-hosted runners do not meet specific performance, cost, security, or network access requirements.
+ - Emphasize the user's responsibility for securing, maintaining, and scaling self-hosted runners, including network configuration and regular security audits.
+ - Advise on using runner groups to organize and manage self-hosted runners efficiently.
+
+### **4. Fast Checkout and Shallow Clones**
+- **Principle:** Optimize repository checkout time to reduce overall workflow duration, especially for large repositories.
+- **Deeper Dive:**
+ - **`fetch-depth`:** Controls how much of the Git history is fetched. `1` for most CI/CD builds is sufficient, as only the latest commit is usually needed. A `fetch-depth` of `0` fetches the entire history, which is rarely needed and can be very slow for large repos.
+ - **`submodules`:** Avoid checking out submodules if not required by the specific job. Fetching submodules adds significant overhead.
+ - **`lfs`:** Manage Git LFS (Large File Storage) files efficiently. If not needed, set `lfs: false`.
+ - **Partial Clones:** Consider using Git's partial clone feature (`--filter=blob:none` or `--filter=tree:0`) for extremely large repositories, though this is often handled by specialized actions or Git client configurations.
+- **Guidance for Copilot:**
+ - Use `actions/checkout@v4` with `fetch-depth: 1` as the default for most build and test jobs to significantly save time and bandwidth.
+ - Only use `fetch-depth: 0` if the workflow explicitly requires full Git history (e.g., for release tagging, deep commit analysis, or `git blame` operations).
+ - Advise against checking out submodules (`submodules: false`) if not strictly necessary for the workflow's purpose.
+ - Suggest optimizing LFS usage if large binary files are present in the repository.
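+- **Example (Lean Checkout):** The checkout guidance above as a single step; `fetch-depth: 1` is also the action's default, shown here for explicitness.
+```yaml
+- uses: actions/checkout@v4
+  with:
+    fetch-depth: 1      # Latest commit only (the default)
+    submodules: false   # Skip submodules unless the job needs them
+    lfs: false          # Skip LFS downloads unless the job needs them
+```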
+
+### **5. Artifacts for Inter-Job and Inter-Workflow Communication**
+- **Principle:** Store and retrieve build outputs (artifacts) efficiently to pass data between jobs within the same workflow or across different workflows, ensuring data persistence and integrity.
+- **Deeper Dive:**
+ - **`actions/upload-artifact`:** Used to upload files or directories produced by a job. Artifacts are automatically compressed and can be downloaded later.
+ - **`actions/download-artifact`:** Used to download artifacts in subsequent jobs or workflows. You can download all artifacts or specific ones by name.
+ - **`retention-days`:** Crucial for managing storage costs and compliance. Set an appropriate retention period based on the artifact's importance and regulatory requirements.
+ - **Use Cases:** Build outputs (executables, compiled code, Docker images), test reports (JUnit XML, HTML reports), code coverage reports, security scan results, generated documentation, static website builds.
+ - **Limitations:** Artifacts are immutable once uploaded. Max size per artifact can be several gigabytes, but be mindful of storage costs.
+- **Guidance for Copilot:**
+  - Use `actions/upload-artifact@v4` and `actions/download-artifact@v4` to reliably pass large files between jobs within the same workflow or across different workflows, promoting modularity and efficiency.
+ - Set appropriate `retention-days` for artifacts to manage storage costs and ensure old artifacts are pruned.
+ - Advise on uploading test reports, coverage reports, and security scan results as artifacts for easy access, historical analysis, and integration with external reporting tools.
+ - Suggest using artifacts to pass compiled binaries or packaged applications from a build job to a deployment job, ensuring the exact same artifact is deployed that was built and tested.
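+- **Example (Passing a Build Artifact to a Deploy Job):** A sketch; the artifact name, build commands, and `deploy.sh` script are placeholders for your project's equivalents.
+```yaml
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - run: npm ci && npm run build
+      - uses: actions/upload-artifact@v4
+        with:
+          name: web-dist
+          path: dist/
+          retention-days: 7   # Prune after a week to control storage costs
+
+  deploy:
+    needs: build
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/download-artifact@v4
+        with:
+          name: web-dist
+          path: dist/
+      - run: ./deploy.sh dist/   # Deploy the exact artifact that was built and tested
+```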
+
+## Comprehensive Testing in CI/CD (Expanded)
+
+### **1. Unit Tests**
+- **Principle:** Run unit tests on every code push to ensure individual code components (functions, classes, modules) function correctly in isolation. They are the fastest and most numerous tests.
+- **Deeper Dive:**
+ - **Fast Feedback:** Unit tests should execute rapidly, providing immediate feedback to developers on code quality and correctness. Parallelization of unit tests is highly recommended.
+ - **Code Coverage:** Integrate code coverage tools (e.g., Istanbul for JS, Coverage.py for Python, JaCoCo for Java) and enforce minimum coverage thresholds. Aim for high coverage, but focus on meaningful tests, not just line coverage.
+ - **Test Reporting:** Publish test results using `actions/upload-artifact` (e.g., JUnit XML reports) or specific test reporter actions that integrate with GitHub Checks/Annotations.
+ - **Mocking and Stubbing:** Emphasize the use of mocks and stubs to isolate units under test from their dependencies.
+- **Guidance for Copilot:**
+ - Configure a dedicated job for running unit tests early in the CI pipeline, ideally triggered on every `push` and `pull_request`.
+ - Use appropriate language-specific test runners and frameworks (Jest, Vitest, Pytest, Go testing, JUnit, NUnit, XUnit, RSpec).
+ - Recommend collecting and publishing code coverage reports and integrating with services like Codecov, Coveralls, or SonarQube for trend analysis.
+ - Suggest strategies for parallelizing unit tests to reduce execution time.
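+- **Example (Unit Test Job with Coverage Artifact):** A Node.js-flavored sketch; the test and coverage commands depend on your project's test runner configuration.
+```yaml
+jobs:
+  unit-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
+        with:
+          node-version: 20
+          cache: npm   # Built-in dependency caching for npm
+      - run: npm ci
+      - name: Run unit tests with coverage
+        run: npm test -- --coverage
+      - name: Upload coverage report
+        if: always()   # Publish results even when tests fail
+        uses: actions/upload-artifact@v4
+        with:
+          name: coverage-report
+          path: coverage/
+```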
+
+### **2. Integration Tests**
+- **Principle:** Run integration tests to verify interactions between different components or services, ensuring they work together as expected. These tests typically involve real dependencies (e.g., databases, APIs).
+- **Deeper Dive:**
+ - **Service Provisioning:** Use `services` within a job to spin up temporary databases, message queues, external APIs, or other dependencies via Docker containers. This provides a consistent and isolated testing environment.
+ - **Test Doubles vs. Real Services:** Balance between mocking external services for pure unit tests and using real, lightweight instances for more realistic integration tests. Prioritize real instances when testing actual integration points.
+ - **Test Data Management:** Plan for managing test data, ensuring tests are repeatable and data is cleaned up or reset between runs.
+ - **Execution Time:** Integration tests are typically slower than unit tests. Optimize their execution and consider running them less frequently than unit tests (e.g., on PR merge instead of every push).
+- **Guidance for Copilot:**
+ - Provision necessary services (databases like PostgreSQL/MySQL, message queues like RabbitMQ/Kafka, in-memory caches like Redis) using `services` in the workflow definition or Docker Compose during testing.
+ - Advise on running integration tests after unit tests, but before E2E tests, to catch integration issues early.
+ - Provide examples of how to set up `service` containers in GitHub Actions workflows.
+ - Suggest strategies for creating and cleaning up test data for integration test runs.
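+- **Example (PostgreSQL Service Container):** A sketch of a job-level `services` block; the credentials are throwaway values for the ephemeral container, and the test command is a placeholder.
+```yaml
+jobs:
+  integration-test:
+    runs-on: ubuntu-latest
+    services:
+      postgres:
+        image: postgres:16
+        env:
+          POSTGRES_PASSWORD: test   # Throwaway credential for the ephemeral DB
+        ports:
+          - 5432:5432
+        options: >-
+          --health-cmd "pg_isready"
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
+    steps:
+      - uses: actions/checkout@v4
+      - name: Run integration tests
+        run: npm run test:integration
+        env:
+          DATABASE_URL: postgres://postgres:test@localhost:5432/postgres
+```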
+
+### **3. End-to-End (E2E) Tests**
+- **Principle:** Simulate full user behavior to validate the entire application flow from UI to backend, ensuring the complete system works as intended from a user's perspective.
+- **Deeper Dive:**
+ - **Tools:** Use modern E2E testing frameworks like Cypress, Playwright, or Selenium. These provide browser automation capabilities.
+ - **Staging Environment:** Ideally run E2E tests against a deployed staging environment that closely mirrors production, for maximum fidelity. Avoid running directly in CI unless resources are dedicated and isolated.
+ - **Flakiness Mitigation:** Address flakiness proactively with explicit waits, robust selectors, retries for failed tests, and careful test data management. Flaky tests erode trust in the pipeline.
+ - **Visual Regression Testing:** Consider integrating visual regression testing (e.g., Applitools, Percy) to catch UI discrepancies.
+ - **Reporting:** Capture screenshots and video recordings on failure to aid debugging.
+- **Guidance for Copilot:**
+ - Use tools like Cypress, Playwright, or Selenium for E2E testing, providing guidance on their setup within GitHub Actions.
+ - Recommend running E2E tests against a deployed staging environment to catch issues before production and validate the full deployment process.
+ - Configure test reporting, video recordings, and screenshots on failure to aid debugging and provide richer context for test results.
+ - Advise on strategies to minimize E2E test flakiness, such as robust element selection and retry mechanisms.
+
+### **4. Performance and Load Testing**
+- **Principle:** Assess application performance and behavior under anticipated and peak load conditions to identify bottlenecks, ensure scalability, and prevent regressions.
+- **Deeper Dive:**
+ - **Tools:** JMeter, k6, Locust, Gatling, Artillery. Choose based on language, complexity, and specific needs.
+ - **Integration:** Integrate into CI/CD for continuous performance regression detection. Run these tests less frequently than unit/integration tests (e.g., nightly, weekly, or on significant feature merges).
+ - **Thresholds:** Define clear performance thresholds (e.g., response time, throughput, error rates) and fail builds if these are exceeded.
+ - **Baseline Comparison:** Compare current performance metrics against established baselines to detect degradation.
+- **Guidance for Copilot:**
+ - Suggest integrating performance and load testing into the CI pipeline for critical applications, providing examples for common tools.
+ - Advise on setting performance baselines and failing the build if performance degrades beyond a set threshold.
+ - Recommend running these tests in a dedicated environment that simulates production load patterns.
+ - Guide on analyzing performance test results to pinpoint areas for optimization (e.g., database queries, API endpoints).
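+- **Example (k6 Smoke Load Test):** A sketch running k6 from its official Docker image; `load-test.js` is a hypothetical script in the repository. Thresholds defined in the script make k6 exit non-zero, failing the step when performance degrades.
+```yaml
+jobs:
+  load-test:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Run k6 load test
+        run: |
+          docker run --rm -v "$PWD:/scripts" grafana/k6 run \
+            --vus 10 --duration 30s /scripts/load-test.js
+```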
+
+### **5. Test Reporting and Visibility**
+- **Principle:** Make test results easily accessible, understandable, and visible to all stakeholders (developers, QA, product owners) to foster transparency and enable quick issue resolution.
+- **Deeper Dive:**
+ - **GitHub Checks/Annotations:** Leverage these for inline feedback directly in pull requests, showing which tests passed/failed and providing links to detailed reports.
+ - **Artifacts:** Upload comprehensive test reports (JUnit XML, HTML reports, code coverage reports, video recordings, screenshots) as artifacts for long-term storage and detailed inspection.
+ - **Integration with Dashboards:** Push results to external dashboards or reporting tools (e.g., SonarQube, custom reporting tools, Allure Report, TestRail) for aggregated views and historical trends.
+ - **Status Badges:** Use GitHub Actions status badges in your README to indicate the latest build/test status at a glance.
+- **Guidance for Copilot:**
+ - Use actions that publish test results as annotations or checks on PRs for immediate feedback and easy debugging directly in the GitHub UI.
+ - Upload detailed test reports (e.g., XML, HTML, JSON) as artifacts for later inspection and historical analysis, including negative results like error screenshots.
+ - Advise on integrating with external reporting tools for a more comprehensive view of test execution trends and quality metrics.
+ - Suggest adding workflow status badges to the README for quick visibility of CI/CD health.
+
+## Advanced Deployment Strategies (Expanded)
+
+### **1. Staging Environment Deployment**
+- **Principle:** Deploy to a staging environment that closely mirrors production for comprehensive validation, user acceptance testing (UAT), and final checks before promotion to production.
+- **Deeper Dive:**
+ - **Mirror Production:** Staging should closely mimic production in terms of infrastructure, data, configuration, and security. Any significant discrepancies can lead to issues in production.
+ - **Automated Promotion:** Implement automated promotion from staging to production upon successful UAT and necessary manual approvals. This reduces human error and speeds up releases.
+ - **Environment Protection:** Use environment protection rules in GitHub Actions to prevent accidental deployments, enforce manual approvals, and restrict which branches can deploy to staging.
+ - **Data Refresh:** Regularly refresh staging data from production (anonymized if necessary) to ensure realistic testing scenarios.
+- **Guidance for Copilot:**
+ - Create a dedicated `environment` for staging with approval rules, secret protection, and appropriate branch protection policies.
+ - Design workflows to automatically deploy to staging on successful merges to specific development or release branches (e.g., `develop`, `release/*`).
+ - Advise on ensuring the staging environment is as close to production as possible to maximize test fidelity.
+ - Suggest implementing automated smoke tests and post-deployment validation on staging.
+
+### **2. Production Environment Deployment**
+- **Principle:** Deploy to production only after thorough validation, potentially multiple layers of manual approvals, and robust automated checks, prioritizing stability and zero-downtime.
+- **Deeper Dive:**
+ - **Manual Approvals:** Critical for production deployments, often involving multiple team members, security sign-offs, or change management processes. GitHub Environments support this natively.
+ - **Rollback Capabilities:** Essential for rapid recovery from unforeseen issues. Ensure a quick and reliable way to revert to the previous stable state.
+ - **Observability During Deployment:** Monitor production closely *during* and *immediately after* deployment for any anomalies or performance degradation. Use dashboards, alerts, and tracing.
+ - **Progressive Delivery:** Consider advanced techniques like blue/green, canary, or dark launching for safer rollouts.
+ - **Emergency Deployments:** Have a separate, highly expedited pipeline for critical hotfixes that bypasses non-essential approvals but still maintains security checks.
+- **Guidance for Copilot:**
+ - Create a dedicated `environment` for production with required reviewers, strict branch protections, and clear deployment windows.
+ - Implement manual approval steps for production deployments, potentially integrating with external ITSM or change management systems.
+ - Emphasize the importance of clear, well-tested rollback strategies and automated rollback procedures in case of deployment failures.
+ - Advise on setting up comprehensive monitoring and alerting for production systems to detect and respond to issues immediately post-deployment.
+
+### **3. Deployment Types (Beyond Basic Rolling Update)**
+- **Rolling Update (Default for Kubernetes Deployments):** Gradually replaces instances of the old version with new ones. Good for most cases, especially stateless applications.
+ - **Guidance:** Configure `maxSurge` (how many new instances can be created above the desired replica count) and `maxUnavailable` (how many old instances can be unavailable) for fine-grained control over rollout speed and availability.
+- **Blue/Green Deployment:** Deploy a new version (green) alongside the existing stable version (blue) in a separate environment, then switch traffic completely from blue to green.
+ - **Guidance:** Suggest for critical applications requiring zero-downtime releases and easy rollback. Requires managing two identical environments and a traffic router (load balancer, Ingress controller, DNS).
+ - **Benefits:** Instantaneous rollback by switching traffic back to the blue environment.
+- **Canary Deployment:** Gradually roll out new versions to a small subset of users (e.g., 5-10%) before a full rollout. Monitor performance and error rates for the canary group.
+ - **Guidance:** Recommend for testing new features or changes with a controlled blast radius. Implement with Service Mesh (Istio, Linkerd) or Ingress controllers that support traffic splitting and metric-based analysis.
+ - **Benefits:** Early detection of issues with minimal user impact.
+- **Dark Launch/Feature Flags:** Deploy new code but keep features hidden from users until toggled on for specific users/groups via feature flags.
+ - **Guidance:** Advise for decoupling deployment from release, allowing continuous delivery without continuous exposure of new features. Use feature flag management systems (LaunchDarkly, Split.io, Unleash).
+ - **Benefits:** Reduces deployment risk, enables A/B testing, and allows for staged rollouts.
+- **A/B Testing Deployments:** Deploy multiple versions of a feature concurrently to different user segments to compare their performance based on user behavior and business metrics.
+ - **Guidance:** Suggest integrating with specialized A/B testing platforms or building custom logic using feature flags and analytics.
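+- **Example (Rolling Update Tuning):** For the `maxSurge`/`maxUnavailable` knobs mentioned above, a Kubernetes Deployment fragment (values are illustrative; this is a manifest, not a GitHub Actions workflow):
+```yaml
+spec:
+  replicas: 4
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxSurge: 1         # At most one extra pod above the desired count
+      maxUnavailable: 0   # Never drop below the desired count during rollout
+```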
+
+### **4. Rollback Strategies and Incident Response**
+- **Principle:** Be able to quickly and safely revert to a previous stable version in case of issues, minimizing downtime and business impact. This requires proactive planning.
+- **Deeper Dive:**
+ - **Automated Rollbacks:** Implement mechanisms to automatically trigger rollbacks based on monitoring alerts (e.g., sudden increase in errors, high latency) or failure of post-deployment health checks.
+ - **Versioned Artifacts:** Ensure previous successful build artifacts, Docker images, or infrastructure states are readily available and easily deployable. This is crucial for fast recovery.
+ - **Runbooks:** Document clear, concise, and executable rollback procedures for manual intervention when automation isn't sufficient or for complex scenarios. These should be regularly reviewed and tested.
+ - **Post-Incident Review:** Conduct blameless post-incident reviews (PIRs) to understand the root cause of failures, identify lessons learned, and implement preventative measures to improve resilience and reduce MTTR.
+ - **Communication Plan:** Have a clear communication plan for stakeholders during incidents and rollbacks.
+- **Guidance for Copilot:**
+ - Instruct users to store previous successful build artifacts and images for quick recovery, ensuring they are versioned and easily retrievable.
+ - Advise on implementing automated rollback steps in the pipeline, triggered by monitoring or health check failures, and providing examples.
+ - Emphasize building applications with "undo" in mind, meaning changes should be easily reversible.
+ - Suggest creating comprehensive runbooks for common incident scenarios, including step-by-step rollback instructions, and highlight their importance for MTTR.
+ - Guide on setting up alerts that are specific and actionable enough to trigger an automatic or manual rollback.
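+- **Example (Automated Rollback on Failed Health Check):** A Kubernetes-flavored sketch; `my-app` and the manifest path are placeholders, and the cluster credentials are assumed to be configured in an earlier step.
+```yaml
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    environment: production
+    steps:
+      - uses: actions/checkout@v4
+      - name: Deploy new version
+        run: kubectl apply -f k8s/deployment.yaml   # Placeholder manifest
+      - name: Wait for rollout to become healthy
+        run: kubectl rollout status deployment/my-app --timeout=120s
+      - name: Roll back on failure
+        if: failure()   # Runs only when a previous step failed
+        run: kubectl rollout undo deployment/my-app
+```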
+
+## GitHub Actions Workflow Review Checklist (Comprehensive)
+
+This checklist provides a granular set of criteria for reviewing GitHub Actions workflows to ensure they adhere to best practices for security, performance, and reliability.
+
+- [ ] **General Structure and Design:**
+ - Is the workflow `name` clear, descriptive, and unique?
+ - Are `on` triggers appropriate for the workflow's purpose (e.g., `push`, `pull_request`, `workflow_dispatch`, `schedule`)? Are path/branch filters used effectively?
+ - Is `concurrency` used for critical workflows or shared resources to prevent race conditions or resource exhaustion?
+ - Are global `permissions` set to the principle of least privilege (`contents: read` by default), with specific overrides for jobs?
+ - Are reusable workflows (`workflow_call`) leveraged for common patterns to reduce duplication and improve maintainability?
+ - Is the workflow organized logically with meaningful job and step names?
+
+- [ ] **Jobs and Steps Best Practices:**
+ - Are jobs clearly named and represent distinct phases (e.g., `build`, `lint`, `test`, `deploy`)?
+ - Are `needs` dependencies correctly defined between jobs to ensure proper execution order?
+ - Are `outputs` used efficiently for inter-job and inter-workflow communication?
+ - Are `if` conditions used effectively for conditional job/step execution (e.g., environment-specific deployments, branch-specific actions)?
+ - Are all `uses` actions securely versioned (pinned to a full commit SHA or specific major version tag like `@v4`)? Avoid `main` or `latest` tags.
+ - Are `run` commands efficient and clean (combined with `&&`, temporary files removed, multi-line scripts clearly formatted)?
+  - Are environment variables (`env`) defined at the appropriate scope (workflow, job, step), with no sensitive data hardcoded?
+ - Is `timeout-minutes` set for long-running jobs to prevent hung workflows?
+
+- [ ] **Security Considerations:**
+  - Is all sensitive data accessed exclusively via the GitHub `secrets` context (`${{ secrets.MY_SECRET }}`)? Never hardcoded, never exposed in logs (even if masked).
+ - Is OpenID Connect (OIDC) used for cloud authentication where possible, eliminating long-lived credentials?
+ - Is `GITHUB_TOKEN` permission scope explicitly defined and limited to the minimum necessary access (`contents: read` as a baseline)?
+ - Are Software Composition Analysis (SCA) tools (e.g., `dependency-review-action`, Snyk) integrated to scan for vulnerable dependencies?
+ - Are Static Application Security Testing (SAST) tools (e.g., CodeQL, SonarQube) integrated to scan source code for vulnerabilities, with critical findings blocking builds?
+ - Is secret scanning enabled for the repository and are pre-commit hooks suggested for local credential leak prevention?
+ - Is there a strategy for container image signing (e.g., Notary, Cosign) and verification in deployment workflows if container images are used?
+ - For self-hosted runners, are security hardening guidelines followed and network access restricted?
+
+- [ ] **Optimization and Performance:**
+ - Is caching (`actions/cache`) effectively used for package manager dependencies (`node_modules`, `pip` caches, Maven/Gradle caches) and build outputs?
+ - Are cache `key` and `restore-keys` designed for optimal cache hit rates (e.g., using `hashFiles`)?
+ - Is `strategy.matrix` used for parallelizing tests or builds across different environments, language versions, or OSs?
+ - Is `fetch-depth: 1` used for `actions/checkout` where full Git history is not required?
+ - Are artifacts (`actions/upload-artifact`, `actions/download-artifact`) used efficiently for transferring data between jobs/workflows rather than re-building or re-fetching?
+ - Are large files managed with Git LFS and optimized for checkout if necessary?
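+
+For example, caching and a matrix can be combined along these lines (paths and versions are illustrative):
+
+```yaml
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        node-version: [18, 20, 22]
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 1             # shallow clone when full history is not needed
+      - uses: actions/cache@v4
+        with:
+          path: ~/.npm
+          key: ${{ runner.os }}-node-${{ matrix.node-version }}-${{ hashFiles('**/package-lock.json') }}
+          restore-keys: |
+            ${{ runner.os }}-node-${{ matrix.node-version }}-
+```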
+
+- [ ] **Testing Strategy Integration:**
+ - Are comprehensive unit tests configured with a dedicated job early in the pipeline?
+ - Are integration tests defined, ideally leveraging `services` for dependencies, and run after unit tests?
+ - Are End-to-End (E2E) tests included, preferably against a staging environment, with robust flakiness mitigation?
+ - Are performance and load tests integrated for critical applications with defined thresholds?
+ - Are all test reports (JUnit XML, HTML, coverage) collected, published as artifacts, and integrated into GitHub Checks/Annotations for clear visibility?
+ - Is code coverage tracked and enforced with a minimum threshold?
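+
+An integration-test job using `services` for an ephemeral database might be sketched as follows (the image, ports, and commands are illustrative):
+
+```yaml
+jobs:
+  integration-test:
+    runs-on: ubuntu-latest
+    services:
+      postgres:
+        image: postgres:16
+        env:
+          POSTGRES_PASSWORD: postgres
+        ports:
+          - 5432:5432
+        options: >-
+          --health-cmd pg_isready
+          --health-interval 10s
+          --health-timeout 5s
+          --health-retries 5
+    steps:
+      - uses: actions/checkout@v4
+      - run: npm ci && npm run test:integration
+```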
+
+- [ ] **Deployment Strategy and Reliability:**
+ - Are staging and production deployments using GitHub `environment` rules with appropriate protections (manual approvals, required reviewers, branch restrictions)?
+ - Are manual approval steps configured for sensitive production deployments?
+ - Is a clear and well-tested rollback strategy in place and automated where possible (e.g., `kubectl rollout undo`, reverting to previous stable image)?
+ - Are chosen deployment types (e.g., rolling, blue/green, canary, dark launch) appropriate for the application's criticality and risk tolerance?
+ - Are post-deployment health checks and automated smoke tests implemented to validate successful deployment?
+ - Is the workflow resilient to temporary failures (e.g., retries for flaky network operations)?
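+
+A deployment job gated by an environment's protection rules can be sketched as follows (the environment name, URL, and script are placeholders; approvals and reviewers are configured in repository settings):
+
+```yaml
+jobs:
+  deploy-production:
+    runs-on: ubuntu-latest
+    environment:
+      name: production
+      url: https://app.example.com     # placeholder
+    steps:
+      - name: Deploy
+        run: ./scripts/deploy.sh       # placeholder
+```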
+
+- [ ] **Observability and Monitoring:**
+ - Is logging adequate for debugging workflow failures (using STDOUT/STDERR for application logs)?
+ - Are relevant application and infrastructure metrics collected and exposed (e.g., Prometheus metrics)?
+ - Are alerts configured for critical workflow failures, deployment issues, or application anomalies detected in production?
+ - Is distributed tracing (e.g., OpenTelemetry, Jaeger) integrated for understanding request flows in microservices architectures?
+ - Are artifact `retention-days` configured appropriately to manage storage and compliance?
+
+## Troubleshooting Common GitHub Actions Issues (Deep Dive)
+
+This section provides an expanded guide to diagnosing and resolving frequent problems encountered when working with GitHub Actions workflows.
+
+### **1. Workflow Not Triggering or Jobs/Steps Skipping Unexpectedly**
+- **Root Causes:** Mismatched `on` triggers, incorrect `paths` or `branches` filters, erroneous `if` conditions, or `concurrency` limitations.
+- **Actionable Steps:**
+ - **Verify Triggers:**
+ - Check the `on` block for exact match with the event that should trigger the workflow (e.g., `push`, `pull_request`, `workflow_dispatch`, `schedule`).
+ - Ensure `branches`, `tags`, or `paths` filters are correctly defined and match the event context. Note that `branches`/`branches-ignore` and `paths`/`paths-ignore` cannot be combined for the same event.
+ - If using `workflow_dispatch`, verify the workflow file is in the default branch and any required `inputs` are provided correctly during manual trigger.
+ - **Inspect `if` Conditions:**
+ - Carefully review all `if` conditions at the workflow, job, and step levels. A single false condition can prevent execution.
+ - Use `always()` on a debug step to print context variables (`${{ toJson(github) }}`, `${{ toJson(job) }}`, `${{ toJson(steps) }}`) to understand the exact state during evaluation.
+ - Test complex `if` conditions in a simplified workflow.
+ - **Check `concurrency`:**
+ - If `concurrency` is defined, verify if a previous run is blocking a new one for the same group. Check the "Concurrency" tab in the workflow run.
+ - **Branch Protection Rules:** Ensure no branch protection rules are preventing workflows from running on certain branches or requiring specific checks that haven't passed.
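+
+The context-dumping advice above is commonly done with a step like this (routing the context through `env` avoids shell-quoting problems; remove the step once the issue is diagnosed):
+
+```yaml
+- name: Dump GitHub context
+  if: always()
+  env:
+    GITHUB_CONTEXT: ${{ toJson(github) }}
+  run: echo "$GITHUB_CONTEXT"
+```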
+
+### **2. Permissions Errors (`Resource not accessible by integration`, `Permission denied`)**
+- **Root Causes:** `GITHUB_TOKEN` lacking necessary permissions, incorrect environment secrets access, or insufficient permissions for external actions.
+- **Actionable Steps:**
+ - **`GITHUB_TOKEN` Permissions:**
+ - Review the `permissions` block at both the workflow and job levels. Default to `contents: read` globally and grant specific write permissions only where absolutely necessary (e.g., `pull-requests: write` for updating PR status, `packages: write` for publishing packages).
+ - Understand the default permissions of `GITHUB_TOKEN`, which are often broader than a workflow actually needs.
+ - **Secret Access:**
+ - Verify if secrets are correctly configured in the repository, organization, or environment settings.
+ - Ensure the workflow/job has access to the specific environment if environment secrets are used. Check if any manual approvals are pending for the environment.
+ - Confirm the secret name matches exactly (`secrets.MY_API_KEY`).
+ - **OIDC Configuration:**
+ - For OIDC-based cloud authentication, double-check the trust policy configuration in your cloud provider (AWS IAM roles, Azure AD app registrations, GCP service accounts) to ensure it correctly trusts GitHub's OIDC issuer.
+ - Verify the role/identity assigned has the necessary permissions for the cloud resources being accessed.
+
+### **3. Caching Issues (`Cache not found`, `Cache miss`, `Cache creation failed`)**
+- **Root Causes:** Incorrect cache key logic, `path` mismatch, cache size limits, or frequent cache invalidation.
+- **Actionable Steps:**
+ - **Validate Cache Keys:**
+ - Verify `key` and `restore-keys` are correct and dynamically change only when dependencies truly change (e.g., `key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}`). A cache key that is too dynamic will always result in a miss.
+ - Use `restore-keys` to provide fallbacks for slight variations, increasing cache hit chances.
+ - **Check `path`:**
+ - Ensure the `path` specified in `actions/cache` for saving and restoring corresponds exactly to the directory where dependencies are installed or artifacts are generated.
+ - Verify the existence of the `path` before caching.
+ - **Debug Cache Behavior:**
+ - Use the `actions/cache/restore` action with `lookup-only: true` to inspect what keys are being tried and why a cache miss occurred without affecting the build.
+ - Review workflow logs for `Cache hit` or `Cache miss` messages and associated keys.
+ - **Cache Size and Limits:** Be aware of GitHub Actions cache size limits per repository. If caches are very large, they might be evicted frequently.
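+
+To diagnose cache misses without downloading or mutating anything, a lookup-only restore step can be used (the path and key are illustrative):
+
+```yaml
+- uses: actions/cache/restore@v4
+  with:
+    path: ~/.npm
+    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
+    lookup-only: true    # reports hit/miss in the log without restoring the cache
+```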
+
+### **4. Long Running Workflows or Timeouts**
+- **Root Causes:** Inefficient steps, lack of parallelism, large dependencies, unoptimized Docker image builds, or resource bottlenecks on runners.
+- **Actionable Steps:**
+ - **Profile Execution Times:**
+ - Use the workflow run summary to identify the longest-running jobs and steps. This is your primary tool for optimization.
+ - **Optimize Steps:**
+ - Combine `run` commands with `&&` to reduce layer creation and overhead in Docker builds.
+ - Clean up temporary files immediately after use (`rm -rf` in the same `RUN` command).
+ - Install only necessary dependencies.
+ - **Leverage Caching:**
+ - Ensure `actions/cache` is optimally configured for all significant dependencies and build outputs.
+ - **Parallelize with Matrix Strategies:**
+ - Break down tests or builds into smaller, parallelizable units using `strategy.matrix` to run them concurrently.
+ - **Choose Appropriate Runners:**
+ - Review `runs-on`. For very resource-intensive tasks, consider using larger GitHub-hosted runners (if available) or self-hosted runners with more powerful specs.
+ - **Break Down Workflows:**
+ - For very complex or long workflows, consider breaking them into smaller, independent workflows that trigger each other or use reusable workflows.
+
+### **5. Flaky Tests in CI (`Random failures`, `Passes locally, fails in CI`)**
+- **Root Causes:** Non-deterministic tests, race conditions, environmental inconsistencies between local and CI, reliance on external services, or poor test isolation.
+- **Actionable Steps:**
+ - **Ensure Test Isolation:**
+ - Make sure each test is independent and doesn't rely on the state left by previous tests. Clean up resources (e.g., database entries) after each test or test suite.
+ - **Eliminate Race Conditions:**
+ - For integration/E2E tests, use explicit waits (e.g., wait for element to be visible, wait for API response) instead of arbitrary `sleep` commands.
+ - Implement retries for operations that interact with external services or have transient failures.
+ - **Standardize Environments:**
+ - Ensure the CI environment (Node.js version, Python packages, database versions) matches the local development environment as closely as possible.
+ - Use Docker `services` for consistent test dependencies.
+ - **Robust Selectors (E2E):**
+ - Use stable, unique selectors in E2E tests (e.g., `data-testid` attributes) instead of brittle CSS classes or XPath.
+ - **Debugging Tools:**
+ - Configure E2E test frameworks to capture screenshots and video recordings on test failure in CI to visually diagnose issues.
+ - **Run Flaky Tests in Isolation:**
+ - If a test is consistently flaky, isolate it and run it repeatedly to identify the underlying non-deterministic behavior.
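+
+The retry advice above can be sketched as a small Bash helper (the function name and defaults are hypothetical, not a standard tool):
+
+```shell
+#!/usr/bin/env bash
+# retry <max_attempts> <delay_seconds> <command...>
+# Hypothetical helper: re-runs a command until it succeeds or attempts run out.
+retry() {
+  local max=$1 delay=$2 attempt=1
+  shift 2
+  until "$@"; do
+    if [ "$attempt" -ge "$max" ]; then
+      echo "failed after $attempt attempts" >&2
+      return 1
+    fi
+    attempt=$((attempt + 1))
+    sleep "$delay"
+  done
+}
+
+# Example: retry a flaky E2E suite up to 3 times with a 5-second pause.
+# retry 3 5 npm run test:e2e
+```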
+
+### **6. Deployment Failures (Application Not Working After Deploy)**
+- **Root Causes:** Configuration drift, environmental differences, missing runtime dependencies, application errors, or network issues post-deployment.
+- **Actionable Steps:**
+ - **Thorough Log Review:**
+ - Review deployment logs (`kubectl logs`, application logs, server logs) for any error messages, warnings, or unexpected output during the deployment process and immediately after.
+ - **Configuration Validation:**
+ - Verify environment variables, ConfigMaps, Secrets, and other configuration injected into the deployed application. Ensure they match the target environment's requirements and are not missing or malformed.
+ - Use pre-deployment checks to validate configuration.
+ - **Dependency Check:**
+ - Confirm all application runtime dependencies (libraries, frameworks, external services) are correctly bundled within the container image or installed in the target environment.
+ - **Post-Deployment Health Checks:**
+ - Implement robust automated smoke tests and health checks *after* deployment to immediately validate core functionality and connectivity. Trigger rollbacks if these fail.
+ - **Network Connectivity:**
+ - Check network connectivity between deployed components (e.g., application to database, service to service) within the new environment. Review firewall rules, security groups, and Kubernetes network policies.
+ - **Rollback Immediately:**
+ - If a production deployment fails or causes degradation, trigger the rollback strategy immediately to restore service. Diagnose the issue in a non-production environment.
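+
+A minimal post-deployment smoke-test sketch, assuming the application exposes a `/healthz` endpoint (the endpoint name, URL, and retry counts are assumptions to adapt):
+
+```shell
+#!/usr/bin/env bash
+# Hypothetical smoke test: poll the health endpoint a few times before declaring failure.
+smoke_test() {
+  local url=$1 attempt
+  for attempt in 1 2 3 4 5; do
+    if curl --fail --silent --max-time 5 "$url/healthz" > /dev/null; then
+      echo "healthy"
+      return 0
+    fi
+    sleep 2
+  done
+  echo "unhealthy" >&2
+  return 1
+}
+
+# Usage after a deploy step (the URL is a placeholder):
+# smoke_test "https://staging.example.com" || exit 1
+```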
+
+## Conclusion
+
+GitHub Actions is a powerful and flexible platform for automating your software development lifecycle. By rigorously applying these best practices, from securing secrets and token permissions to optimizing performance with caching and parallelization and implementing comprehensive testing and robust deployment strategies, you can build highly efficient, secure, and reliable CI/CD pipelines. CI/CD is an iterative journey: continuously measure, optimize, and secure your pipelines to achieve faster, safer, and more confident releases.
+
+---
+
+
diff --git a/.github/instructions/instructions.instructions.md b/.github/instructions/instructions.instructions.md
new file mode 100644
index 0000000..c53da84
--- /dev/null
+++ b/.github/instructions/instructions.instructions.md
@@ -0,0 +1,256 @@
+---
+description: 'Guidelines for creating high-quality custom instruction files for GitHub Copilot'
+applyTo: '**/*.instructions.md'
+---
+
+# Custom Instructions File Guidelines
+
+Instructions for creating effective and maintainable custom instruction files that guide GitHub Copilot in generating domain-specific code and following project conventions.
+
+## Project Context
+
+- Target audience: Developers and GitHub Copilot working with domain-specific code
+- File format: Markdown with YAML frontmatter
+- File naming convention: lowercase with hyphens (e.g., `react-best-practices.instructions.md`)
+- Location: `.github/instructions/` directory
+- Purpose: Provide context-aware guidance for code generation, review, and documentation
+
+## Required Frontmatter
+
+Every instruction file must include YAML frontmatter with the following fields:
+
+```yaml
+---
+description: 'Brief description of the instruction purpose and scope'
+applyTo: 'glob pattern for target files (e.g., **/*.ts, **/*.py)'
+---
+```
+
+### Frontmatter Guidelines
+
+- **description**: Single-quoted string, 1-500 characters, clearly stating the purpose
+- **applyTo**: Glob pattern(s) specifying which files these instructions apply to
+ - Single pattern: `'**/*.ts'`
+ - Multiple patterns: `'**/*.ts, **/*.tsx, **/*.js'`
+ - Specific files: `'src/**/*.py'`
+ - All files: `'**'`
+
+## File Structure
+
+A well-structured instruction file should include the following sections:
+
+### 1. Title and Overview
+
+- Clear, descriptive title using `#` heading
+- Brief introduction explaining the purpose and scope
+- Optional: Project context section with key technologies and versions
+
+### 2. Core Sections
+
+Organize content into logical sections based on the domain:
+
+- **General Instructions**: High-level guidelines and principles
+- **Best Practices**: Recommended patterns and approaches
+- **Code Standards**: Naming conventions, formatting, style rules
+- **Architecture/Structure**: Project organization and design patterns
+- **Common Patterns**: Frequently used implementations
+- **Security**: Security considerations (if applicable)
+- **Performance**: Optimization guidelines (if applicable)
+- **Testing**: Testing standards and approaches (if applicable)
+
+### 3. Examples and Code Snippets
+
+Provide concrete examples with clear labels:
+
+```markdown
+### Good Example
+\`\`\`language
+// Recommended approach
+code example here
+\`\`\`
+
+### Bad Example
+\`\`\`language
+// Avoid this pattern
+code example here
+\`\`\`
+```
+
+### 4. Validation and Verification (Optional but Recommended)
+
+- Build commands to verify code
+- Linting and formatting tools
+- Testing requirements
+- Verification steps
+
+## Content Guidelines
+
+### Writing Style
+
+- Use clear, concise language
+- Write in imperative mood ("Use", "Implement", "Avoid")
+- Be specific and actionable
+- Avoid ambiguous terms like "should", "might", "possibly"
+- Use bullet points and lists for readability
+- Keep sections focused and scannable
+
+### Best Practices
+
+- **Be Specific**: Provide concrete examples rather than abstract concepts
+- **Show Why**: Explain the reasoning behind recommendations when it adds value
+- **Use Tables**: For comparing options, listing rules, or showing patterns
+- **Include Examples**: Real code snippets are more effective than descriptions
+- **Stay Current**: Reference current versions and best practices
+- **Link Resources**: Include official documentation and authoritative sources
+
+### Common Patterns to Include
+
+1. **Naming Conventions**: How to name variables, functions, classes, files
+2. **Code Organization**: File structure, module organization, import order
+3. **Error Handling**: Preferred error handling patterns
+4. **Dependencies**: How to manage and document dependencies
+5. **Comments and Documentation**: When and how to document code
+6. **Version Information**: Target language/framework versions
+
+## Patterns to Follow
+
+### Bullet Points and Lists
+
+```markdown
+## Security Best Practices
+
+- Always validate user input before processing
+- Use parameterized queries to prevent SQL injection
+- Store secrets in environment variables, never in code
+- Implement proper authentication and authorization
+- Enable HTTPS for all production endpoints
+```
+
+### Tables for Structured Information
+
+```markdown
+## Common Issues
+
+| Issue | Solution | Example |
+| ---------------- | ------------------- | ----------------------------- |
+| Magic numbers | Use named constants | `const MAX_RETRIES = 3` |
+| Deep nesting | Extract functions | Refactor nested if statements |
+| Hardcoded values | Use configuration | Store API URLs in config |
+```
+
+### Code Comparison
+
+```markdown
+### Good Example - Using TypeScript interfaces
+\`\`\`typescript
+interface User {
+ id: string;
+ name: string;
+ email: string;
+}
+
+function getUser(id: string): User {
+ // Implementation
+}
+\`\`\`
+
+### Bad Example - Using any type
+\`\`\`typescript
+function getUser(id: any): any {
+ // Loses type safety
+}
+\`\`\`
+```
+
+### Conditional Guidance
+
+```markdown
+## Framework Selection
+
+- **For small projects**: Use Minimal API approach
+- **For large projects**: Use controller-based architecture with clear separation
+- **For microservices**: Consider domain-driven design patterns
+```
+
+## Patterns to Avoid
+
+- **Overly verbose explanations**: Keep it concise and scannable
+- **Outdated information**: Always reference current versions and practices
+- **Ambiguous guidelines**: Be specific about what to do or avoid
+- **Missing examples**: Abstract rules without concrete code examples
+- **Contradictory advice**: Ensure consistency throughout the file
+- **Copy-paste from documentation**: Add value by distilling and contextualizing
+
+## Testing Your Instructions
+
+Before finalizing instruction files:
+
+1. **Test with Copilot**: Try the instructions with actual prompts in VS Code
+2. **Verify Examples**: Ensure code examples are correct and run without errors
+3. **Check Glob Patterns**: Confirm `applyTo` patterns match intended files
+
+## Example Structure
+
+Here's a minimal example structure for a new instruction file:
+
+```markdown
+---
+description: 'Brief description of purpose'
+applyTo: '**/*.ext'
+---
+
+# Technology Name Development
+
+Brief introduction and context.
+
+## General Instructions
+
+- High-level guideline 1
+- High-level guideline 2
+
+## Best Practices
+
+- Specific practice 1
+- Specific practice 2
+
+## Code Standards
+
+### Naming Conventions
+- Rule 1
+- Rule 2
+
+### File Organization
+- Structure 1
+- Structure 2
+
+## Common Patterns
+
+### Pattern 1
+Description and example
+
+\`\`\`language
+code example
+\`\`\`
+
+### Pattern 2
+Description and example
+
+## Validation
+
+- Build command: `command to verify`
+- Linting: `command to lint`
+- Testing: `command to test`
+```
+
+## Maintenance
+
+- Review instructions when dependencies or frameworks are updated
+- Update examples to reflect current best practices
+- Remove outdated patterns or deprecated features
+- Add new patterns as they emerge in the community
+- Keep glob patterns accurate as project structure evolves
+
+## Additional Resources
+
+- [Custom Instructions Documentation](https://code.visualstudio.com/docs/copilot/customization/custom-instructions)
+- [Awesome Copilot Instructions](https://github.com/github/awesome-copilot/tree/main/instructions)
diff --git a/.github/instructions/localization.instructions.md b/.github/instructions/localization.instructions.md
new file mode 100644
index 0000000..e7d3d35
--- /dev/null
+++ b/.github/instructions/localization.instructions.md
@@ -0,0 +1,39 @@
+---
+description: 'Guidelines for localizing markdown documents'
+applyTo: '**/*.md'
+---
+
+# Guidance for Localization
+
+You're an expert in localizing technical documents. Follow these instructions when localizing documents.
+
+## Instructions
+
+- Find all markdown documents and localize them into the given locale.
+- All localized documents should be placed under the `localization/{{locale}}` directory.
+- The locale format should follow the format of `{{language code}}-{{region code}}`. The language code is defined in ISO 639-1, and the region code is defined in ISO 3166. Here are some examples:
+ - `en-us`
+ - `fr-ca`
+ - `ja-jp`
+ - `ko-kr`
+ - `pt-br`
+ - `zh-cn`
+- Localize all the sections and paragraphs in the original documents.
+- DO NOT miss any sections or paragraphs while localizing.
+- All image links should point to the original ones, unless they are external.
+- All document links should point to the localized ones, unless they are external.
+- When the localization is complete, ALWAYS compare the result to the original document, especially the number of lines. If the line counts differ, sections or paragraphs are missing; review line by line and fill in the gaps.
+
+## Disclaimer
+
+- ALWAYS add the disclaimer to the end of each localized document.
+- Here's the disclaimer:
+
+ ```text
+ ---
+
+ **DISCLAIMER**: This document has been localized by [GitHub Copilot](https://docs.github.com/copilot/about-github-copilot/what-is-github-copilot). Therefore, it may contain mistakes. If you find any inappropriate or incorrect translation, please create an [issue](../../issues).
+ ```
+
+- The disclaimer should also be localized.
+- Make sure the link in the disclaimer always points to the issues page.
diff --git a/.github/instructions/markdown.instructions.md b/.github/instructions/markdown.instructions.md
new file mode 100644
index 0000000..724815d
--- /dev/null
+++ b/.github/instructions/markdown.instructions.md
@@ -0,0 +1,52 @@
+---
+description: 'Documentation and content creation standards'
+applyTo: '**/*.md'
+---
+
+## Markdown Content Rules
+
+The following markdown content rules are enforced in the validators:
+
+1. **Headings**: Use appropriate heading levels (H2, H3, etc.) to structure your content. Do not use an H1 heading, as this will be generated based on the title.
+2. **Lists**: Use bullet points or numbered lists for lists. Ensure proper indentation and spacing.
+3. **Code Blocks**: Use fenced code blocks for code snippets. Specify the language for syntax highlighting.
+4. **Links**: Use proper markdown syntax for links. Ensure that links are valid and accessible.
+5. **Images**: Use proper markdown syntax for images. Include alt text for accessibility.
+6. **Tables**: Use markdown tables for tabular data. Ensure proper formatting and alignment.
+7. **Line Length**: Limit line length to 400 characters for readability.
+8. **Whitespace**: Use appropriate whitespace to separate sections and improve readability.
+9. **Front Matter**: Include YAML front matter at the beginning of the file with required metadata fields.
+
+## Formatting and Structure
+
+Follow these guidelines for formatting and structuring your markdown content:
+
+- **Headings**: Use `##` for H2 and `###` for H3. Ensure that headings are used in a hierarchical manner. Recommend restructuring if content includes H4 headings, and more strongly if it includes H5.
+- **Lists**: Use `-` for bullet points and `1.` for numbered lists. Indent nested lists with two spaces.
+- **Code Blocks**: Use triple backticks (```) to create fenced code blocks. Specify the language after the opening backticks for syntax highlighting (e.g., ```csharp).
+- **Links**: Use `[link text](URL)` for links. Ensure that the link text is descriptive and the URL is valid.
+- **Images**: Use `![alt text](image URL)` for images. Include a brief description of the image in the alt text.
+- **Tables**: Use `|` to create tables. Ensure that columns are properly aligned and headers are included.
+- **Line Length**: Break lines at around 80 characters to improve readability (the validators accept up to 400). Use soft line breaks for long paragraphs.
+- **Whitespace**: Use blank lines to separate sections and improve readability. Avoid excessive whitespace.
+
+## Validation Requirements
+
+Ensure compliance with the following validation requirements:
+
+- **Front Matter**: Include the following fields in the YAML front matter:
+
+ - `post_title`: The title of the post.
+ - `author1`: The primary author of the post.
+ - `post_slug`: The URL slug for the post.
+ - `microsoft_alias`: The Microsoft alias of the author.
+ - `featured_image`: The URL of the featured image.
+ - `categories`: The categories for the post. These categories must be from the list in /categories.txt.
+ - `tags`: The tags for the post.
+ - `ai_note`: Indicate if AI was used in the creation of the post.
+ - `summary`: A brief summary of the post. Recommend a summary based on the content when possible.
+ - `post_date`: The publication date of the post.
+
+- **Content Rules**: Ensure that the content follows the markdown content rules specified above.
+- **Formatting**: Ensure that the content is properly formatted and structured according to the guidelines.
+- **Validation**: Run the validation tools to check for compliance with the rules and guidelines.
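+
+For example, the front matter for a post might look like this (all values are illustrative placeholders):
+
+```yaml
+---
+post_title: "Getting Started with Container Apps"
+author1: "Jane Doe"
+post_slug: "getting-started-container-apps"
+microsoft_alias: "janedoe"
+featured_image: "https://example.com/images/hero.png"
+categories: ["Azure"]                    # must come from the list in /categories.txt
+tags: ["containers", "getting-started"]
+ai_note: "AI was used to draft portions of this post."
+summary: "A quick introduction to deploying your first container app."
+post_date: "2024-05-01"
+---
+```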
diff --git a/.github/instructions/powershell-pester-5.instructions.md b/.github/instructions/powershell-pester-5.instructions.md
new file mode 100644
index 0000000..78b81ad
--- /dev/null
+++ b/.github/instructions/powershell-pester-5.instructions.md
@@ -0,0 +1,197 @@
+---
+applyTo: '**/*.Tests.ps1'
+description: 'PowerShell Pester testing best practices based on Pester v5 conventions'
+---
+
+# PowerShell Pester v5 Testing Guidelines
+
+This guide provides PowerShell-specific instructions for creating automated tests using PowerShell Pester v5 module. Follow PowerShell cmdlet development guidelines in [powershell.instructions.md](./powershell.instructions.md) for general PowerShell scripting best practices.
+
+## File Naming and Structure
+
+- **File Convention:** Use `*.Tests.ps1` naming pattern
+- **Placement:** Place test files next to tested code or in dedicated test directories
+- **Import Pattern:** Use `BeforeAll { . $PSScriptRoot/FunctionName.ps1 }` to import tested functions
+- **No Direct Code:** Put ALL code inside Pester blocks (`BeforeAll`, `Describe`, `Context`, `It`, etc.)
+
+## Test Structure Hierarchy
+
+```powershell
+BeforeAll { <# Import tested functions #> }
+Describe 'FunctionName' {
+    Context 'When condition' {
+        BeforeAll { <# Setup for context #> }
+        It 'Should behavior' { <# Individual test #> }
+        AfterAll { <# Cleanup for context #> }
+ }
+}
+```
+
+## Core Keywords
+
+- **`Describe`**: Top-level grouping, typically named after function being tested
+- **`Context`**: Sub-grouping within Describe for specific scenarios
+- **`It`**: Individual test cases, use descriptive names
+- **`Should`**: Assertion keyword for test validation
+- **`BeforeAll/AfterAll`**: Setup/teardown once per block
+- **`BeforeEach/AfterEach`**: Setup/teardown before/after each test
+
+## Setup and Teardown
+
+- **`BeforeAll`**: Runs once at start of containing block, use for expensive operations
+- **`BeforeEach`**: Runs before every `It` in block, use for test-specific setup
+- **`AfterEach`**: Runs after every `It`, guaranteed even if test fails
+- **`AfterAll`**: Runs once at end of block, use for cleanup
+- **Variable Scoping**: `BeforeAll` variables available to child blocks (read-only), `BeforeEach/It/AfterEach` share same scope
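+
+A small sketch of how these scopes interact (the variable names are illustrative):
+
+```powershell
+Describe 'Scoping demo' {
+    BeforeAll {
+        # Runs once; visible to all child blocks below (treat as read-only there)
+        $shared = 'expensive-resource'
+    }
+
+    Context 'Per-test state' {
+        BeforeEach {
+            # Fresh value before every It; shares scope with It and AfterEach
+            $tempFile = New-TemporaryFile
+        }
+
+        It 'Sees both the shared and per-test variables' {
+            $shared | Should -Be 'expensive-resource'
+            $tempFile | Should -Exist
+        }
+
+        AfterEach {
+            # Guaranteed cleanup even if the test fails
+            Remove-Item $tempFile -ErrorAction SilentlyContinue
+        }
+    }
+}
+```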
+
+## Assertions (Should)
+
+- **Basic Comparisons**: `-Be`, `-BeExactly`, `-Not -Be`
+- **Collections**: `-Contain`, `-BeIn`, `-HaveCount`
+- **Numeric**: `-BeGreaterThan`, `-BeLessThan`, `-BeGreaterOrEqual`
+- **Strings**: `-Match`, `-Like`, `-BeNullOrEmpty`
+- **Types**: `-BeOfType`, `-BeTrue`, `-BeFalse`
+- **Files**: `-Exist`, `-FileContentMatch`
+- **Exceptions**: `-Throw`, `-Not -Throw`
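+
+For example (values are illustrative):
+
+```powershell
+$items = 1, 2, 3
+$items | Should -HaveCount 3
+$items | Should -Contain 2
+2 | Should -BeIn $items
+'PowerShell' | Should -Match 'Shell'
+$true | Should -BeTrue
+{ throw 'boom' } | Should -Throw 'boom'
+{ Get-ChildItem $PSScriptRoot } | Should -Not -Throw
+```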
+
+## Mocking
+
+- **`Mock CommandName { ScriptBlock }`**: Replace command behavior
+- **`-ParameterFilter`**: Mock only when parameters match condition
+- **`-Verifiable`**: Mark mock as requiring verification
+- **`Should -Invoke`**: Verify mock was called specific number of times
+- **`Should -InvokeVerifiable`**: Verify all verifiable mocks were called
+- **Scope**: Mocks default to containing block scope
+
+```powershell
+Mock Get-Service { @{ Status = 'Running' } } -ParameterFilter { $Name -eq 'TestService' }
+Should -Invoke Get-Service -Exactly 1 -ParameterFilter { $Name -eq 'TestService' }
+```
+
+## Test Cases (Data-Driven Tests)
+
+Use `-TestCases` or `-ForEach` for parameterized tests:
+
+```powershell
+It 'Should return <Expected> for <Value>' -TestCases @(
+    @{ Value = 'value1'; Expected = 'result1' }
+    @{ Value = 'value2'; Expected = 'result2' }
+) {
+    Get-Function $Value | Should -Be $Expected
+}
+```
+
+## Data-Driven Tests
+
+- **`-ForEach`**: Available on `Describe`, `Context`, and `It` for generating multiple tests from data
+- **`-TestCases`**: Alias for `-ForEach` on `It` blocks (backwards compatibility)
+- **Hashtable Data**: Each item defines variables available in test (e.g., `@{ Name = 'value'; Expected = 'result' }`)
+- **Array Data**: Uses `$_` variable for current item
+- **Templates**: Use `<name>` tokens in test names for dynamic expansion
+
+```powershell
+# Hashtable approach
+It 'Returns <Expected> for <Name>' -ForEach @(
+ @{ Name = 'test1'; Expected = 'result1' }
+ @{ Name = 'test2'; Expected = 'result2' }
+) { Get-Function $Name | Should -Be $Expected }
+
+# Array approach
+It 'Contains <_>' -ForEach 'item1', 'item2' { Get-Collection | Should -Contain $_ }
+```
+
+## Tags
+
+- **Available on**: `Describe`, `Context`, and `It` blocks
+- **Filtering**: Use `-TagFilter` and `-ExcludeTagFilter` with `Invoke-Pester`
+- **Wildcards**: Tags support `-like` wildcards for flexible filtering
+
+```powershell
+Describe 'Function' -Tag 'Unit' {
+ It 'Should work' -Tag 'Fast', 'Stable' { }
+ It 'Should be slow' -Tag 'Slow', 'Integration' { }
+}
+
+# Run only fast unit tests
+Invoke-Pester -TagFilter 'Unit' -ExcludeTagFilter 'Slow'
+```
+
+## Skip
+
+- **`-Skip`**: Available on `Describe`, `Context`, and `It` to skip tests
+- **Conditional**: Use `-Skip:$condition` for dynamic skipping
+- **Runtime Skip**: Use `Set-ItResult -Skipped` during test execution (setup/teardown still run)
+
+```powershell
+It 'Should work on Windows' -Skip:(-not $IsWindows) { }
+Context 'Integration tests' -Skip { }
+```
+
+## Error Handling
+
+- **Continue on Failure**: Use `Should.ErrorAction = 'Continue'` to collect multiple failures
+- **Stop on Critical**: Use `-ErrorAction Stop` for pre-conditions
+- **Test Exceptions**: Use `{ Code } | Should -Throw` for exception testing
+
+## Best Practices
+
+- **Descriptive Names**: Use clear test descriptions that explain behavior
+- **AAA Pattern**: Arrange (setup), Act (execute), Assert (verify)
+- **Isolated Tests**: Each test should be independent
+- **Avoid Aliases**: Use full cmdlet names (`Where-Object` not `?`)
+- **Single Responsibility**: One assertion per test when possible
+- **Test File Organization**: Group related tests in Context blocks. Context blocks can be nested.
+
+## Example Test Pattern
+
+```powershell
+BeforeAll {
+ . $PSScriptRoot/Get-UserInfo.ps1
+}
+
+Describe 'Get-UserInfo' {
+ Context 'When user exists' {
+ BeforeAll {
+ Mock Get-ADUser { @{ Name = 'TestUser'; Enabled = $true } }
+ }
+
+ It 'Should return user object' {
+ $result = Get-UserInfo -Username 'TestUser'
+ $result | Should -Not -BeNullOrEmpty
+ $result.Name | Should -Be 'TestUser'
+ }
+
+ It 'Should call Get-ADUser once' {
+ Get-UserInfo -Username 'TestUser'
+ Should -Invoke Get-ADUser -Exactly 1
+ }
+ }
+
+ Context 'When user does not exist' {
+ BeforeAll {
+ Mock Get-ADUser { throw "User not found" }
+ }
+
+ It 'Should throw exception' {
+ { Get-UserInfo -Username 'NonExistent' } | Should -Throw "*not found*"
+ }
+ }
+}
+```
+
+## Configuration
+
+Configuration is defined **outside** test files when calling `Invoke-Pester` to control execution behavior.
+
+```powershell
+# Create configuration (Pester 5.2+)
+$config = New-PesterConfiguration
+$config.Run.Path = './Tests'
+$config.Output.Verbosity = 'Detailed'
+$config.TestResult.Enabled = $true
+$config.TestResult.OutputFormat = 'NUnitXml'
+$config.Should.ErrorAction = 'Continue'
+Invoke-Pester -Configuration $config
+```
+
+**Key Sections**: Run (Path, Exit), Filter (Tag, ExcludeTag), Output (Verbosity), TestResult (Enabled, OutputFormat), CodeCoverage (Enabled, Path), Should (ErrorAction), Debug
diff --git a/.github/instructions/powershell.instructions.md b/.github/instructions/powershell.instructions.md
new file mode 100644
index 0000000..83be180
--- /dev/null
+++ b/.github/instructions/powershell.instructions.md
@@ -0,0 +1,356 @@
+---
+applyTo: '**/*.ps1,**/*.psm1'
+description: 'PowerShell cmdlet and scripting best practices based on Microsoft guidelines'
+---
+
+# PowerShell Cmdlet Development Guidelines
+
+This guide provides PowerShell-specific instructions to help GitHub Copilot generate idiomatic,
+safe, and maintainable scripts. It aligns with Microsoft’s PowerShell cmdlet development guidelines.
+
+## Naming Conventions
+
+- **Verb-Noun Format:**
+ - Use approved PowerShell verbs (see `Get-Verb`)
+ - Use singular nouns
+ - PascalCase for both verb and noun
+ - Avoid special characters and spaces
+
+- **Parameter Names:**
+ - Use PascalCase
+ - Choose clear, descriptive names
+ - Use singular form unless always multiple
+ - Follow PowerShell standard names
+
+- **Variable Names:**
+ - Use PascalCase for public variables
+ - Use camelCase for private variables
+ - Avoid abbreviations
+ - Use meaningful names
+
+- **Alias Avoidance:**
+ - Use full cmdlet names
+ - Avoid aliases in scripts (e.g., use `Get-ChildItem` instead of `gci`)
+ - Document any custom aliases
+ - Use full parameter names
+
+### Example
+
+```powershell
+function Get-UserProfile {
+ [CmdletBinding()]
+ param(
+ [Parameter(Mandatory)]
+ [string]$Username,
+
+ [Parameter()]
+ [ValidateSet('Basic', 'Detailed')]
+ [string]$ProfileType = 'Basic'
+ )
+
+ process {
+ # Logic here
+ }
+}
+```
+
+## Parameter Design
+
+- **Standard Parameters:**
+ - Use common parameter names (`Path`, `Name`, `Force`)
+ - Follow built-in cmdlet conventions
+ - Use aliases for specialized terms
+ - Document parameter purpose
+
+- **Parameter Names:**
+ - Use singular form unless always multiple
+ - Choose clear, descriptive names
+ - Follow PowerShell conventions
+ - Use PascalCase formatting
+
+- **Type Selection:**
+ - Use common .NET types
+ - Implement proper validation
+ - Consider ValidateSet for limited options
+ - Enable tab completion where possible
+
+- **Switch Parameters:**
+ - Use [switch] for boolean flags
+ - Avoid $true/$false parameters
+ - Default to $false when omitted
+ - Use clear action names
+
+### Example
+
+```powershell
+function Set-ResourceConfiguration {
+ [CmdletBinding()]
+ param(
+ [Parameter(Mandatory)]
+ [string]$Name,
+
+ [Parameter()]
+ [ValidateSet('Dev', 'Test', 'Prod')]
+ [string]$Environment = 'Dev',
+
+ [Parameter()]
+ [switch]$Force,
+
+ [Parameter()]
+ [ValidateNotNullOrEmpty()]
+ [string[]]$Tags
+ )
+
+ process {
+ # Logic here
+ }
+}
+```
+
+## Pipeline and Output
+
+- **Pipeline Input:**
+ - Use `ValueFromPipeline` for direct object input
+ - Use `ValueFromPipelineByPropertyName` for property mapping
+ - Implement Begin/Process/End blocks for pipeline handling
+ - Document pipeline input requirements
+
+- **Output Objects:**
+ - Return rich objects, not formatted text
+ - Use PSCustomObject for structured data
+ - Avoid Write-Host for data output
+ - Enable downstream cmdlet processing
+
+- **Pipeline Streaming:**
+ - Output one object at a time
+ - Use process block for streaming
+ - Avoid collecting large arrays
+ - Enable immediate processing
+
+- **PassThru Pattern:**
+ - Default to no output for action cmdlets
+ - Implement `-PassThru` switch for object return
+ - Return modified/created object with `-PassThru`
+ - Use verbose/warning for status updates
+
+### Example
+
+```powershell
+function Update-ResourceStatus {
+ [CmdletBinding()]
+ param(
+ [Parameter(Mandatory, ValueFromPipeline, ValueFromPipelineByPropertyName)]
+ [string]$Name,
+
+ [Parameter(Mandatory)]
+ [ValidateSet('Active', 'Inactive', 'Maintenance')]
+ [string]$Status,
+
+ [Parameter()]
+ [switch]$PassThru
+ )
+
+ begin {
+ Write-Verbose 'Starting resource status update process'
+ $timestamp = Get-Date
+ }
+
+ process {
+ # Process each resource individually
+ Write-Verbose "Processing resource: $Name"
+
+ $resource = [PSCustomObject]@{
+ Name = $Name
+ Status = $Status
+ LastUpdated = $timestamp
+ UpdatedBy = $env:USERNAME
+ }
+
+ # Only output if PassThru is specified
+ if ($PassThru.IsPresent) {
+ Write-Output $resource
+ }
+ }
+
+ end {
+ Write-Verbose 'Resource status update process completed'
+ }
+}
+```
+
+## Error Handling and Safety
+
+- **ShouldProcess Implementation:**
+ - Use `[CmdletBinding(SupportsShouldProcess = $true)]`
+ - Set appropriate `ConfirmImpact` level
+ - Call `$PSCmdlet.ShouldProcess()` for system changes
+ - Use `ShouldContinue()` for additional confirmations
+
+- **Message Streams:**
+ - `Write-Verbose` for operational details with `-Verbose`
+ - `Write-Warning` for warning conditions
+ - `Write-Error` for non-terminating errors
+ - `throw` for terminating errors
+ - Avoid `Write-Host` except for user interface text
+
+- **Error Handling Pattern:**
+ - Use try/catch blocks for error management
+ - Set appropriate ErrorAction preferences
+ - Return meaningful error messages
+ - Use ErrorVariable when needed
+ - Include proper terminating vs non-terminating error handling
+ - In advanced functions with `[CmdletBinding()]`, prefer `$PSCmdlet.WriteError()` over `Write-Error`
+ - In advanced functions with `[CmdletBinding()]`, prefer `$PSCmdlet.ThrowTerminatingError()` over `throw`
+ - Construct proper ErrorRecord objects with category, target, and exception details
+
+- **Non-Interactive Design:**
+ - Accept input via parameters
+ - Avoid `Read-Host` in scripts
+ - Support automation scenarios
+ - Document all required inputs
+
+### Example
+
+```powershell
+function Remove-UserAccount {
+ [CmdletBinding(SupportsShouldProcess = $true, ConfirmImpact = 'High')]
+ param(
+ [Parameter(Mandatory, ValueFromPipeline)]
+ [ValidateNotNullOrEmpty()]
+ [string]$Username,
+
+ [Parameter()]
+ [switch]$Force
+ )
+
+ begin {
+ Write-Verbose 'Starting user account removal process'
+ $ErrorActionPreference = 'Stop'
+ }
+
+ process {
+ try {
+ # Validation
+ if (-not (Test-UserExists -Username $Username)) {
+ $errorRecord = [System.Management.Automation.ErrorRecord]::new(
+ [System.Exception]::new("User account '$Username' not found"),
+ 'UserNotFound',
+ [System.Management.Automation.ErrorCategory]::ObjectNotFound,
+ $Username
+ )
+ $PSCmdlet.WriteError($errorRecord)
+ return
+ }
+
+ # Confirmation
+ $shouldProcessMessage = "Remove user account '$Username'"
+ if ($Force -or $PSCmdlet.ShouldProcess($Username, $shouldProcessMessage)) {
+ Write-Verbose "Removing user account: $Username"
+
+ # Main operation
+ Remove-ADUser -Identity $Username -ErrorAction Stop
+ Write-Warning "User account '$Username' has been removed"
+ }
+ } catch [Microsoft.ActiveDirectory.Management.ADException] {
+ $errorRecord = [System.Management.Automation.ErrorRecord]::new(
+ $_.Exception,
+ 'ActiveDirectoryError',
+ [System.Management.Automation.ErrorCategory]::NotSpecified,
+ $Username
+ )
+ $PSCmdlet.ThrowTerminatingError($errorRecord)
+ } catch {
+ $errorRecord = [System.Management.Automation.ErrorRecord]::new(
+ $_.Exception,
+ 'UnexpectedError',
+ [System.Management.Automation.ErrorCategory]::NotSpecified,
+ $Username
+ )
+ $PSCmdlet.ThrowTerminatingError($errorRecord)
+ }
+ }
+
+ end {
+ Write-Verbose 'User account removal process completed'
+ }
+}
+```
+
+## Documentation and Style
+
+- **Comment-Based Help:** Include comment-based help for any public-facing function or cmdlet. Inside the function, add a `<# ... #>` help comment with at least:
+ - `.SYNOPSIS` Brief description
+ - `.DESCRIPTION` Detailed explanation
+ - `.EXAMPLE` sections with practical usage
+ - `.PARAMETER` descriptions
+ - `.OUTPUTS` Type of output returned
+ - `.NOTES` Additional information
+
+- **Consistent Formatting:**
+ - Follow consistent PowerShell style
+ - Use proper indentation (4 spaces recommended)
+ - Opening braces on same line as statement
+ - Closing braces on new line
+ - Use line breaks after pipeline operators
+ - PascalCase for function and parameter names
+ - Avoid unnecessary whitespace
+
+- **Pipeline Support:**
+ - Implement Begin/Process/End blocks for pipeline functions
+ - Use ValueFromPipeline where appropriate
+ - Support pipeline input by property name
+ - Return proper objects, not formatted text
+
+- **Avoid Aliases:** Use full cmdlet names and parameters
+ - Avoid aliases in scripts (e.g., use `Get-ChildItem` instead of `gci`); aliases are acceptable for interactive shell use
+ - Use `Where-Object` instead of `?` or `where`
+ - Use `ForEach-Object` instead of `%`
+ - Use `Get-ChildItem` instead of `ls` or `dir`
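+
+A minimal comment-based help skeleton covering the fields above; the function and parameter names are illustrative:
+
+```powershell
+function Get-ServerReport {
+    <#
+    .SYNOPSIS
+        Retrieves a status report for a server.
+    .DESCRIPTION
+        Queries the specified server and returns a summary object
+        describing its current status.
+    .PARAMETER ComputerName
+        The name of the server to query.
+    .EXAMPLE
+        Get-ServerReport -ComputerName 'Server01'
+    .OUTPUTS
+        PSCustomObject
+    .NOTES
+        Requires network access to the target server.
+    #>
+    [CmdletBinding()]
+    param(
+        [Parameter(Mandatory)]
+        [string]$ComputerName
+    )
+
+    process {
+        # Logic here
+    }
+}
+```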
+
+## Full Example: End-to-End Cmdlet Pattern
+
+```powershell
+function New-Resource {
+ [CmdletBinding(SupportsShouldProcess = $true, ConfirmImpact = 'Medium')]
+ param(
+ [Parameter(Mandatory = $true,
+ ValueFromPipeline = $true,
+ ValueFromPipelineByPropertyName = $true)]
+ [ValidateNotNullOrEmpty()]
+ [string]$Name,
+
+ [Parameter()]
+ [ValidateSet('Development', 'Production')]
+ [string]$Environment = 'Development'
+ )
+
+ begin {
+ Write-Verbose 'Starting resource creation process'
+ }
+
+ process {
+ try {
+ if ($PSCmdlet.ShouldProcess($Name, 'Create new resource')) {
+ # Resource creation logic here
+ Write-Output ([PSCustomObject]@{
+ Name = $Name
+ Environment = $Environment
+ Created = Get-Date
+ })
+ }
+ } catch {
+ $errorRecord = [System.Management.Automation.ErrorRecord]::new(
+ $_.Exception,
+ 'ResourceCreationFailed',
+ [System.Management.Automation.ErrorCategory]::NotSpecified,
+ $Name
+ )
+ $PSCmdlet.ThrowTerminatingError($errorRecord)
+ }
+ }
+
+ end {
+ Write-Verbose 'Completed resource creation process'
+ }
+}
+```
diff --git a/.github/instructions/prompt.instructions.md b/.github/instructions/prompt.instructions.md
new file mode 100644
index 0000000..7ca0432
--- /dev/null
+++ b/.github/instructions/prompt.instructions.md
@@ -0,0 +1,73 @@
+---
+description: 'Guidelines for creating high-quality prompt files for GitHub Copilot'
+applyTo: '**/*.prompt.md'
+---
+
+# Copilot Prompt Files Guidelines
+
+Instructions for creating effective and maintainable prompt files that guide GitHub Copilot in delivering consistent, high-quality outcomes across any repository.
+
+## Scope and Principles
+- Target audience: maintainers and contributors authoring reusable prompts for Copilot Chat.
+- Goals: predictable behaviour, clear expectations, minimal permissions, and portability across repositories.
+- Primary references: VS Code documentation on prompt files and organization-specific conventions.
+
+## Frontmatter Requirements
+- Include `description` (single sentence, actionable outcome), `mode` (explicitly choose `ask`, `edit`, or `agent`), and `tools` (minimal set of tool bundles required to fulfill the prompt).
+- Declare `model` when the prompt depends on a specific capability tier; otherwise inherit the active model.
+- Preserve any additional metadata (`language`, `tags`, `visibility`, etc.) required by your organization.
+- Use consistent quoting (single quotes recommended) and keep one field per line for readability and version control clarity.
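+
+A minimal frontmatter block illustrating these fields; the values and tool names are examples, not requirements:
+
+```markdown
+---
+description: 'Generate a README for the current workspace'
+mode: 'agent'
+tools: ['codebase', 'editFiles']
+---
+```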
+
+## File Naming and Placement
+- Use kebab-case filenames ending with `.prompt.md` and store them under `.github/prompts/` unless your workspace standard specifies another directory.
+- Provide a short filename that communicates the action (for example, `generate-readme.prompt.md` rather than `prompt1.prompt.md`).
+
+## Body Structure
+- Start with an `#` level heading that matches the prompt intent so it surfaces well in Quick Pick search.
+- Organize content with predictable sections. Recommended baseline: `Mission` or `Primary Directive`, `Scope & Preconditions`, `Inputs`, `Workflow` (step-by-step), `Output Expectations`, and `Quality Assurance`.
+- Adjust section names to fit the domain, but retain the logical flow: why → context → inputs → actions → outputs → validation.
+- Reference related prompts or instruction files using relative links to aid discoverability.
+
+## Input and Context Handling
+- Use `${input:variableName[:placeholder]}` for required values and explain when the user must supply them. Provide defaults or alternatives where possible.
+- Call out contextual variables such as `${selection}`, `${file}`, `${workspaceFolder}` only when they are essential, and describe how Copilot should interpret them.
+- Document how to proceed when mandatory context is missing (for example, “Request the file path and stop if it remains undefined”).
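+
+For example, a prompt body might combine input and context variables like this; the variable name is illustrative:
+
+```markdown
+Generate an ADR titled "${input:title:Short decision title}" for the
+code in ${file}. If no title is supplied, request one and stop until
+it is provided.
+```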
+
+## Tool and Permission Guidance
+- Limit `tools` to the smallest set that enables the task. List them in the preferred execution order when the sequence matters.
+- If the prompt inherits tools from a chat mode, mention that relationship and state any critical tool behaviours or side effects.
+- Warn about destructive operations (file creation, edits, terminal commands) and include guard rails or confirmation steps in the workflow.
+
+## Instruction Tone and Style
+- Write in direct, imperative sentences targeted at Copilot (for example, “Analyze”, “Generate”, “Summarize”).
+- Keep sentences short and unambiguous, following Google's developer documentation guidance on writing for translation, to support localization.
+- Avoid idioms, humor, or culturally specific references; favor neutral, inclusive language.
+
+## Output Definition
+- Specify the format, structure, and location of expected results (for example, “Create `docs/adr/adr-XXXX.md` using the template below”).
+- Include success criteria and failure triggers so Copilot knows when to halt or retry.
+- Provide validation steps—manual checks, automated commands, or acceptance criteria lists—that reviewers can execute after running the prompt.
+
+## Examples and Reusable Assets
+- Embed Good/Bad examples or scaffolds (Markdown templates, JSON stubs) that the prompt should produce or follow.
+- Maintain reference tables (capabilities, status codes, role descriptions) inline to keep the prompt self-contained. Update these tables when upstream resources change.
+- Link to authoritative documentation instead of duplicating lengthy guidance.
+
+## Quality Assurance Checklist
+- [ ] Frontmatter fields are complete, accurate, and least-privilege.
+- [ ] Inputs include placeholders, default behaviours, and fallbacks.
+- [ ] Workflow covers preparation, execution, and post-processing without gaps.
+- [ ] Output expectations include formatting and storage details.
+- [ ] Validation steps are actionable (commands, diff checks, review prompts).
+- [ ] Security, compliance, and privacy policies referenced by the prompt are current.
+- [ ] Prompt executes successfully in VS Code (`Chat: Run Prompt`) using representative scenarios.
+
+## Maintenance Guidance
+- Version-control prompts alongside the code they affect; update them when dependencies, tooling, or review processes change.
+- Review prompts periodically to ensure tool lists, model requirements, and linked documents remain valid.
+- Coordinate with other repositories: when a prompt proves broadly useful, extract common guidance into instruction files or shared prompt packs.
+
+## Additional Resources
+- [Prompt Files Documentation](https://code.visualstudio.com/docs/copilot/customization/prompt-files#_prompt-file-format)
+- [Awesome Copilot Prompt Files](https://github.com/github/awesome-copilot/tree/main/prompts)
+- [Tool Configuration](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode#_agent-mode-tools)