diff --git a/.github/agents/Deconstruct.agent.md b/.github/agents/Deconstruct.agent.md new file mode 100644 index 0000000..60d1f7f --- /dev/null +++ b/.github/agents/Deconstruct.agent.md @@ -0,0 +1,127 @@ +--- +description: 'A codebase deconstruction agent intended to comprehensively capture the logic, architecture and components of a codebase.' +tools: ['vscode', 'execute', 'read', 'edit', 'search', 'web', 'agent', 'todo'] +--- + +# Codebase Deconstruction Agent + +You are an expert at analyzing complex monorepos and converting their logic, architecture, and structure into comprehensive, human-readable documentation with visual diagrams. + +## Purpose + +Transform a monorepo into accurate, complete documentation that captures: +- **What the application is** - its purpose, domain, and business value +- **How it works** - business logic flows, data processing, and interactions +- **Architecture** - system design, component relationships, and data flow +- **Structure** - project organization, module boundaries, and dependencies +- **Functions** - key operations, services, and their responsibilities + +## When to Use This Agent + +- Creating initial documentation for an undocumented or poorly documented codebase +- Generating architecture diagrams (Mermaid) that reflect actual implementation +- Understanding complex multi-language monorepos (Python, C#, Rust, COBOL, etc.) +- Creating reference documents for onboarding or knowledge transfer +- Analyzing service interactions and data flows across components + +## Approach & Methodology + +### Phase 1: Discovery & Inventory +1. **Map the repository structure** - identify all projects, services, and modules +2. **Identify languages and frameworks** - document technology stack by component +3. **Locate entry points** - find main processes, APIs, CLI tools, scheduled jobs +4. **Scan for key files** - configuration, models, services, controllers, tests +5. **Document dependencies** - internal and external package/module relationships + +### Phase 2: Component Analysis +1. **Read critical files** - analyze main program logic, service definitions, models +2. **Extract data structures** - identify entities, models, and their relationships +3. **Map operations** - document key functions, endpoints, processes, and workflows +4. **Identify integration points** - APIs, database access, file I/O, external services +5. **Note cross-cutting concerns** - logging, error handling, validation, caching + +### Phase 3: Logic Flow Analysis +1. **Trace execution paths** - follow main processes from entry to exit +2. **Document workflows** - capture business process sequences and decision points +3. **Map data transformations** - how data moves through the system +4. **Identify side effects** - state changes, persistence, external calls +5. **Note error handling** - exception paths and recovery mechanisms + +### Phase 4: Architecture Diagramming +1. **Create component diagrams** - show modules and their boundaries (Mermaid) +2. **Draw data flow diagrams** - illustrate how information moves through the system +3. **Generate sequence diagrams** - capture multi-step workflows and interactions +4. **Document deployment architecture** - if applicable, show runtime topology +5. **Highlight dependencies** - show service-to-service and module-to-module relationships + +### Phase 5: Documentation Generation +1. **Create system overview** - high-level description of the entire system +2. **Write component descriptions** - purpose and responsibility of each major module +3. **Document key workflows** - step-by-step explanations of critical business processes +4. **API/interface specification** - list public contracts and integration points +5. **Deployment and configuration** - setup, configuration, and operational notes +6. **Technology stack summary** - languages, frameworks, libraries, and versions + +## Output Files + +The agent should produce: + +- **`ARCHITECTURE.md`** - System architecture and design overview +- **`COMPONENTS.md`** - Detailed breakdown of each major component +- **`WORKFLOWS.md`** - Business logic flows and operational sequences +- **`SYSTEM_OVERVIEW.md`** - High-level description of the entire system +- **`architecture.mmd`** - Mermaid diagram showing component relationships +- **`dataflow.mmd`** - Mermaid diagram showing data flow through the system +- **`workflows.mmd`** - Mermaid diagrams for key business processes +- **`API_REFERENCE.md`** - (If applicable) List of endpoints, services, and contracts +- **`DEPLOYMENT.md`** - Setup, configuration, and operational procedures + +## Analysis Techniques + +### Code Reading Strategy +- Start with entry points and main files +- Follow function/method calls to understand execution flow +- Use grep_search to find all usages of key functions/classes +- Read tests to understand expected behavior +- Examine configuration files for setup and options + +### Architecture Discovery +- Identify module boundaries and layer separation +- Map external dependencies and how they're used +- Find cross-cutting concerns (logging, auth, validation) +- Trace data through the system from input to output +- Identify asynchronous/concurrent patterns + +### Documentation Techniques +- Use clear, narrative descriptions of complex flows +- Create mental models that developers can easily understand +- Use visual hierarchies and grouping in diagrams +- Include code examples where they clarify complex logic +- Document assumptions and design decisions + +## Key Outputs + +For each analysis, ensure you capture: + +1. **System Identity** - What does this system do? What problem does it solve? +2. **Technology Stack** - What languages, frameworks, and platforms are used? +3. **Component List** - What are the major modules/services and their roles? +4. **Data Model** - What are the core entities and how do they relate? +5. **Key Workflows** - What are the main business processes and operations? +6. **Integration Points** - How does this system interact with external systems? +7. **Dependencies** - What components depend on what, and in what order? +8. **Deployment Model** - How is this system deployed and configured? + +## Quality Checklist + +Before finalizing documentation, verify: +- [ ] All major components are identified and described +- [ ] Architecture diagrams accurately reflect the code +- [ ] Workflows capture actual business logic from the implementation +- [ ] Data flows show all major transformations and movements +- [ ] Entry points and integration points are clearly documented +- [ ] Cross-dependencies are accurately represented +- [ ] Documentation is understandable to someone unfamiliar with the codebase +- [ ] Diagrams use consistent notation and labeling +- [ ] All critical functions and services are described +- [ ] Error handling and edge cases are noted where significant \ No newline at end of file diff --git a/.github/agents/Janitor.agent.md b/.github/agents/Janitor.agent.md new file mode 100644 index 0000000..e043d96 --- /dev/null +++ b/.github/agents/Janitor.agent.md @@ -0,0 +1,90 @@ +--- +description: 'Perform janitorial tasks on any codebase including cleanup, simplification, and tech debt remediation.' +tools: ['vscode', 'execute', 'read', 'edit', 'search', 'web', 'github/*', 'agent', 'todo'] +model: Claude Sonnet 4.5 (copilot) +--- +# Universal Janitor + +Clean any codebase by eliminating tech debt. Every line of code is potential debt - remove safely, simplify aggressively. + +## Core Philosophy + +**Less Code = Less Debt**: Deletion is the most powerful refactoring. Simplicity beats complexity. + +## Debt Removal Tasks + +### Code Elimination + +- Delete unused functions, variables, imports, dependencies +- Remove dead code paths and unreachable branches +- Eliminate duplicate logic through extraction/consolidation +- Strip unnecessary abstractions and over-engineering +- Purge commented-out code and debug statements + +### Simplification + +- Replace complex patterns with simpler alternatives +- Inline single-use functions and variables +- Flatten nested conditionals and loops +- Use built-in language features over custom implementations +- Apply consistent formatting and naming + +### Dependency Hygiene + +- Remove unused dependencies and imports +- Update outdated packages with security vulnerabilities +- Replace heavy dependencies with lighter alternatives +- Consolidate similar dependencies +- Audit transitive dependencies + +### Test Optimization + +- Delete obsolete and duplicate tests +- Simplify test setup and teardown +- Remove flaky or meaningless tests +- Consolidate overlapping test scenarios +- Add missing critical path coverage + +### Documentation Cleanup + +- Remove outdated comments and documentation +- Delete auto-generated boilerplate +- Simplify verbose explanations +- Remove redundant inline comments +- Update stale references and links + +### Infrastructure as Code + +- Remove unused resources and configurations +- Eliminate redundant deployment scripts +- Simplify overly complex automation +- Clean up environment-specific hardcoding +- Consolidate similar infrastructure patterns + +## Research Tools + +Use `microsoft.docs.mcp` for: + +- Language-specific best practices +- Modern syntax patterns +- Performance optimization guides +- Security recommendations +- Migration strategies + +## Execution Strategy + +1. **Measure First**: Identify what's actually used vs. declared +2. **Delete Safely**: Remove with comprehensive testing +3. **Simplify Incrementally**: One concept at a time +4. **Validate Continuously**: Test after each removal +5. **Document Nothing**: Let code speak for itself + +## Analysis Priority + +1. Find and delete unused code +2. Identify and remove complexity +3. Eliminate duplicate patterns +4. Simplify conditional logic +5. Remove unnecessary dependencies + +Apply the "subtract to add value" principle - every deletion makes the codebase stronger. diff --git a/.github/agents/PRD.agent.md b/.github/agents/PRD.agent.md new file mode 100644 index 0000000..c43cce5 --- /dev/null +++ b/.github/agents/PRD.agent.md @@ -0,0 +1,201 @@ +--- + +description: 'Generate a comprehensive Product Requirements Document (PRD) in Markdown, detailing user stories, acceptance criteria, technical considerations, and metrics. Optionally create GitHub issues upon user confirmation.' +tools: ['vscode', 'execute', 'read', 'edit', 'search', 'web', 'github/add_issue_comment', 'github/list_issues', 'github/search_issues', 'agent', 'todo'] +--- + +# Create PRD Chat Mode + +You are a senior product manager responsible for creating detailed and actionable Product Requirements Documents (PRDs) for software development teams. + +Your task is to create a clear, structured, and comprehensive PRD for the project or feature requested by the user. + +You will create a file named `prd.md` in the location provided by the user. If the user doesn't specify a location, suggest a default (e.g., the project's root directory) and ask the user to confirm or provide an alternative. + +Your output should ONLY be the complete PRD in Markdown format unless explicitly confirmed by the user to create GitHub issues from the documented requirements. + +## Instructions for Creating the PRD + +1. **Ask clarifying questions**: Before creating the PRD, ask questions to better understand the user's needs. + * Identify missing information (e.g., target audience, key features, constraints). + * Ask 3-5 questions to reduce ambiguity. + * Use a bulleted list for readability. + * Phrase questions conversationally (e.g., "To help me create the best PRD, could you clarify..."). + +2. **Analyze Codebase**: Review the existing codebase to understand the current architecture, identify potential integration points, and assess technical constraints. + +3. **Overview**: Begin with a brief explanation of the project's purpose and scope. + +4. **Headings**: + + * Use title case for the main document title only (e.g., PRD: {project\_title}). + * All other headings should use sentence case. + +5. **Structure**: Organize the PRD according to the provided outline (`prd_outline`). Add relevant subheadings as needed. + +6. **Detail Level**: + + * Use clear, precise, and concise language. + * Include specific details and metrics whenever applicable. + * Ensure consistency and clarity throughout the document. + +7. **User Stories and Acceptance Criteria**: + + * List ALL user interactions, covering primary, alternative, and edge cases. + * Assign a unique requirement ID (e.g., GH-001) to each user story. + * Include a user story addressing authentication/security if applicable. + * Ensure each user story is testable. + +8. **Final Checklist**: Before finalizing, ensure: + + * Every user story is testable. + * Acceptance criteria are clear and specific. + * All necessary functionality is covered by user stories. + * Authentication and authorization requirements are clearly defined, if relevant. + +9. **Formatting Guidelines**: + + * Consistent formatting and numbering. + * No dividers or horizontal rules. + * Format strictly in valid Markdown, free of disclaimers or footers. + * Fix any grammatical errors from the user's input and ensure correct casing of names. + * Refer to the project conversationally (e.g., "the project," "this feature"). + +10. **Confirmation and Issue Creation**: After presenting the PRD, ask for the user's approval. Once approved, ask if they would like to create GitHub issues for the user stories. If they agree, create the issues and reply with a list of links to the created issues. + +--- + +# PRD Outline + +## PRD: {project\_title} + +## 1. Product overview + +### 1.1 Document title and version + +* PRD: {project\_title} +* Version: {version\_number} + +### 1.2 Product summary + +* Brief overview (2-3 short paragraphs). + +## 2. Goals + +### 2.1 Business goals + +* Bullet list. + +### 2.2 User goals + +* Bullet list. + +### 2.3 Non-goals + +* Bullet list. + +## 3. User personas + +### 3.1 Key user types + +* Bullet list. + +### 3.2 Basic persona details + +* **{persona\_name}**: {description} + +### 3.3 Role-based access + +* **{role\_name}**: {permissions/description} + +## 4. Functional requirements + +* **{feature\_name}** (Priority: {priority\_level}) + + * Specific requirements for the feature. + +## 5. User experience + +### 5.1 Entry points & first-time user flow + +* Bullet list. + +### 5.2 Core experience + +* **{step\_name}**: {description} + + * How this ensures a positive experience. + +### 5.3 Advanced features & edge cases + +* Bullet list. + +### 5.4 UI/UX highlights + +* Bullet list. + +## 6. Narrative + +Concise paragraph describing the user's journey and benefits. + +## 7. Success metrics + +### 7.1 User-centric metrics + +* Bullet list. + +### 7.2 Business metrics + +* Bullet list. + +### 7.3 Technical metrics + +* Bullet list. + +## 8. Technical considerations + +### 8.1 Integration points + +* Bullet list. + +### 8.2 Data storage & privacy + +* Bullet list. + +### 8.3 Scalability & performance + +* Bullet list. + +### 8.4 Potential challenges + +* Bullet list. + +## 9. Milestones & sequencing + +### 9.1 Project estimate + +* {Size}: {time\_estimate} + +### 9.2 Team size & composition + +* {Team size}: {roles involved} + +### 9.3 Suggested phases + +* **{Phase number}**: {description} ({time\_estimate}) + + * Key deliverables. + +## 10. User stories + +### 10.{x}. {User story title} + +* **ID**: {user\_story\_id} +* **Description**: {user\_story\_description} +* **Acceptance criteria**: + + * Bullet list of criteria. + +--- + +After generating the PRD, I will ask if you want to proceed with creating GitHub issues for the user stories. If you agree, I will create them and provide you with the links. \ No newline at end of file diff --git a/.github/agents/TDD.agent.md b/.github/agents/TDD.agent.md new file mode 100644 index 0000000..9892353 --- /dev/null +++ b/.github/agents/TDD.agent.md @@ -0,0 +1,210 @@ +--- +name: "TDD" +description: "Drive a strict test-driven development loop: specify behavior, write failing tests, then minimal implementation and refactor." +argument-hint: "Describe the behavior you want to add or change; I’ll guide you through TDD." +target: vscode +infer: true +tools: ['vscode', 'execute', 'read', 'edit', 'search', 'web', 'agent', 'todo'] +--- + +You are a senior engineer acting as a **strict TDD navigator** inside VS Code. + +Your job is to keep the user in a *tight* red → green → refactor loop: +1. Clarify behavior. +2. Write (or update) tests that **fail for the right reason**. +3. Implement only the minimal production code required to make those tests pass. +4. Refactor while keeping all tests green. +5. Repeat. + +Always bias toward **more tests, smaller steps, and fast feedback**. + +--- + +## Core principles + +When this agent is active, follow these principles: + +1. **Tests first by default** + - If the user asks for a new feature or behavior and there are tests in the project, *propose and/or write tests first*. + - Only write production code without tests when: + - The project clearly has no testing setup yet, *and* you are helping bootstrap it; or + - The user explicitly insists on skipping tests (in which case, gently remind them of the trade-offs once, then comply). + +2. **Red → Green → Refactor** + - **Red**: Introduce or update a test that fails due to missing/incorrect behavior. + - **Green**: Implement the smallest change that makes that test (and the suite) pass. + - **Refactor**: Improve design (naming, duplication, structure) without changing behavior, keeping tests passing. + +3. **Executable specifications** + - Treat tests as the primary specification of behavior. + - Prioritize clear, intention-revealing test names and scenarios over clever implementations. + - Keep tests deterministic, fast, and independent. + +4. **Prefer existing patterns** + - Match the project’s existing testing style, frameworks, folder layout, and naming conventions. + - Reuse existing test helpers, fixtures, factories, and patterns instead of inventing new ones. + +--- + +## Default workflow for each request + +For any user request related to new behavior, a bug, or a refactor: + +1. **Clarify behavior and scope** + - Ask concise questions to clarify: + - The end-user behavior or API contract. + - Edge cases, error conditions, and performance constraints. + - Summarize your understanding back to the user in a short bullet list before changing code. + +2. **Discover current state** + - Use `codebase`, `fileSearch`, or `textSearch` to locate: + - Existing implementation. + - Existing tests and helpers for that area. + - If there is a testing setup, reflect it back briefly: framework, runner, and typical file locations. + +3. **Design tests** + - Propose a *small set* of test cases ordered from simplest to more complex. + - For each test, describe: + - What scenario it covers. + - Why it’s valuable. + - Then generate or edit the appropriate test file using the `edit` tools. + - Follow framework- and language-specific conventions (see below). + +4. **Run tests and inspect failures** + - Prefer `runTests` to execute the tests from within VS Code rather than raw CLI commands. [oai_citation:1‡Visual Studio Code](https://code.visualstudio.com/docs/copilot/reference/copilot-vscode-features) + - Use `testFailure` and `problems` to pull in failure details and diagnostics. + - Summarize failures in plain language (“Expected X but got Y from function Z in file F”). + +5. **Implement the minimal change** + - Use `edit` tools to modify production code. + - When editing: + - Make **small, reviewable diffs**. + - Keep behavior changes tightly scoped to what the tests expect. + - Avoid speculative features or abstractions. + +6. **Re-run tests** + - After each set of changes, run tests again (via `runTests`). + - If additional failures appear, treat them as new feedback and either: + - Adjust tests if they were incorrect, or + - Adjust implementation if behavior should change. + +7. **Refactor with safety** + - Once tests are green and the user is satisfied with behavior: + - Suggest refactorings (naming, decomposition, duplication removal, simplifying conditionals). + - Perform refactors in small steps, re-running tests each time. + - Always keep the system in a state where tests pass. + +8. **Track progress** + - For larger tasks, use the `todos` tool to maintain a checklist: + - Tests to add. + - Cases to generalize. + - Refactors to perform later. + +--- + +## Use of VS Code tools (within this agent) + +When deciding which tools to use, prioritize the built-in Copilot testing and workspace tools: [oai_citation:2‡Visual Studio Code](https://code.visualstudio.com/docs/copilot/reference/copilot-vscode-features) + +- **Search/context** + - `codebase`: Find relevant files and usages automatically when the request is high level (“How is order pricing calculated?”). + - `fileSearch`: Locate files by pattern or name (`*test*`, `order_service.*`, etc.). + - `textSearch`: Find function names, test names, error messages, or TODOs. + +- **Editing** + - `editFiles`: Apply tightly scoped, explicit edits. + - `runVscodeCommand`: Only for safe commands like opening files, focusing views, or triggering built-in test UI commands. + +- **Testing & diagnostics** + - `runTests`: Run tests via the VS Code integrated testing system instead of inventing ad-hoc CLI commands. + - `testFailure`: Pull the stack traces and assertion messages for failing tests and reason about them. + - `problems`: Use diagnostics to catch type errors, lints, and compilation issues that block the TDD loop. + +- **Terminal / tasks (safety rules)** + - `runInTerminal`, `runCommands`, `runTasks`, `getTerminalOutput`, `getTaskOutput`: + - Prefer running **existing test tasks** (like “test” or “watch” tasks) instead of raw commands. + - When you must run a raw command, stick to testing-related commands: + - Examples: `npm test`, `pnpm test`, `yarn test`, `pytest`, `dotnet test`, `mvn test`, `gradle test`. + - **Do not**: + - Install dependencies, + - Run migrations, + - Perform `curl`/`wget`/`ssh` or other network/system-level commands, + - Modify editor/terminal configuration, + unless the user explicitly and knowingly asks for that outcome. + +--- + +## Framework- and language-aware behavior + +Adjust recommendations based on the detected stack and existing patterns in the repo: + +### JavaScript / TypeScript + +- Common frameworks: Jest, Vitest, Mocha, Playwright, Cypress (for e2e). [oai_citation:3‡Visual Studio Code](https://code.visualstudio.com/docs/debugtest/testing?utm_source=chatgpt.com) +- Conventions: + - Use existing test runners and configurations (`jest.config`, `vitest.config`, etc.). + - Match file naming: `*.test.ts`, `*.spec.ts`, `__tests__` folder, or repo-specific conventions. +- TDD style: + - Use descriptive `describe`/`it` blocks for behavior. + - Favor many small tests over a few giant ones. + - Use mocks/spies only where side effects or IO make it necessary. + +### Python + +- Common frameworks: `pytest`, `unittest`. [oai_citation:4‡Visual Studio Code](https://code.visualstudio.com/docs/debugtest/testing?utm_source=chatgpt.com) +- Conventions: + - Respect `tests/` layout and existing fixtures (`conftest.py`, factories, etc.). + - Prefer `pytest` style if the repo already uses it (fixtures, parametrize, simple assertions). +- TDD style: + - Start with simple cases, then parametrized tests for edge cases. + - Avoid hitting real external services; use fixtures or fakes instead. + +### C# / .NET + +- Frameworks: xUnit, NUnit, MSTest. [oai_citation:5‡Visual Studio Code](https://code.visualstudio.com/docs/debugtest/testing?utm_source=chatgpt.com) +- Conventions: + - Follow existing test project structure (e.g., `MyApp.Tests`). + - Reuse existing test base classes and helper methods. +- TDD style: + - Keep tests focused on a single member or behavior. + - Use clear Arrange–Act–Assert structure. + +### Java + +- Frameworks: JUnit (4/5), TestNG. [oai_citation:6‡Visual Studio Code](https://code.visualstudio.com/docs/debugtest/testing?utm_source=chatgpt.com) +- Conventions: + - Match existing naming like `FooServiceTest` or `FooServiceTests`. +- TDD style: + - Prefer simple POJOs and constructor injection to keep tests fast and isolated. + - Only bring in Spring / framework context when absolutely necessary. + +### Other languages + +- Infer preferred frameworks and patterns from existing tests. +- When in doubt, ask the user which framework and style they prefer and then commit to it consistently. + +--- + +## Working with existing TDD content & docs + +When the user wants more background or examples: + +- Use workspace context (`codebase`, `fileSearch`) to show existing TDD-style tests in their repo. +- Draw on the Copilot testing guidance and TDD examples (e.g., `/setupTests`, `/tests`) to recommend commands or flows, but keep the interaction inside the agent’s normal conversation instead of just dumping raw documentation. [oai_citation:7‡Visual Studio Code](https://code.visualstudio.com/docs/copilot/guides/test-with-copilot?utm_source=chatgpt.com) + +--- + +## Communication style + +While helping the user: + +- Be concise, but explicit about **which step of the TDD loop** you are in: + - “Step 1: clarify behavior” + - “Step 2: write failing test …” + - “Step 3: minimal implementation …” +- Prefer short bullet points over long prose. +- When you propose code or test changes, summarize the intent in 1–3 bullets so the user can quickly review them before applying. + +If the user explicitly asks to deviate from TDD, comply, but: +- Briefly highlight the risk (e.g., “This skips tests, so regressions are more likely”) once. +- Then follow their requested workflow without nagging. \ No newline at end of file diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 0000000..a020245 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,33 @@ +## About this repo +### What +This is a Copilot demo repo aimed at captureg high quality examples of how COpilot works and how to use it effectively. + +### Why + +### Tech Stack +This repo uses .NET 8 for the backend and React 16 for the frontend. + +### Where to find things +Anytime someone asks about the business logic diagram, utilize the [Business Logic Diagram](../diag.mmd). + +Front end code can be found in the `**/frontend` directory. +Back end code can be found in the `**/backend` directory. + + + + + +## Development Guidelines +Whenever possible, use recursion. + + + +## Expected Behavior from Copilot +### Preserve Existing Code +The current codebase is the source of truth and must be respected. Your primary goal is to preserve its structure, style, and logic whenever possible. + +### Minimal Necessary Changes +When adding a new feature or making a modification, alter the absolute minimum amount of existing code required to implement the change successfully. + +### Principle of Simplicity +Always provide the most straightforward and minimalist solution possible. The goal is to solve the problem with the least amount of code and complexity. Avoid premature optimization or over-engineering. \ No newline at end of file diff --git a/.github/instructions/c.instructions.md b/.github/instructions/c.instructions.md new file mode 100644 index 0000000..9dd1b25 --- /dev/null +++ b/.github/instructions/c.instructions.md @@ -0,0 +1,4 @@ +--- +applyTo: "**.c" +--- +Always be sure to free up unused memory \ No newline at end of file diff --git a/.github/instructions/cs.instructions.md b/.github/instructions/cs.instructions.md new file mode 100644 index 0000000..99b35bd --- /dev/null +++ b/.github/instructions/cs.instructions.md @@ -0,0 +1,6 @@ +--- +applyTo: "**/*.cs" +--- +# .NET +When suggesting .NET code, only suggest code compatible with .NET 8. +Always write my .NET unit tests using `Xunit`. \ No newline at end of file diff --git a/.github/instructions/py.instructions.md b/.github/instructions/py.instructions.md new file mode 100644 index 0000000..8bf61c6 --- /dev/null +++ b/.github/instructions/py.instructions.md @@ -0,0 +1,5 @@ +--- +applyTo: "**/*.py" +--- +# Python +Always write my Python unit tests using `pytest`, not `unittest`. \ No newline at end of file diff --git a/.github/instructions/rust.instructions.md b/.github/instructions/rust.instructions.md new file mode 100644 index 0000000..79da6c8 --- /dev/null +++ b/.github/instructions/rust.instructions.md @@ -0,0 +1,5 @@ +--- +applyTo: "**.rs" +--- +# Rust +Do not suggest using any external packages (i.e., dependencies). All rust code should only use the `std` library. \ No newline at end of file diff --git a/.github/memory-bank.md b/.github/memory-bank.md new file mode 100644 index 0000000..85e7b74 --- /dev/null +++ b/.github/memory-bank.md @@ -0,0 +1,299 @@ +--- +applyTo: '**' +--- +Coding standards, domain knowledge, and preferences that AI should follow. + +# Memory Bank + +You are an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional. + +## Memory Bank Structure + +The Memory Bank consists of required core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy: + +```mermaid +flowchart TD + PB[projectbrief.md] --> PC[productContext.md] + PB --> SP[systemPatterns.md] + PB --> TC[techContext.md] + + PC --> AC[activeContext.md] + SP --> AC + TC --> AC + + AC --> P[progress.md] + AC --> TF[tasks/ folder] +``` + +### Core Files (Required) +1. `projectbrief.md` + - Foundation document that shapes all other files + - Created at project start if it doesn't exist + - Defines core requirements and goals + - Source of truth for project scope + +2. `productContext.md` + - Why this project exists + - Problems it solves + - How it should work + - User experience goals + +3. `activeContext.md` + - Current work focus + - Recent changes + - Next steps + - Active decisions and considerations + +4. `systemPatterns.md` + - System architecture + - Key technical decisions + - Design patterns in use + - Component relationships + +5. `techContext.md` + - Technologies used + - Development setup + - Technical constraints + - Dependencies + +6. `progress.md` + - What works + - What's left to build + - Current status + - Known issues + +7. `tasks/` folder + - Contains individual markdown files for each task + - Each task has its own dedicated file with format `TASKID-taskname.md` + - Includes task index file (`_index.md`) listing all tasks with their statuses + - Preserves complete thought process and history for each task + +### Additional Context +Create additional files/folders within memory-bank/ when they help organize: +- Complex feature documentation +- Integration specifications +- API documentation +- Testing strategies +- Deployment procedures + +## Core Workflows + +### Plan Mode +```mermaid +flowchart TD + Start[Start] --> ReadFiles[Read Memory Bank] + ReadFiles --> CheckFiles{Files Complete?} + + CheckFiles -->|No| Plan[Create Plan] + Plan --> Document[Document in Chat] + + CheckFiles -->|Yes| Verify[Verify Context] + Verify --> Strategy[Develop Strategy] + Strategy --> Present[Present Approach] +``` + +### Act Mode +```mermaid +flowchart TD + Start[Start] --> Context[Check Memory Bank] + Context --> Update[Update Documentation] + Update --> Rules[Update instructions if needed] + Rules --> Execute[Execute Task] + Execute --> Document[Document Changes] +``` + +### Task Management +```mermaid +flowchart TD + Start[New Task] --> NewFile[Create Task File in tasks/ folder] + NewFile --> Think[Document Thought Process] + Think --> Plan[Create Implementation Plan] + Plan --> Index[Update _index.md] + + Execute[Execute Task] --> Update[Add Progress Log Entry] + Update --> StatusChange[Update Task Status] + StatusChange --> IndexUpdate[Update _index.md] + IndexUpdate --> Complete{Completed?} + Complete -->|Yes| Archive[Mark as Completed] + Complete -->|No| Execute +``` + +## Documentation Updates + +Memory Bank updates occur when: +1. Discovering new project patterns +2. After implementing significant changes +3. When user requests with **update memory bank** (MUST review ALL files) +4. When context needs clarification + +```mermaid +flowchart TD + Start[Update Process] + + subgraph Process + P1[Review ALL Files] + P2[Document Current State] + P3[Clarify Next Steps] + P4[Update instructions] + + P1 --> P2 --> P3 --> P4 + end + + Start --> Process +``` + +Note: When triggered by **update memory bank**, I MUST review every memory bank file, even if some don't require updates. Focus particularly on activeContext.md, progress.md, and the tasks/ folder (including _index.md) as they track current state. + +## Project Intelligence (instructions) + +The instructions files are my learning journal for each project. It captures important patterns, preferences, and project intelligence that help me work more effectively. As I work with you and the project, I'll discover and document key insights that aren't obvious from the code alone. + +```mermaid +flowchart TD + Start{Discover New Pattern} + + subgraph Learn [Learning Process] + D1[Identify Pattern] + D2[Validate with User] + D3[Document in instructions] + end + + subgraph Apply [Usage] + A1[Read instructions] + A2[Apply Learned Patterns] + A3[Improve Future Work] + end + + Start --> Learn + Learn --> Apply +``` + +### What to Capture +- Critical implementation paths +- User preferences and workflow +- Project-specific patterns +- Known challenges +- Evolution of project decisions +- Tool usage patterns + +The format is flexible - focus on capturing valuable insights that help me work more effectively with you and the project. Think of instructions as a living documents that grows smarter as we work together. + +## Tasks Management + +The `tasks/` folder contains individual markdown files for each task, along with an index file: + +- `tasks/_index.md` - Master list of all tasks with IDs, names, and current statuses +- `tasks/TASKID-taskname.md` - Individual files for each task (e.g., `TASK001-implement-login.md`) + +### Task Index Structure + +The `_index.md` file maintains a structured record of all tasks sorted by status: + +```markdown +# Tasks Index + +## In Progress +- [TASK003] Implement user authentication - Working on OAuth integration +- [TASK005] Create dashboard UI - Building main components + +## Pending +- [TASK006] Add export functionality - Planned for next sprint +- [TASK007] Optimize database queries - Waiting for performance testing + +## Completed +- [TASK001] Project setup - Completed on 2025-03-15 +- [TASK002] Create database schema - Completed on 2025-03-17 +- [TASK004] Implement login page - Completed on 2025-03-20 + +## Abandoned +- [TASK008] Integrate with legacy system - Abandoned due to API deprecation +``` + +### Individual Task Structure + +Each task file follows this format: + +```markdown +# [Task ID] - [Task Name] + +**Status:** [Pending/In Progress/Completed/Abandoned] +**Added:** [Date Added] +**Updated:** [Date Last Updated] + +## Original Request +[The original task description as provided by the user] + +## Thought Process +[Documentation of the discussion and reasoning that shaped the approach to this task] + +## Implementation Plan +- [Step 1] +- [Step 2] +- [Step 3] + +## Progress Tracking + +**Overall Status:** [Not Started/In Progress/Blocked/Completed] - [Completion Percentage] + +### Subtasks +| ID | Description | Status | Updated | Notes | +|----|-------------|--------|---------|-------| +| 1.1 | [Subtask description] | [Complete/In Progress/Not Started/Blocked] | [Date] | [Any relevant notes] | +| 1.2 | [Subtask description] | [Complete/In Progress/Not Started/Blocked] | [Date] | [Any relevant notes] | +| 1.3 | [Subtask description] | [Complete/In Progress/Not Started/Blocked] | [Date] | [Any relevant notes] | + +## Progress Log +### [Date] +- Updated subtask 1.1 status to Complete +- Started work on subtask 1.2 +- Encountered issue with [specific problem] +- Made decision to [approach/solution] + +### [Date] +- [Additional updates as work progresses] +``` + +**Important**: I must update both the subtask status table AND the progress log when making progress on a task. The subtask table provides a quick visual reference of current status, while the progress log captures the narrative and details of the work process. When providing updates, I should: + +1. Update the overall task status and completion percentage +2. Update the status of relevant subtasks with the current date +3. Add a new entry to the progress log with specific details about what was accomplished, challenges encountered, and decisions made +4. Update the task status in the _index.md file to reflect current progress + +These detailed progress updates ensure that after memory resets, I can quickly understand the exact state of each task and continue work without losing context. + +### Task Commands + +When you request **add task** or use the command **create task**, I will: +1. Create a new task file with a unique Task ID in the tasks/ folder +2. Document our thought process about the approach +3. Develop an implementation plan +4. Set an initial status +5. Update the _index.md file to include the new task + +For existing tasks, the command **update task [ID]** will prompt me to: +1. Open the specific task file +2. Add a new progress log entry with today's date +3. Update the task status if needed +4. Update the _index.md file to reflect any status changes +5. Integrate any new decisions into the thought process + +To view tasks, the command **show tasks [filter]** will: +1. Display a filtered list of tasks based on the specified criteria +2. Valid filters include: + - **all** - Show all tasks regardless of status + - **active** - Show only tasks with "In Progress" status + - **pending** - Show only tasks with "Pending" status + - **completed** - Show only tasks with "Completed" status + - **blocked** - Show only tasks with "Blocked" status + - **recent** - Show tasks updated in the last week + - **tag:[tagname]** - Show tasks with a specific tag + - **priority:[level]** - Show tasks with specified priority level +3. The output will include: + - Task ID and name + - Current status and completion percentage + - Last updated date + - Next pending subtask (if applicable) +4. Example usage: **show tasks active** or **show tasks tag:frontend** + +REMEMBER: After every memory reset, I begin completely fresh. The Memory Bank is my only link to previous work. It must be maintained with precision and clarity, as my effectiveness depends entirely on its accuracy. \ No newline at end of file diff --git a/.github/monorepo-copilot-instruction.md b/.github/monorepo-copilot-instruction.md new file mode 100644 index 0000000..7cb7807 --- /dev/null +++ b/.github/monorepo-copilot-instruction.md @@ -0,0 +1,288 @@ +# Monorepo Custom Instructions + +## Repository Structure + +This monorepo contains multiple applications and shared libraries organized under the following structure: + +- `apps/` - Application projects + - `web-dashboard/` - React-based admin dashboard + - `mobile-app/` - React Native mobile application + - `api-gateway/` - Node.js API gateway service + - `worker-service/` - Background job processor +- `packages/` - Shared libraries and components + - `ui-components/` - Reusable UI component library + - `data-models/` - TypeScript type definitions and schemas + - `utils/` - Common utility functions + - `config/` - Shared configuration files +- `infrastructure/` - Infrastructure as Code + - `terraform/` - Terraform modules + - `kubernetes/` - K8s manifests and Helm charts +- `docs/` - Documentation + - `architecture/` - System architecture diagrams + - `api-specs/` - OpenAPI specifications + - `runbooks/` - Operational procedures + +## Code Standards and Practices + +### General Principles + +1. **Consistency Across Projects**: Maintain consistent coding styles, patterns, and conventions across all applications and packages in the monorepo. + +2. **Shared Code Philosophy**: Before duplicating code, always consider if it belongs in a shared package under `packages/`. + +3. **Dependency Management**: + - Use workspace protocol for internal dependencies (e.g., `"@acme/ui-components": "workspace:*"`) + - Keep external dependencies synchronized across projects where possible + - Document any intentional version discrepancies + +4. **Incremental Changes**: When modifying shared packages, consider the impact on all consuming applications and update them accordingly. + +### TypeScript Guidelines + +- Use strict mode enabled in all `tsconfig.json` files +- Prefer interfaces over types for object shapes +- Use `unknown` instead of `any` when type is truly unknown +- Export types from `packages/data-models` for cross-project usage + +### Testing Standards + +- **Unit Tests**: Required for all business logic in `packages/` and `apps/*/src/services/` +- **Integration Tests**: Required for API endpoints and database interactions +- **E2E Tests**: Required for critical user flows in web and mobile apps +- **Coverage Threshold**: Maintain minimum 80% code coverage for shared packages + +### Architecture Patterns + +#### Shared Package Development + +When creating or modifying packages under `packages/`: + +1. Ensure the package has a clear, single responsibility +2. Include comprehensive README.md with usage examples +3. Export a clean public API through index.ts +4. Version changes according to semantic versioning +5. Update CHANGELOG.md with all modifications + +#### Cross-Package Dependencies + +- Packages should depend on other packages sparingly +- Avoid circular dependencies at all costs +- Document package dependency graph in `docs/architecture/package-dependencies.md` + +#### Application Development + +When working on applications under `apps/`: + +1. Follow the established folder structure: + ``` + apps/[app-name]/ + src/ + components/ # Application-specific components + services/ # Business logic and API clients + hooks/ # Custom React hooks (if applicable) + utils/ # App-specific utilities + types/ # Local type definitions + config/ # Configuration files + tests/ + public/ # Static assets + ``` + +2. Import shared components from `@acme/ui-components` +3. Import shared utilities from `@acme/utils` +4. Keep application-specific code within the app directory + +### API Development Standards + +For services in `apps/api-gateway/` and `apps/worker-service/`: + +- Follow RESTful principles for HTTP APIs +- Use OpenAPI 3.0 specifications stored in `docs/api-specs/` +- Implement proper error handling with standardized error codes +- Use dependency injection for service instantiation +- Validate all inputs using schemas from `@acme/data-models` + +### Database and Data Layer + +- All database schemas are defined in `packages/data-models/src/schemas/` +- Use migrations for schema changes (stored in respective app's `migrations/` directory) +- Abstract database access behind repository patterns +- Never expose raw database queries in API controllers + +### Environment Configuration + +- Environment variables are documented in `docs/configuration.md` +- Each app has its own `.env.example` file +- Shared configuration constants live in `packages/config/` +- Use different configs for: development, staging, production + +## Build and Development + +### Monorepo Commands + +- `npm run build` - Build all packages and applications +- `npm run build:packages` - Build only shared packages +- `npm run test` - Run all tests across the monorepo +- `npm run test:watch` - Run tests in watch mode +- `npm run lint` - Lint all code +- `npm run lint:fix` - Auto-fix linting issues + +### Working with Individual Apps + +To work on a specific application: + +```bash +cd apps/web-dashboard +npm run dev # Start development server +npm run test # Run app-specific tests +npm run build # Build for production +``` + +### Package Development Workflow + +When modifying a shared package: + +1. Make changes to the package code +2. Run package tests: `npm run test` (from package directory) +3. Build the package: `npm run build` +4. Test in consuming apps before committing +5. Update version and CHANGELOG + +## Git Workflow + +### Branch Naming Convention + +- `feature/[ticket-id]-brief-description` - New features +- `fix/[ticket-id]-brief-description` - Bug fixes +- `refactor/[component-name]` - Code refactoring +- `docs/[topic]` - Documentation updates +- `chore/[task]` - Maintenance tasks + +### Commit Messages + +Follow conventional commits: +- `feat(web-dashboard): add user profile page` +- `fix(ui-components): resolve button alignment issue` +- `refactor(utils): optimize date formatting function` +- `test(api-gateway): add integration tests for auth` +- `docs(architecture): update deployment diagram` + +### Pull Request Guidelines + +- Title should follow commit message format +- Include issue/ticket reference in description +- List affected apps and packages +- Provide testing instructions +- Request review from package owners when modifying shared code +- Ensure CI passes before merging + +## CI/CD Pipeline + +### Continuous Integration + +Our CI pipeline (`.github/workflows/ci.yml`) runs: +1. Dependency installation +2. Linting across all projects +3. Type checking +4. Unit and integration tests +5. Build verification +6. Security scanning + +### Deployment Strategy + +- **Shared Packages**: Published to private npm registry on merge to main +- **Applications**: Deployed based on change detection + - `web-dashboard`: Deploys to Vercel + - `mobile-app`: Builds and publishes to app stores + - `api-gateway` & `worker-service`: Deploy to Kubernetes cluster + +### Change Detection + +Only affected applications are deployed: +- Changes in `packages/*` trigger builds for all consuming apps +- Changes in `apps/web-dashboard/*` only trigger web-dashboard deployment +- Changes in `infrastructure/*` trigger infrastructure updates + +## Security Practices + +1. **Secrets Management**: Never commit secrets; use environment variables and secret management services +2. **Dependency Scanning**: Regularly run `npm audit` and address vulnerabilities +3. **Code Review**: All changes require review from at least one team member +4. **Authentication**: Use OAuth 2.0 / OIDC for user authentication +5. **Authorization**: Implement RBAC (Role-Based Access Control) consistently + +## Performance Considerations + +- **Bundle Size**: Monitor bundle sizes for web and mobile apps +- **Code Splitting**: Implement lazy loading for routes and heavy components +- **Shared Package Size**: Keep shared packages lean; don't include unnecessary dependencies +- **Caching**: Implement appropriate caching strategies at API and UI levels + +## Documentation Requirements + +When making changes, update relevant documentation: + +- README.md files in modified packages/apps +- API specifications in `docs/api-specs/` for API changes +- Architecture diagrams in `docs/architecture/` for structural changes +- Runbooks in `docs/runbooks/` for operational changes + +## Common Tasks + +### Adding a New Shared Package + +1. Create directory under `packages/[package-name]` +2. Initialize with package.json using workspace naming convention +3. Set up tsconfig.json inheriting from root config +4. Create src/index.ts as entry point +5. Add README.md with purpose and usage +6. Add to root package.json workspaces if needed +7. Update package dependency documentation + +### Adding a New Application + +1. Create directory under `apps/[app-name]` +2. Initialize with appropriate framework scaffolding +3. Configure to use shared packages from workspace +4. Add to CI/CD pipeline configuration +5. Create deployment configuration in `infrastructure/` +6. Document in `docs/architecture/` + +### Upgrading Dependencies + +1. Check impact across all workspace projects +2. Update in root package.json for shared dependencies +3. Test each application individually +4. Run full test suite +5. Update lock file +6. Document breaking changes if any + +## Troubleshooting + +### Common Issues + +**Issue**: Changes to shared package not reflecting in app +- **Solution**: Rebuild the package and restart the app dev server + +**Issue**: Type errors after pulling latest changes +- **Solution**: Run `npm install` from root to ensure all dependencies are linked + +**Issue**: Build failures in CI but works locally +- **Solution**: Verify all dependencies are in package.json, not just installed locally + +## Team Practices + +- **Code Ownership**: Each package and app has designated owners listed in CODEOWNERS file +- **Sync Meetings**: Architecture changes are discussed in weekly sync meetings +- **RFC Process**: Significant architectural changes require RFC in `docs/rfcs/` +- **Knowledge Sharing**: Document learnings and patterns in team wiki + +## References + +- [Monorepo Architecture Overview](./docs/architecture/monorepo-design.md) +- [Package Development Guide](./docs/guides/package-development.md) +- [Deployment Runbook](./docs/runbooks/deployment.md) +- [Troubleshooting Guide](./docs/guides/troubleshooting.md) + +--- + +**Note**: This is a living document. Update it as the monorepo structure and practices evolve. diff --git a/.github/prompts/add-educational-comments.prompt.md b/.github/prompts/add-educational-comments.prompt.md new file mode 100644 index 0000000..2469d18 --- /dev/null +++ b/.github/prompts/add-educational-comments.prompt.md @@ -0,0 +1,129 @@ +--- +agent: 'agent' +description: 'Add educational comments to the file specified, or prompt asking for file to comment if one is not provided.' +tools: ['edit/editFiles', 'fetch', 'todos'] +--- + +# Add Educational Comments + +Add educational comments to code files so they become effective learning resources. When no file is provided, request one and offer a numbered list of close matches for quick selection. + +## Role + +You are an expert educator and technical writer. You can explain programming topics to beginners, intermediate learners, and advanced practitioners. You adapt tone and detail to match the user's configured knowledge levels while keeping guidance encouraging and instructional. + +- Provide foundational explanations for beginners +- Add practical insights and best practices for intermediate users +- Offer deeper context (performance, architecture, language internals) for advanced users +- Suggest improvements only when they meaningfully support understanding +- Always obey the **Educational Commenting Rules** + +## Objectives + +1. Transform the provided file by adding educational comments aligned with the configuration. +2. Maintain the file's structure, encoding, and build correctness. +3. Increase the total line count by **125%** using educational comments only (up to 400 new lines). For files already processed with this prompt, update existing notes instead of reapplying the 125% rule. + +### Line Count Guidance + +- Default: add lines so the file reaches 125% of its original length. +- Hard limit: never add more than 400 educational comment lines. +- Large files: when the file exceeds 1,000 lines, aim for no more than 300 educational comment lines. +- Previously processed files: revise and improve current comments; do not chase the 125% increase again. + +## Educational Commenting Rules + +### Encoding and Formatting + +- Determine the file's encoding before editing and keep it unchanged. +- Use only characters available on a standard QWERTY keyboard. +- Do not insert emojis or other special symbols. +- Preserve the original end-of-line style (LF or CRLF). +- Keep single-line comments on a single line. +- Maintain the indentation style required by the language (Python, Haskell, F#, Nim, Cobra, YAML, Makefiles, etc.). +- When instructed with `Line Number Referencing = yes`, prefix each new comment with `Note ` (e.g., `Note 1`). + +### Content Expectations + +- Focus on lines and blocks that best illustrate language or platform concepts. +- Explain the "why" behind syntax, idioms, and design choices. +- Reinforce previous concepts only when it improves comprehension (`Repetitiveness`). +- Highlight potential improvements gently and only when they serve an educational purpose. +- If `Line Number Referencing = yes`, use note numbers to connect related explanations. + +### Safety and Compliance + +- Do not alter namespaces, imports, module declarations, or encoding headers in a way that breaks execution. +- Avoid introducing syntax errors (for example, Python encoding errors per [PEP 263](https://peps.python.org/pep-0263/)). +- Input data as if typed on the user's keyboard. + +## Workflow + +1. **Confirm Inputs** – Ensure at least one target file is provided. If missing, respond with: `Please provide a file or files to add educational comments to. Preferably as chat variable or attached context.` +2. **Identify File(s)** – If multiple matches exist, present an ordered list so the user can choose by number or name. +3. **Review Configuration** – Combine the prompt defaults with user-specified values. Interpret obvious typos (e.g., `Line Numer`) using context. +4. **Plan Comments** – Decide which sections of the code best support the configured learning goals. +5. **Add Comments** – Apply educational comments following the configured detail, repetitiveness, and knowledge levels. Respect indentation and language syntax. +6. **Validate** – Confirm formatting, encoding, and syntax remain intact. Ensure the 125% rule and line limits are satisfied. + +## Configuration Reference + +### Properties + +- **Numeric Scale**: `1-3` +- **Numeric Sequence**: `ordered` (higher numbers represent higher knowledge or intensity) + +### Parameters + +- **File Name** (required): Target file(s) for commenting. +- **Comment Detail** (`1-3`): Depth of each explanation (default `2`). +- **Repetitiveness** (`1-3`): Frequency of revisiting similar concepts (default `2`). +- **Educational Nature**: Domain focus (default `Computer Science`). +- **User Knowledge** (`1-3`): General CS/SE familiarity (default `2`). +- **Educational Level** (`1-3`): Familiarity with the specific language or framework (default `1`). +- **Line Number Referencing** (`yes/no`): Prepend comments with note numbers when `yes` (default `yes`). +- **Nest Comments** (`yes/no`): Whether to indent comments inside code blocks (default `yes`). +- **Fetch List**: Optional URLs for authoritative references. + +If a configurable element is missing, use the default value. When new or unexpected options appear, apply your **Educational Role** to interpret them sensibly and still achieve the objective. + +### Default Configuration + +- File Name +- Comment Detail = 2 +- Repetitiveness = 2 +- Educational Nature = Computer Science +- User Knowledge = 2 +- Educational Level = 1 +- Line Number Referencing = yes +- Nest Comments = yes +- Fetch List: + - + +## Examples + +### Missing File + +```text +[user] +> /add-educational-comments +[agent] +> Please provide a file or files to add educational comments to. Preferably as chat variable or attached context. +``` + +### Custom Configuration + +```text +[user] +> /add-educational-comments #file:output_name.py Comment Detail = 1, Repetitiveness = 1, Line Numer = no +``` + +Interpret `Line Numer = no` as `Line Number Referencing = no` and adjust behavior accordingly while maintaining all rules above. + +## Final Checklist + +- Ensure the transformed file satisfies the 125% rule without exceeding limits. +- Keep encoding, end-of-line style, and indentation unchanged. +- Confirm all educational comments follow the configuration and the **Educational Commenting Rules**. +- Provide clarifying suggestions only when they aid learning. +- When a file has been processed before, refine existing comments instead of expanding line count. diff --git a/.github/prompts/az-cost-optimize.prompt.md b/.github/prompts/az-cost-optimize.prompt.md new file mode 100644 index 0000000..5e1d9ae --- /dev/null +++ b/.github/prompts/az-cost-optimize.prompt.md @@ -0,0 +1,305 @@ +--- +agent: 'agent' +description: 'Analyze Azure resources used in the app (IaC files and/or resources in a target rg) and optimize costs - creating GitHub issues for identified optimizations.' +--- + +# Azure Cost Optimize + +This workflow analyzes Infrastructure-as-Code (IaC) files and Azure resources to generate cost optimization recommendations. It creates individual GitHub issues for each optimization opportunity plus one EPIC issue to coordinate implementation, enabling efficient tracking and execution of cost savings initiatives. + +## Prerequisites +- Azure MCP server configured and authenticated +- GitHub MCP server configured and authenticated +- Target GitHub repository identified +- Azure resources deployed (IaC files optional but helpful) +- Prefer Azure MCP tools (`azmcp-*`) over direct Azure CLI when available + +## Workflow Steps + +### Step 1: Get Azure Best Practices +**Action**: Retrieve cost optimization best practices before analysis +**Tools**: Azure MCP best practices tool +**Process**: +1. **Load Best Practices**: + - Execute `azmcp-bestpractices-get` to get some of the latest Azure optimization guidelines. This may not cover all scenarios but provides a foundation. + - Use these practices to inform subsequent analysis and recommendations as much as possible + - Reference best practices in optimization recommendations, either from the MCP tool output or general Azure documentation + +### Step 2: Discover Azure Infrastructure +**Action**: Dynamically discover and analyze Azure resources and configurations +**Tools**: Azure MCP tools + Azure CLI fallback + Local file system access +**Process**: +1. **Resource Discovery**: + - Execute `azmcp-subscription-list` to find available subscriptions + - Execute `azmcp-group-list --subscription ` to find resource groups + - Get a list of all resources in the relevant group(s): + - Use `az resource list --subscription --resource-group ` + - For each resource type, use MCP tools first if possible, then CLI fallback: + - `azmcp-cosmos-account-list --subscription ` - Cosmos DB accounts + - `azmcp-storage-account-list --subscription ` - Storage accounts + - `azmcp-monitor-workspace-list --subscription ` - Log Analytics workspaces + - `azmcp-keyvault-key-list` - Key Vaults + - `az webapp list` - Web Apps (fallback - no MCP tool available) + - `az appservice plan list` - App Service Plans (fallback) + - `az functionapp list` - Function Apps (fallback) + - `az sql server list` - SQL Servers (fallback) + - `az redis list` - Redis Cache (fallback) + - ... and so on for other resource types + +2. **IaC Detection**: + - Use `file_search` to scan for IaC files: "**/*.bicep", "**/*.tf", "**/main.json", "**/*template*.json" + - Parse resource definitions to understand intended configurations + - Compare against discovered resources to identify discrepancies + - Note presence of IaC files for implementation recommendations later on + - Do NOT use any other file from the repository, only IaC files. Using other files is NOT allowed as it is not a source of truth. + - If you do not find IaC files, then STOP and report no IaC files found to the user. + +3. **Configuration Analysis**: + - Extract current SKUs, tiers, and settings for each resource + - Identify resource relationships and dependencies + - Map resource utilization patterns where available + +### Step 3: Collect Usage Metrics & Validate Current Costs +**Action**: Gather utilization data AND verify actual resource costs +**Tools**: Azure MCP monitoring tools + Azure CLI +**Process**: +1. **Find Monitoring Sources**: + - Use `azmcp-monitor-workspace-list --subscription ` to find Log Analytics workspaces + - Use `azmcp-monitor-table-list --subscription --workspace --table-type "CustomLog"` to discover available data + +2. **Execute Usage Queries**: + - Use `azmcp-monitor-log-query` with these predefined queries: + - Query: "recent" for recent activity patterns + - Query: "errors" for error-level logs indicating issues + - For custom analysis, use KQL queries: + ```kql + // CPU utilization for App Services + AppServiceAppLogs + | where TimeGenerated > ago(7d) + | summarize avg(CpuTime) by Resource, bin(TimeGenerated, 1h) + + // Cosmos DB RU consumption + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.DOCUMENTDB" + | where TimeGenerated > ago(7d) + | summarize avg(RequestCharge) by Resource + + // Storage account access patterns + StorageBlobLogs + | where TimeGenerated > ago(7d) + | summarize RequestCount=count() by AccountName, bin(TimeGenerated, 1d) + ``` + +3. **Calculate Baseline Metrics**: + - CPU/Memory utilization averages + - Database throughput patterns + - Storage access frequency + - Function execution rates + +4. **VALIDATE CURRENT COSTS**: + - Using the SKU/tier configurations discovered in Step 2 + - Look up current Azure pricing at https://azure.microsoft.com/pricing/ or use `az billing` commands + - Document: Resource → Current SKU → Estimated monthly cost + - Calculate realistic current monthly total before proceeding to recommendations + +### Step 4: Generate Cost Optimization Recommendations +**Action**: Analyze resources to identify optimization opportunities +**Tools**: Local analysis using collected data +**Process**: +1. **Apply Optimization Patterns** based on resource types found: + + **Compute Optimizations**: + - App Service Plans: Right-size based on CPU/memory usage + - Function Apps: Premium → Consumption plan for low usage + - Virtual Machines: Scale down oversized instances + + **Database Optimizations**: + - Cosmos DB: + - Provisioned → Serverless for variable workloads + - Right-size RU/s based on actual usage + - SQL Database: Right-size service tiers based on DTU usage + + **Storage Optimizations**: + - Implement lifecycle policies (Hot → Cool → Archive) + - Consolidate redundant storage accounts + - Right-size storage tiers based on access patterns + + **Infrastructure Optimizations**: + - Remove unused/redundant resources + - Implement auto-scaling where beneficial + - Schedule non-production environments + +2. **Calculate Evidence-Based Savings**: + - Current validated cost → Target cost = Savings + - Document pricing source for both current and target configurations + +3. **Calculate Priority Score** for each recommendation: + ``` + Priority Score = (Value Score × Monthly Savings) / (Risk Score × Implementation Days) + + High Priority: Score > 20 + Medium Priority: Score 5-20 + Low Priority: Score < 5 + ``` + +4. **Validate Recommendations**: + - Ensure Azure CLI commands are accurate + - Verify estimated savings calculations + - Assess implementation risks and prerequisites + - Ensure all savings calculations have supporting evidence + +### Step 5: User Confirmation +**Action**: Present summary and get approval before creating GitHub issues +**Process**: +1. **Display Optimization Summary**: + ``` + 🎯 Azure Cost Optimization Summary + + 📊 Analysis Results: + • Total Resources Analyzed: X + • Current Monthly Cost: $X + • Potential Monthly Savings: $Y + • Optimization Opportunities: Z + • High Priority Items: N + + 🏆 Recommendations: + 1. [Resource]: [Current SKU] → [Target SKU] = $X/month savings - [Risk Level] | [Implementation Effort] + 2. [Resource]: [Current Config] → [Target Config] = $Y/month savings - [Risk Level] | [Implementation Effort] + 3. [Resource]: [Current Config] → [Target Config] = $Z/month savings - [Risk Level] | [Implementation Effort] + ... and so on + + 💡 This will create: + • Y individual GitHub issues (one per optimization) + • 1 EPIC issue to coordinate implementation + + ❓ Proceed with creating GitHub issues? (y/n) + ``` + +2. **Wait for User Confirmation**: Only proceed if user confirms + +### Step 6: Create Individual Optimization Issues +**Action**: Create separate GitHub issues for each optimization opportunity. Label them with "cost-optimization" (green color), "azure" (blue color). +**MCP Tools Required**: `create_issue` for each recommendation +**Process**: +1. **Create Individual Issues** using this template: + + **Title Format**: `[COST-OPT] [Resource Type] - [Brief Description] - $X/month savings` + + **Body Template**: + ```markdown + ## 💰 Cost Optimization: [Brief Title] + + **Monthly Savings**: $X | **Risk Level**: [Low/Medium/High] | **Implementation Effort**: X days + + ### 📋 Description + [Clear explanation of the optimization and why it's needed] + + ### 🔧 Implementation + + **IaC Files Detected**: [Yes/No - based on file_search results] + + ```bash + # If IaC files found: Show IaC modifications + deployment + # File: infrastructure/bicep/modules/app-service.bicep + # Change: sku.name: 'S3' → 'B2' + az deployment group create --resource-group [rg] --template-file infrastructure/bicep/main.bicep + + # If no IaC files: Direct Azure CLI commands + warning + # ⚠️ No IaC files found. If they exist elsewhere, modify those instead. + az appservice plan update --name [plan] --sku B2 + ``` + + ### 📊 Evidence + - Current Configuration: [details] + - Usage Pattern: [evidence from monitoring data] + - Cost Impact: $X/month → $Y/month + - Best Practice Alignment: [reference to Azure best practices if applicable] + + ### ✅ Validation Steps + - [ ] Test in non-production environment + - [ ] Verify no performance degradation + - [ ] Confirm cost reduction in Azure Cost Management + - [ ] Update monitoring and alerts if needed + + ### ⚠️ Risks & Considerations + - [Risk 1 and mitigation] + - [Risk 2 and mitigation] + + **Priority Score**: X | **Value**: X/10 | **Risk**: X/10 + ``` + +### Step 7: Create EPIC Coordinating Issue +**Action**: Create master issue to track all optimization work. Label it with "cost-optimization" (green color), "azure" (blue color), and "epic" (purple color). +**MCP Tools Required**: `create_issue` for EPIC +**Note about mermaid diagrams**: Ensure you verify mermaid syntax is correct and create the diagrams taking accessibility guidelines into account (styling, colors, etc.). +**Process**: +1. **Create EPIC Issue**: + + **Title**: `[EPIC] Azure Cost Optimization Initiative - $X/month potential savings` + + **Body Template**: + ```markdown + # 🎯 Azure Cost Optimization EPIC + + **Total Potential Savings**: $X/month | **Implementation Timeline**: X weeks + + ## 📊 Executive Summary + - **Resources Analyzed**: X + - **Optimization Opportunities**: Y + - **Total Monthly Savings Potential**: $X + - **High Priority Items**: N + + ## 🏗️ Current Architecture Overview + + ```mermaid + graph TB + subgraph "Resource Group: [name]" + [Generated architecture diagram showing current resources and costs] + end + ``` + + ## 📋 Implementation Tracking + + ### 🚀 High Priority (Implement First) + - [ ] #[issue-number]: [Title] - $X/month savings + - [ ] #[issue-number]: [Title] - $X/month savings + + ### ⚡ Medium Priority + - [ ] #[issue-number]: [Title] - $X/month savings + - [ ] #[issue-number]: [Title] - $X/month savings + + ### 🔄 Low Priority (Nice to Have) + - [ ] #[issue-number]: [Title] - $X/month savings + + ## 📈 Progress Tracking + - **Completed**: 0 of Y optimizations + - **Savings Realized**: $0 of $X/month + - **Implementation Status**: Not Started + + ## 🎯 Success Criteria + - [ ] All high-priority optimizations implemented + - [ ] >80% of estimated savings realized + - [ ] No performance degradation observed + - [ ] Cost monitoring dashboard updated + + ## 📝 Notes + - Review and update this EPIC as issues are completed + - Monitor actual vs. estimated savings + - Consider scheduling regular cost optimization reviews + ``` + +## Error Handling +- **Cost Validation**: If savings estimates lack supporting evidence or seem inconsistent with Azure pricing, re-verify configurations and pricing sources before proceeding +- **Azure Authentication Failure**: Provide manual Azure CLI setup steps +- **No Resources Found**: Create informational issue about Azure resource deployment +- **GitHub Creation Failure**: Output formatted recommendations to console +- **Insufficient Usage Data**: Note limitations and provide configuration-based recommendations only + +## Success Criteria +- ✅ All cost estimates verified against actual resource configurations and Azure pricing +- ✅ Individual issues created for each optimization (trackable and assignable) +- ✅ EPIC issue provides comprehensive coordination and tracking +- ✅ All recommendations include specific, executable Azure CLI commands +- ✅ Priority scoring enables ROI-focused implementation +- ✅ Architecture diagram accurately represents current state +- ✅ User confirmation prevents unwanted issue creation diff --git a/.github/prompts/create-readme.prompt.md b/.github/prompts/create-readme.prompt.md new file mode 100644 index 0000000..1a92ca1 --- /dev/null +++ b/.github/prompts/create-readme.prompt.md @@ -0,0 +1,21 @@ +--- +agent: 'agent' +description: 'Create a README.md file for the project' +--- + +## Role + +You're a senior expert software engineer with extensive experience in open source projects. You always make sure the README files you write are appealing, informative, and easy to read. + +## Task + +1. Take a deep breath, and review the entire project and workspace, then create a comprehensive and well-structured README.md file for the project. +2. Take inspiration from these readme files for the structure, tone and content: + - https://raw.githubusercontent.com/Azure-Samples/serverless-chat-langchainjs/refs/heads/main/README.md + - https://raw.githubusercontent.com/Azure-Samples/serverless-recipes-javascript/refs/heads/main/README.md + - https://raw.githubusercontent.com/sinedied/run-on-output/refs/heads/main/README.md + - https://raw.githubusercontent.com/sinedied/smoke/refs/heads/main/README.md +3. Do not overuse emojis, and keep the readme concise and to the point. +4. Do not include sections like "LICENSE", "CONTRIBUTING", "CHANGELOG", etc. There are dedicated files for those sections. +5. Use GFM (GitHub Flavored Markdown) for formatting, and GitHub admonition syntax (https://github.com/orgs/community/discussions/16925) where appropriate. +6. If you find a logo or icon for the project, use it in the readme's header. diff --git a/.github/prompts/plan-testCaseCoverage.prompt.md b/.github/prompts/plan-testCaseCoverage.prompt.md new file mode 100644 index 0000000..3671dde --- /dev/null +++ b/.github/prompts/plan-testCaseCoverage.prompt.md @@ -0,0 +1,146 @@ +# Comprehensive Test Case Documentation and Coverage Strategy + +## Plan Overview + +Draft a systematic approach to identify, document, and implement comprehensive test cases covering 100% of code logic and edge cases. + +## Steps + +1. **Create test case inventory document** in [.github/TEST_CASES.md](.github/TEST_CASES.md) organized by service/feature with categories: Happy Path, Edge Cases, Error Handling, and Concurrency. + +2. **Document TaskItem scoring edge cases**: Test all combinations of Priority (negative, 0, 1-3, >3), Status (pending, in-progress, completed, invalid), Age (0, 7, 14, 30 days), IsCompleted flag, and title word length. + +3. **Document InMemoryTaskService tests**: Cover ID generation thread-safety, concurrent operations, null/empty collections, and boundary conditions (max int ID, duplicate operations). + +4. **Document CsvTaskService tests**: Cover file I/O (missing file, permission denied, corrupted CSV), CSV escaping (quotes, commas, newlines in values), concurrent file access, and recovery scenarios. + +5. **Document API endpoint tests**: Integration tests for all 5 endpoints (GET all, GET by ID, POST create, PUT update, DELETE), status query filtering, response codes, null validations, and request body validation. + +6. **Document validation tests**: Null/empty title handling, negative/zero priority values, invalid status strings, and edge case date values. + +7. **Organize by test class**: Map each test case to the appropriate xUnit test class (create new ones for gaps). + +## Further Considerations + +### 1. Test Case Taxonomy +Question: Should the document group tests by (A) service/component, (B) test type (unit/integration/edge-case), or (C) risk area (data integrity, performance, concurrency)? + +**Recommendation**: Use (A) with subsections for test types. + +### 2. Coverage Metrics +Would you like the document to include coverage % targets per service and a checklist to track implementation progress? + +### 3. CSV File Handling Scope +CsvTaskService has no tests currently. Should this get the same depth of testing as InMemoryTaskService, or a focused subset given it's for persistence demonstration? + +## Key Findings from Analysis + +### Current Test Coverage Status +- **InMemoryTaskService**: 7 tests covering basic CRUD operations +- **TaskItem.GetScore()**: 7 tests covering priority/status combinations +- **CsvTaskService**: Zero tests (gap) +- **API Endpoints**: Zero tests (gap) +- **Validation**: Zero tests (gap) +- **Concurrency**: Zero tests (gap) + +### Critical Areas Requiring Test Coverage + +#### TaskItem.GetScore() Edge Cases +- Priority values: negative, 0, 1, 2, 3, >3 +- Status values: "pending", "in-progress", "completed", invalid/custom +- Age-based escalation: 0, 7, 14, 30+ days old +- Title analysis: word length variations, empty title, very long title +- IsCompleted flag combinations with each status +- Score floor validation (Math.Max(0, ...)) + +#### InMemoryTaskService +- **ID Generation**: Thread-safe increment, max int boundary +- **Concurrent Operations**: Multiple threads reading/writing simultaneously +- **Edge Collections**: Empty list operations, single item, duplicate creates +- **Update/Delete**: Non-existent IDs, null taskItem parameter +- **GetAll with Filtering**: Status query parameter case sensitivity + +#### CsvTaskService (Persistence Layer) +- **File Operations**: Missing file, read-only file, no disk space +- **CSV Escaping**: Titles with commas, quotes, newlines, special characters +- **Data Corruption**: Malformed CSV, invalid record format, incomplete rows +- **Concurrent Access**: Multiple processes reading/writing file simultaneously +- **State Management**: ID counter persistence, recovery from crashes +- **Large Files**: Performance with 1000+, 10000+ records + +#### API Endpoints +- **GET /tasks**: Returns all tasks, empty collection, large result sets +- **GET /tasks?status=X**: Case sensitivity, non-existent status, empty result +- **GET /tasks/{id}**: Valid ID, invalid ID, negative ID, max int ID +- **POST /tasks**: Valid creation, null/empty title, all fields, minimal fields +- **PUT /tasks/{id}**: Valid update, invalid ID, partial update, null values +- **DELETE /tasks/{id}**: Valid delete, invalid ID, delete non-existent + +#### Input Validation +- Null or empty Title field +- Negative, zero, and extreme priority values +- Invalid status strings +- Future-dated CreatedAt values +- Unicode/special characters in title and description + +#### Error Handling & Recovery +- File I/O errors in CSV service +- Concurrent modification exceptions +- Invalid request bodies +- Missing or malformed JSON +- Network timeouts (if applicable) + +#### Performance & Boundary Conditions +- Very large task lists (10000+ items) +- Very long titles/descriptions +- Rapid concurrent operations +- Repeated operations on same task + +## Test Organization Structure + +``` +DotnetApp.Tests/ +├── Models/ +│ └── TaskItemTest.cs (EXISTING - expand) +│ ├── GetScore_Priority_* (existing 7 tests) +│ └── [ADD] Edge cases, boundaries, status variations +├── Services/ +│ ├── InMemoryTaskServiceTests.cs (EXISTING - expand) +│ │ ├── CRUD operations (existing 7 tests) +│ │ └── [ADD] Concurrency, edge cases, filtering +│ └── CsvTaskServiceTests.cs (NEW) +│ ├── File I/O operations +│ ├── CSV escaping +│ ├── Concurrent access +│ └── Recovery scenarios +└── Integration/ + └── TaskApiEndpointTests.cs (NEW) + ├── GET /tasks + ├── GET /tasks?status=X + ├── GET /tasks/{id} + ├── POST /tasks + ├── PUT /tasks/{id} + └── DELETE /tasks/{id} +``` + +## Implementation Priority + +**Phase 1 (Critical Path)**: TaskItem edge cases + API endpoints +- Covers core business logic and HTTP contract +- Easiest to implement without infrastructure changes + +**Phase 2 (Data Integrity)**: CsvTaskService + Validation +- Ensures persistence layer reliability +- Validates data correctness + +**Phase 3 (Robustness)**: Concurrency + Error scenarios +- Stress tests and failure modes +- Production readiness + +## Success Criteria + +- [ ] All test cases documented in TEST_CASES.md +- [ ] Coverage report shows >90% line coverage +- [ ] All identified edge cases have corresponding tests +- [ ] All critical gaps filled with tests +- [ ] Documentation includes implementation checklist diff --git a/.github/prompts/review.prompt.md b/.github/prompts/review.prompt.md new file mode 100644 index 0000000..6ad3507 --- /dev/null +++ b/.github/prompts/review.prompt.md @@ -0,0 +1,5 @@ +Secure REST API review: +- Ensure all endpoints are protected by authentication and authorization +- Validate all user inputs and sanitize data +- Implement rate limiting and throttling +- Implement logging and monitoring for security events \ No newline at end of file diff --git a/.github/prompts/suggest-awesome-github-copilot-agents.prompt.md b/.github/prompts/suggest-awesome-github-copilot-agents.prompt.md new file mode 100644 index 0000000..dc4a14d --- /dev/null +++ b/.github/prompts/suggest-awesome-github-copilot-agents.prompt.md @@ -0,0 +1,72 @@ +--- +agent: "agent" +description: "Suggest relevant GitHub Copilot Custom Agents files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing custom agents in this repository." +tools: ["edit", "search", "runCommands", "runTasks", "changes", "testFailure", "openSimpleBrowser", "fetch", "githubRepo", "todos"] +--- + +# Suggest Awesome GitHub Copilot Custom Agents + +Analyze current repository context and suggest relevant Custom Agents files from the [GitHub awesome-copilot repository](https://github.com/github/awesome-copilot/blob/main/docs/README.agents.md) that are not already available in this repository. Custom Agent files are located in the [agents](https://github.com/github/awesome-copilot/tree/main/agents) folder of the awesome-copilot repository. + +## Process + +1. **Fetch Available Custom Agents**: Extract Custom Agents list and descriptions from [awesome-copilot README.agents.md](https://github.com/github/awesome-copilot/blob/main/docs/README.agents.md). Must use `fetch` tool. +2. **Scan Local Custom Agents**: Discover existing custom agent files in `.github/agents/` folder +3. **Extract Descriptions**: Read front matter from local custom agent files to get descriptions +4. **Analyze Context**: Review chat history, repository files, and current project needs +5. **Compare Existing**: Check against custom agents already available in this repository +6. **Match Relevance**: Compare available custom agents against identified patterns and requirements +7. **Present Options**: Display relevant custom agents with descriptions, rationale, and availability status +8. **Validate**: Ensure suggested agents would add value not already covered by existing agents +9. **Output**: Provide structured table with suggestions, descriptions, and links to both awesome-copilot custom agents and similar local custom agents + **AWAIT** user request to proceed with installation of specific custom agents. DO NOT INSTALL UNLESS DIRECTED TO DO SO. +10. **Download Assets**: For requested agents, automatically download and install individual agents to `.github/agents/` folder. Do NOT adjust content of the files. Use `#todos` tool to track progress. Prioritize use of `#fetch` tool to download assets, but may use `curl` using `#runInTerminal` tool to ensure all content is retrieved. + +## Context Analysis Criteria + +🔍 **Repository Patterns**: + +- Programming languages used (.cs, .js, .py, etc.) +- Framework indicators (ASP.NET, React, Azure, etc.) +- Project types (web apps, APIs, libraries, tools) +- Documentation needs (README, specs, ADRs) + +🗨️ **Chat History Context**: + +- Recent discussions and pain points +- Feature requests or implementation needs +- Code review patterns +- Development workflow requirements + +## Output Format + +Display analysis results in structured table comparing awesome-copilot custom agents with existing repository custom agents: + +| Awesome-Copilot Custom Agent | Description | Already Installed | Similar Local Custom Agent | Suggestion Rationale | +| ------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- | ---------------------------------- | ------------------------------------------------------------- | +| [amplitude-experiment-implementation.agent.md](https://github.com/github/awesome-copilot/blob/main/agents/amplitude-experiment-implementation.agent.md) | This custom agent uses Amplitude's MCP tools to deploy new experiments inside of Amplitude, enabling seamless variant testing capabilities and rollout of product features | ❌ No | None | Would enhance experimentation capabilities within the product | +| [launchdarkly-flag-cleanup.agent.md](https://github.com/github/awesome-copilot/blob/main/agents/launchdarkly-flag-cleanup.agent.md) | Feature flag cleanup agent for LaunchDarkly | ✅ Yes | launchdarkly-flag-cleanup.agent.md | Already covered by existing LaunchDarkly custom agents | + +## Local Agent Discovery Process + +1. List all `*.agent.md` files in `.github/agents/` directory +2. For each discovered file, read front matter to extract `description` +3. Build comprehensive inventory of existing agents +4. Use this inventory to avoid suggesting duplicates + +## Requirements + +- Use `githubRepo` tool to get content from awesome-copilot repository agents folder +- Scan local file system for existing agents in `.github/agents/` directory +- Read YAML front matter from local agent files to extract descriptions +- Compare against existing agents in this repository to avoid duplicates +- Focus on gaps in current agent library coverage +- Validate that suggested agents align with repository's purpose and standards +- Provide clear rationale for each suggestion +- Include links to both awesome-copilot agents and similar local agents +- Don't provide any additional information or context beyond the table and the analysis + +## Icons Reference + +- ✅ Already installed in repo +- ❌ Not installed in repo diff --git a/.github/taming-copilot.md b/.github/taming-copilot.md new file mode 100644 index 0000000..f6012bf --- /dev/null +++ b/.github/taming-copilot.md @@ -0,0 +1,40 @@ +--- +applyTo: '**' +description: 'Prevent Copilot from wreaking havoc across your codebase, keeping it under control.' +--- + +## Core Directives & Hierarchy + +This section outlines the absolute order of operations. These rules have the highest priority and must not be violated. + +1. **Primacy of User Directives**: A direct and explicit command from the user is the highest priority. If the user instructs to use a specific tool, edit a file, or perform a specific search, that command **must be executed without deviation**, even if other rules would suggest it is unnecessary. All other instructions are subordinate to a direct user order. +2. **Factual Verification Over Internal Knowledge**: When a request involves information that could be version-dependent, time-sensitive, or requires specific external data (e.g., library documentation, latest best practices, API details), prioritize using tools to find the current, factual answer over relying on general knowledge. +3. **Adherence to Philosophy**: In the absence of a direct user directive or the need for factual verification, all other rules below regarding interaction, code generation, and modification must be followed. + +## General Interaction & Philosophy + +- **Code on Request Only**: Your default response should be a clear, natural language explanation. Do NOT provide code blocks unless explicitly asked, or if a very small and minimalist example is essential to illustrate a concept. Tool usage is distinct from user-facing code blocks and is not subject to this restriction. +- **Direct and Concise**: Answers must be precise, to the point, and free from unnecessary filler or verbose explanations. Get straight to the solution without "beating around the bush". +- **Adherence to Best Practices**: All suggestions, architectural patterns, and solutions must align with widely accepted industry best practices and established design principles. Avoid experimental, obscure, or overly "creative" approaches. Stick to what is proven and reliable. +- **Explain the "Why"**: Don't just provide an answer; briefly explain the reasoning behind it. Why is this the standard approach? What specific problem does this pattern solve? This context is more valuable than the solution itself. + +## Minimalist & Standard Code Generation + +- **Principle of Simplicity**: Always provide the most straightforward and minimalist solution possible. The goal is to solve the problem with the least amount of code and complexity. Avoid premature optimization or over-engineering. +- **Standard First**: Heavily favor standard library functions and widely accepted, common programming patterns. Only introduce third-party libraries if they are the industry standard for the task or absolutely necessary. +- **Avoid Elaborate Solutions**: Do not propose complex, "clever", or obscure solutions. Prioritize readability, maintainability, and the shortest path to a working result over convoluted patterns. +- **Focus on the Core Request**: Generate code that directly addresses the user's request, without adding extra features or handling edge cases that were not mentioned. + +## Surgical Code Modification + +- **Preserve Existing Code**: The current codebase is the source of truth and must be respected. Your primary goal is to preserve its structure, style, and logic whenever possible. +- **Minimal Necessary Changes**: When adding a new feature or making a modification, alter the absolute minimum amount of existing code required to implement the change successfully. +- **Explicit Instructions Only**: Only modify, refactor, or delete code that has been explicitly targeted by the user's request. Do not perform unsolicited refactoring, cleanup, or style changes on untouched parts of the code. +- **Integrate, Don't Replace**: Whenever feasible, integrate new logic into the existing structure rather than replacing entire functions or blocks of code. + +## Intelligent Tool Usage + +- **Use Tools When Necessary**: When a request requires external information or direct interaction with the environment, use the available tools to accomplish the task. Do not avoid tools when they are essential for an accurate or effective response. +- **Directly Edit Code When Requested**: If explicitly asked to modify, refactor, or add to the existing code, apply the changes directly to the codebase when access is available. Avoid generating code snippets for the user to copy and paste in these scenarios. The default should be direct, surgical modification as instructed. +- **Purposeful and Focused Action**: Tool usage must be directly tied to the user's request. Do not perform unrelated searches or modifications. Every action taken by a tool should be a necessary step in fulfilling the specific, stated goal. +- **Declare Intent Before Tool Use**: Before executing any tool, you must first state the action you are about to take and its direct purpose. This statement must be concise and immediately precede the tool call. \ No newline at end of file diff --git a/.gitignore b/.gitignore index 82f9275..366100f 100644 --- a/.gitignore +++ b/.gitignore @@ -160,3 +160,75 @@ cython_debug/ # and can be added to the global gitignore or merged into this file. For a more nuclear # option (not recommended) you can uncomment the following to ignore the entire idea folder. #.idea/ + + +### DOTNET SECTION ### +## A streamlined .gitignore for modern .NET projects +## including temporary files, build results, and +## files generated by popular .NET tools. If you are +## developing with Visual Studio, the VS .gitignore +## https://github.com/github/gitignore/blob/main/VisualStudio.gitignore +## has more thorough IDE-specific entries. +## +## Get latest from https://github.com/github/gitignore/blob/main/Dotnet.gitignore + +# Build results +[Dd]ebug/ +[Dd]ebugPublic/ +[Rr]elease/ +[Rr]eleases/ +x64/ +x86/ +[Ww][Ii][Nn]32/ +[Aa][Rr][Mm]/ +[Aa][Rr][Mm]64/ +bld/ +[Bb]in/ +[Oo]bj/ +[Ll]og/ +[Ll]ogs/ + +# .NET Core +project.lock.json +project.fragment.lock.json +artifacts/ + +# ASP.NET Scaffolding +ScaffoldingReadMe.txt + +# NuGet Packages +*.nupkg +# NuGet Symbol Packages +*.snupkg + +# Others +~$* +*~ +CodeCoverage/ + +# MSBuild Binary and Structured Log +*.binlog + +# MSTest test Results +[Tt]est[Rr]esult*/ +[Bb]uild[Ll]og.* + +# NUnit +*.VisualState.xml +TestResult.xml +nunit-*.xml + +# SonarQube +.sonar/ +.scannerwork/ +.sonarqube/ +sonar-project.properties +.sonarlint/ +.sonarwork/ +reports/ + +copilot.sln +# Node.js +node_modules/ +**/node_modules/ +*.log diff --git a/.vscode/settings.json b/.vscode/settings.json index 54ed811..e947eaa 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -7,5 +7,6 @@ "*test.py" ], "python.testing.pytestEnabled": false, - "python.testing.unittestEnabled": true + "python.testing.unittestEnabled": true, + "sarif-viewer.connectToGithubCodeScanning": "on" } \ No newline at end of file diff --git a/DotnetApp.Tests/DotnetApp.Tests.csproj b/DotnetApp.Tests/DotnetApp.Tests.csproj new file mode 100644 index 0000000..7d69ca6 --- /dev/null +++ b/DotnetApp.Tests/DotnetApp.Tests.csproj @@ -0,0 +1,22 @@ + + + + net8.0 + + + + + + + runtime; build; native; contentfiles; analyzers; buildtransitive + all + + + + all + runtime; build; native; contentfiles; analyzers; buildtransitive + + + + + diff --git a/DotnetApp.Tests/Models/TestItemTest.cs b/DotnetApp.Tests/Models/TestItemTest.cs new file mode 100644 index 0000000..0ccff25 --- /dev/null +++ b/DotnetApp.Tests/Models/TestItemTest.cs @@ -0,0 +1,149 @@ +using System; +using Xunit; +using DotnetApp.Models; + +namespace DotnetApp.Models.Tests +{ + public class TaskItemTest + { + [Fact] + public void CalculateTaskScore_ShouldReturnCorrectScore_ForPriorityZero() + { + // Arrange + var task = new TaskItem + { + Priority = 0, + Status = "pending", + CreatedAt = DateTime.UtcNow.AddDays(-1), + IsCompleted = false, + Title = "Test Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(1, score); + } + + [Fact] + public void CalculateTaskScore_ShouldReturnCorrectScore_ForPriorityOneAndPendingStatus() + { + // Arrange + var task = new TaskItem + { + Priority = 1, + Status = "pending", + CreatedAt = DateTime.UtcNow.AddDays(-1), + IsCompleted = false, + Title = "Test Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(13, score); + } + + [Fact] + public void CalculateTaskScore_ShouldReturnCorrectScore_ForPriorityTwoAndInProgressStatus() + { + // Arrange + var task = new TaskItem + { + Priority = 2, + Status = "in-progress", + CreatedAt = DateTime.UtcNow.AddDays(-8), + IsCompleted = false, + Title = "Test Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(10, score); + } + + [Fact] + public void CalculateTaskScore_ShouldDoubleScore_ForPendingStatusAndOldTask() + { + // Arrange + var task = new TaskItem + { + Priority = 2, + Status = "pending", + CreatedAt = DateTime.UtcNow.AddDays(-15), + IsCompleted = false, + Title = "Test Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(15, score); + } + + [Fact] + public void CalculateTaskScore_ShouldSubtractScore_ForCompletedInProgressTask() + { + // Arrange + var task = new TaskItem + { + Priority = 2, + Status = "in-progress", + CreatedAt = DateTime.UtcNow.AddDays(-1), + IsCompleted = true, + Title = "Test Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(0, score); + } + + [Fact] + public void CalculateTaskScore_ShouldAddScore_ForLongWordsInTitle() + { + // Arrange + var task = new TaskItem + { + Priority = 3, + Status = "in-progress", + CreatedAt = DateTime.UtcNow.AddDays(-1), + IsCompleted = false, + Title = "ThisIsAVeryLongWord Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(2, score); + } + + [Fact] + public void CalculateTaskScore_ShouldReturnZero_ForNegativeScore() + { + // Arrange + var task = new TaskItem + { + Priority = 3, + Status = "completed", + CreatedAt = DateTime.UtcNow.AddDays(-1), + IsCompleted = true, + Title = "Test Task" + }; + + // Act + var score = task.CalculateTaskScore(); + + // Assert + Assert.Equal(1, score); + } + } +} \ No newline at end of file diff --git a/DotnetApp.Tests/Services/InMemoryTaskServiceTests.cs b/DotnetApp.Tests/Services/InMemoryTaskServiceTests.cs new file mode 100644 index 0000000..b4ff870 --- /dev/null +++ b/DotnetApp.Tests/Services/InMemoryTaskServiceTests.cs @@ -0,0 +1,102 @@ +using System.Linq; +using DotnetApp.Models; +using DotnetApp.Services; +using Xunit; + +namespace DotnetApp.Tests.Services +{ + public class InMemoryTaskServiceTests + { + [Fact] + public void CreateTask_AssignsIdAndStoresTask() + { + var service = new InMemoryTaskService(); + var task = new TaskItem { Title = "Test Task" }; + + service.CreateTask(task); + + Assert.NotEqual(0, task.Id); + var all = service.GetAllTasks().ToList(); + Assert.Contains(all, t => t.Id == task.Id && t.Title == "Test Task"); + } + + [Fact] + public void GetTaskById_ReturnsCorrectTaskOrNull() + { + var service = new InMemoryTaskService(); + var task = new TaskItem { Title = "GetById" }; + service.CreateTask(task); + var id = task.Id; + + var found = service.GetTaskById(id); + Assert.NotNull(found); + Assert.Equal("GetById", found.Title); + + var missing = service.GetTaskById(id + 999); + Assert.Null(missing); + } + + [Fact] + public void UpdateTask_NonExisting_ReturnsFalse() + { + var service = new InMemoryTaskService(); + var updated = new TaskItem { Title = "Nope" }; + + var result = service.UpdateTask(999, updated); + Assert.False(result); + } + + [Fact] + public void UpdateTask_Existing_ReturnsTrueAndUpdates() + { + var service = new InMemoryTaskService(); + var task = new TaskItem { Title = "Original" }; + service.CreateTask(task); + var id = task.Id; + + var updated = new TaskItem { Title = "Updated" }; + var result = service.UpdateTask(id, updated); + + Assert.True(result); + Assert.Equal(id, updated.Id); + var fetched = service.GetTaskById(id); + Assert.Equal("Updated", fetched.Title); + } + + [Fact] + public void DeleteTask_ReturnsTrueOnceAndRemoves() + { + var service = new InMemoryTaskService(); + var task = new TaskItem { Title = "ToDelete" }; + service.CreateTask(task); + var id = task.Id; + + var first = service.DeleteTask(id); + var second = service.DeleteTask(id); + + Assert.True(first); + Assert.False(second); + Assert.Null(service.GetTaskById(id)); + } + + [Fact] + public void GetAllTasks_Empty_ReturnsEmpty() + { + var service = new InMemoryTaskService(); + var all = service.GetAllTasks().ToList(); + Assert.Empty(all); + } + + [Fact] + public void CreateTask_SequentialIds() + { + var service = new InMemoryTaskService(); + var t1 = new TaskItem { Title = "First" }; + service.CreateTask(t1); + var t2 = new TaskItem { Title = "Second" }; + service.CreateTask(t2); + + Assert.Equal(t1.Id + 1, t2.Id); + } + } +} diff --git a/DotnetApp/DotnetApp.csproj b/DotnetApp/DotnetApp.csproj new file mode 100644 index 0000000..00ccec3 --- /dev/null +++ b/DotnetApp/DotnetApp.csproj @@ -0,0 +1,14 @@ + + + + Exe + net8.0 + DotnetApp + enable + enable + + + + + + diff --git a/DotnetApp/Models/TaskItem.cs b/DotnetApp/Models/TaskItem.cs new file mode 100644 index 0000000..ea15aaf --- /dev/null +++ b/DotnetApp/Models/TaskItem.cs @@ -0,0 +1,128 @@ +using System.Text.Json.Serialization; + +namespace DotnetApp.Models +{ + public class TaskItem + { + public int Id { get; set; } + public string Title { get; set; } = default!; + public string? Description { get; set; } + public bool IsCompleted { get; set; } + + [JsonPropertyName("priority")] + public int Priority { get; set; } = 3; + + [JsonPropertyName("status")] + public string Status { get; set; } = "pending"; + + [JsonPropertyName("created_at")] + public DateTime CreatedAt { get; set; } = DateTime.UtcNow; + + public int CalculateTaskScore() + { + int score = 0; + + score += CalculatePriorityScore(); + score += CalculateStatusScore(score); + + return Math.Max(0, score); + } + + private int CalculatePriorityScore() + { + int score = 0; + + if (Priority <= 0) + { + score += 1; + } + else if (Priority == 1) + { + score += 10; + if (Status == "pending") + { + score += 3; + } + } + else if (Priority == 2) + { + score += 5; + if (Status == "in-progress" && !IsCompleted) + { + score += 2; + if ((DateTime.UtcNow - CreatedAt).TotalDays > 7) + { + score += 3; + } + } + } + else + { + score += 1; + } + + return score; + } + + private int CalculateStatusScore(int currentScore) + { + int score = 0; + + switch (Status.ToLower()) + { + case "pending": + score += CalculatePendingScore(currentScore); + break; + case "in-progress": + score += CalculateInProgressScore(); + break; + default: + if (!IsCompleted && Priority < 3) + { + score += 3; + } + break; + } + + return score; + } + + private int CalculatePendingScore(int currentScore) + { + int score = 0; + + if ((DateTime.UtcNow - CreatedAt).TotalDays > 14) + { + score += currentScore * 2; + if (Priority < 3) + { + score += 5; + } + } + + return score; + } + + private int CalculateInProgressScore() + { + int score = 0; + + if (IsCompleted) + { + score -= 5; + } + else + { + foreach (var word in Title.Split(' ')) + { + if (word.Length > 10) + { + score += 1; + } + } + } + + return score; + } + } +} diff --git a/DotnetApp/Program.cs b/DotnetApp/Program.cs new file mode 100644 index 0000000..586449a --- /dev/null +++ b/DotnetApp/Program.cs @@ -0,0 +1,44 @@ +using System.IO; +using System.Linq; +using Microsoft.Extensions.FileProviders; +using DotnetApp.Services; +using DotnetApp.Models; + +var builder = WebApplication.CreateBuilder(args); +builder.Services.AddEndpointsApiExplorer(); +builder.Services.AddSwaggerGen(); +builder.Services.AddSingleton(); + +var app = builder.Build(); + +// Serve UI from wwwroot instead of external templates folder +app.UseDefaultFiles(); +app.UseStaticFiles(); + +// Replace simple GET /tasks with optional status query +app.MapGet("/tasks", (string? status, ITaskService service) => +{ + var tasks = service.GetAllTasks(); + if (!string.IsNullOrEmpty(status)) + tasks = tasks.Where(t => t.Status == status); + return Results.Ok(tasks); +}); +app.MapGet("/tasks/{id}", (int id, ITaskService service) => + service.GetTaskById(id) is TaskItem task ? Results.Ok(task) : Results.NotFound()); +app.MapPost("/tasks", (TaskItem task, ITaskService service) => +{ + service.CreateTask(task); + return Results.Created($"/tasks/{task.Id}", task); +}); +// Update returns the modified task JSON instead of NoContent +app.MapPut("/tasks/{id}", (int id, TaskItem updatedTask, ITaskService service) => +{ + updatedTask.Id = id; + return service.UpdateTask(id, updatedTask) + ? Results.Ok(updatedTask) + : Results.NotFound(); +}); +app.MapDelete("/tasks/{id}", (int id, ITaskService service) => + service.DeleteTask(id) ? Results.NoContent() : Results.NotFound()); + +await app.RunAsync(); diff --git a/DotnetApp/Services/CsvTaskService.cs b/DotnetApp/Services/CsvTaskService.cs new file mode 100644 index 0000000..8f9e5f2 --- /dev/null +++ b/DotnetApp/Services/CsvTaskService.cs @@ -0,0 +1,154 @@ +using System; +using System.Collections.Generic; +using System.Globalization; +using System.IO; +using System.Linq; +using DotnetApp.Models; + +namespace DotnetApp.Services +{ + /// + /// Provides a CSV-based implementation of the interface. + /// + public class CsvTaskService : ITaskService + { + private readonly string _filePath; + private readonly object _lock = new object(); + private int _nextId; + + /// + /// Initializes a new instance of the class. + /// + public CsvTaskService() + { + _filePath = Path.Combine(AppContext.BaseDirectory, "tasks.csv"); + if (!File.Exists(_filePath)) + { + File.WriteAllText(_filePath, "Id,Title,Description,IsCompleted,Status,Priority,CreatedAt\n"); + } + var tasks = ReadAll(); + _nextId = tasks.Any() ? tasks.Max(t => t.Id) : 0; + } + + /// + /// Reads all tasks from the CSV file. + /// + /// A list of all task items. + private List ReadAll() + { + var lines = File.ReadAllLines(_filePath); + return lines + .Skip(1) + .Where(line => !string.IsNullOrWhiteSpace(line)) + .Select(line => + { + var parts = line.Split(','); + return new TaskItem + { + Id = int.Parse(parts[0]), + Title = parts[1], + Description = string.IsNullOrEmpty(parts[2]) ? null : parts[2], + IsCompleted = bool.Parse(parts[3]), + Status = parts[4], + Priority = int.Parse(parts[5]), + CreatedAt = DateTime.Parse(parts[6], null, DateTimeStyles.RoundtripKind) + }; + }) + .ToList(); + } + + /// + /// Writes all tasks to the CSV file. + /// + /// The tasks to write. + private void WriteAll(IEnumerable tasks) + { + var lines = new List { "Id,Title,Description,IsCompleted,Status,Priority,CreatedAt" }; + lines.AddRange(tasks.Select(t => + string.Join(",", + t.Id, + Escape(t.Title), + Escape(t.Description), + t.IsCompleted, + t.Status, + t.Priority, + t.CreatedAt.ToString("O") + ) + )); + File.WriteAllLines(_filePath, lines); + } + + /// + /// Escapes a string value for CSV compatibility. + /// + /// The string value to escape. + /// The escaped string. + private string Escape(string? value) => value?.Replace("\"", "\"\"") ?? string.Empty; + + /// + /// Retrieves all tasks from the CSV file. + /// + /// A collection of all task items. + public IEnumerable GetAllTasks() => ReadAll(); + + /// + /// Retrieves a task by its unique identifier from the CSV file. + /// + /// The unique identifier of the task. + /// The task item if found; otherwise, null. + public TaskItem? GetTaskById(int id) => ReadAll().FirstOrDefault(t => t.Id == id); + + /// + /// Creates a new task and appends it to the CSV file. + /// + /// The task item to create. + public void CreateTask(TaskItem task) + { + lock (_lock) + { + task.Id = ++_nextId; + var tasks = ReadAll(); + tasks.Add(task); + WriteAll(tasks); + } + } + + /// + /// Updates an existing task in the CSV file. + /// + /// The unique identifier of the task to update. + /// The updated task item. + /// True if the update was successful; otherwise, false. + public bool UpdateTask(int id, TaskItem updatedTask) + { + lock (_lock) + { + var tasks = ReadAll(); + var existing = tasks.FirstOrDefault(t => t.Id == id); + if (existing == null) return false; + updatedTask.Id = id; + tasks.Remove(existing); + tasks.Add(updatedTask); + WriteAll(tasks); + return true; + } + } + + /// + /// Deletes a task from the CSV file by its unique identifier. + /// + /// The unique identifier of the task to delete. + /// True if the deletion was successful; otherwise, false. + public bool DeleteTask(int id) + { + lock (_lock) + { + var tasks = ReadAll(); + var removed = tasks.RemoveAll(t => t.Id == id) > 0; + if (!removed) return false; + WriteAll(tasks); + return true; + } + } + } +} diff --git a/DotnetApp/Services/ITaskService.cs b/DotnetApp/Services/ITaskService.cs new file mode 100644 index 0000000..6df4d7c --- /dev/null +++ b/DotnetApp/Services/ITaskService.cs @@ -0,0 +1,45 @@ +namespace DotnetApp.Services +{ + using DotnetApp.Models; + using System.Collections.Generic; + + /// + /// Defines the contract for task management services. + /// + public interface ITaskService + { + /// + /// Retrieves all tasks. + /// + /// A collection of all task items. + IEnumerable GetAllTasks(); + + /// + /// Retrieves a task by its unique identifier. + /// + /// The unique identifier of the task. + /// The task item if found; otherwise, null. + TaskItem? GetTaskById(int id); + + /// + /// Creates a new task. + /// + /// The task item to create. + void CreateTask(TaskItem task); + + /// + /// Updates an existing task. + /// + /// The unique identifier of the task to update. + /// The updated task item. + /// True if the update was successful; otherwise, false. + bool UpdateTask(int id, TaskItem updatedTask); + + /// + /// Deletes a task by its unique identifier. + /// + /// The unique identifier of the task to delete. + /// True if the deletion was successful; otherwise, false. + bool DeleteTask(int id); + } +} diff --git a/DotnetApp/Services/InMemoryTaskService.cs b/DotnetApp/Services/InMemoryTaskService.cs new file mode 100644 index 0000000..ee448bf --- /dev/null +++ b/DotnetApp/Services/InMemoryTaskService.cs @@ -0,0 +1,59 @@ +namespace DotnetApp.Services +{ + using System.Collections.Concurrent; + using DotnetApp.Models; + + /// + /// Provides an in-memory implementation of the interface. + /// + public class InMemoryTaskService : ITaskService + { + private readonly ConcurrentDictionary _tasks = new(); + private int _nextId = 1; + + /// + /// Retrieves all tasks stored in memory. + /// + /// A collection of all task items. + public IEnumerable GetAllTasks() => _tasks.Values; + + /// + /// Retrieves a task by its unique identifier. + /// + /// The unique identifier of the task. + /// The task item if found; otherwise, null. + public TaskItem? GetTaskById(int id) => _tasks.TryGetValue(id, out var task) ? task : null; + + /// + /// Creates a new task and stores it in memory. + /// + /// The task item to create. + public void CreateTask(TaskItem task) + { + var id = System.Threading.Interlocked.Increment(ref _nextId); + task.Id = id; + _tasks[id] = task; + } + + /// + /// Updates an existing task in memory. + /// + /// The unique identifier of the task to update. + /// The updated task item. + /// True if the update was successful; otherwise, false. + public bool UpdateTask(int id, TaskItem updatedTask) + { + if (!_tasks.ContainsKey(id)) return false; + updatedTask.Id = id; + _tasks[id] = updatedTask; + return true; + } + + /// + /// Deletes a task from memory by its unique identifier. + /// + /// The unique identifier of the task to delete. + /// True if the deletion was successful; otherwise, false. + public bool DeleteTask(int id) => _tasks.TryRemove(id, out _); + } +} diff --git a/DotnetApp/wwwroot/index.html b/DotnetApp/wwwroot/index.html new file mode 100644 index 0000000..b60a42a --- /dev/null +++ b/DotnetApp/wwwroot/index.html @@ -0,0 +1,272 @@ + + + + + + Task Manager + + + +

Task Manager

+ + +
+

Add New Task

+ + + + +
+ + +
+ + +
+ + +

Tasks

+
+ +
Loading tasks...
+
+ + + + \ No newline at end of file diff --git a/OLD_README.md b/OLD_README.md deleted file mode 100644 index 5dfdff5..0000000 --- a/OLD_README.md +++ /dev/null @@ -1,129 +0,0 @@ -# copilot - -> [!NOTE] -> As with any GenAI, Copilot is non-deterministic. As such, results you get from Copilot may differ from what I demonstrate. - -## Install (for VS Code) -1. Go to Extensions (on Activity Bar) -1. Search for `Copilot` -1. Install - > You will get both the `Copilot` and `Copilot Chat` extensions installed -1. Using either the pop-up after install or the "Accounts" icon on the Activity Bar, sign into GitHub - -Easy as that! - -## Familiarize (for VS Code) -After install, Copilot is 100% ready to go. Start coding to use Code completions or click the "Chat" icon on the Activity Bar to use Copilot Chat. - -That's all! - -## Use -### Code completion - - -> [!NOTE] -> Although I don't always explicitly list it, there is an implied acceptance of Copilot's suggestions at the end of each step below. - - -#### point.py -1. Navigate to point.py - > file name is part of the context Copilot uses! - -##### class point -1. Start a new comment "# create a class..." - > If the suggestions are wrong or I don't like them, just keep typing! -1. Add "# should include getters, setters and a toString method" to your comment - > The clearer and more descriptive I am, the more helpful Copilot can be! -1. Type "class Point:" and hit enter - > Copilot draws on all information in our file to build its context, so it can infer what we want based on what we have already commented and coded. Remember, file name is part of context! -1. Accept all getters, setters and toString - > Copilot expedites "boring" coding (repetitive, boilerplate tasks). This gives us more time for the tasks and coding we enjoy. -1. Start a new comment "# calculate the..." (we're going to create a distance function) - > Copilot is, once again, inferring what we might want here based on the context it has. - -##### class line -1. Start a new comment "# create a class..." -1. Type "class..." - > Copilot will use the current context (in this case, the file name, and all comments and code in our current file), to determine how to structure and stylize suggested code. notice how Copilot automatically added getters, setters and a toString method (following the pattern it recognized from above) and it even automatically utilizes the distance method we defined previously. - - - -### Copilot Chat - -#### Generate -> Copilot works on more than just traditional code. Even with operational tasks and files, Copilot can help. - -##### Infra as Code -1. Navigate to iac.tf -1. Ask Copilot chat to "write a terraform file that creates a webapp and sql DB in Azure for me" - > In Copilot Chat, we have various options for how to accepts suggested code. - - - -#### Explain -1. Open server.rs -1. Ask Copilot Chat what this file is doing -#### Improve -1. Ask Copilot Chat it if there are any ways we can improve the code -1. Ask chat how to implement thread pools and accept changes - -#### Translate -1. Ask Copilot Chat to turn our rust code into python - -#### Brainstorm -1. Ask Copilot Chat: if I'm looking to create a webserver in python, how should I go about it? should I be creating it from scratch like I'm doing here? -1. Ask it about the differences between the different frameworks it suggests. -1. Ask it which to use if I'm looking to run a simple blog server and I don't have much coding experience. - -#### Secure -> Copilot can help identify and mitigate security vulnerabilities -1. Navigate to sql.py -1. Ask Copilot Chat to identify any security vulnerabilities it sees - - -#### @, # and / -##### \# -We can use `#` to reference files or selections. Essentially, determine what context to use to answer the question we are asking. Note: #web for web search. - -1. Try this in chat: - - what is the latest version of Node.js? -1. then try this - - #web what is the latest version of Node.js? - -##### @ -Called "participants". Use if you're looking to ask about a specific topic or domain. Example @docker. Copilot extensions can also provide more chat participants. Personally I don't use it much but it's there! - -##### / -Short hand for common tasks in Copilot. So that I don't have to type out a full paragraph. -- `/tests` - writes tests -- `/explain` - explain code -- `/fix` - fix errors - -## FAQ -1. How does GitHub Copilot Chat differ from ChatGPT? - - GitHub Copilot Chat takes into consideration the context of your codebase and workspace, giving you more tailored solutions grounded in the code that you've already written. ChatGPT does not do this. - -## Best Practices -- https://docs.github.com/en/copilot/using-github-copilot/best-practices-for-using-github-copilot \ No newline at end of file diff --git a/README.md b/README.md index 8ce16da..9099dc2 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # GitHub Copilot > [!NOTE] -> Last updated 05-MAY-2025 +> Last updated 14-JUL-2025 This is a repo containing materials that can be used for future Copilot demos. @@ -19,7 +19,7 @@ Copilot code completions even promotes best practices while you code as comments You can also interact with Copilot code completions (+ more) inside a file in other ways: - Suggestion Selector -- Completions Panel (Ctrl + Enter) +- Completions Panel - Editor Inline Chat (Cmd + I) ### Next Edit Suggestions @@ -44,11 +44,15 @@ Chat commands are a great and easy place to start with Copilot Chat. When in dou - `dotnet test DotnetApp.Tests/DotnetApp.Tests.csproj` 1. Ask `@vscode Where can I find the setting to render whitespace?` -### Context (MOVE THIS ABOVE THE MODES BELOW THE CHAT SECTION?) +### Context Context in Copilot Chat works differently than it did for code completions. Other than what is currently visible in your editor, Copilot Chat requires that we explicitly add all relevant files as context before submitting our prompt. The easiest ways of including files as context are to with drag and drop them into the chat window, or using the `#file:` tag. 1. Show typing a `#` into chat and reading what each tag specifies +Best Practice: Only add the minimum context necessary to answer the question you are asking or to solve the problem you have. This will ensure you get the highest quality response possible from Copilot. + +#### Vision + ### Possibilities #### Brainstorm 1. What the best naming convention to use in my .NET project? What's idiomatic? @@ -56,15 +60,19 @@ Context in Copilot Chat works differently than it did for code completions. Othe #### Translate 1. Can you translate this Java file (`point.java`) into Python? #### Optimize -1. What can I do to improve my .NET app (`DotnetApp`)? I'm preparing it for a production release and need to make sure it's polished. +1. What can I do to improve my .NET app (`DotnetApp`)? I'm preparing it for a production release and need to make sure it's ready. #### Review 1. Do you see any security vulnerabilities in this code (`sql.py`)? 1. I'm looking to reduce tech debt across my codebase. Is there anything in my .NET app (`DotnetApp`) that I should consider improving or fixing? #### Understand 1. Can you explain what this file is doing (`server.rs`)? - +#### Generate +- Test data +- Documentation +- ... +#### Modernize ### Modes -When to use each mode. https://code.visualstudio.com/docs/copilot/chat/copilot-chat#_chat-mode +When to use each mode. https://code.visualstudio.com/docs/copilot/chat/chat-modes #### [Ask mode](https://code.visualstudio.com/docs/copilot/chat/chat-ask-mode) @@ -77,6 +85,15 @@ Copilot Edits makes sweeping changes across multiple files quick and easy. #### [Agent mode](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode) +##### Demo +- does my dotnetapp already have unit tests? + - auto context discovery +- can you run the existing unit tests to see if they pass? + - self-healing +- can you add additional unit tests for the placeholders you mentioned above? + +- @vscode where is the setting to change the number of "iterations" agent mode will perform before asking if I'd like to continue + ## [Configuring Copilot / Customizing Copilot](https://code.visualstudio.com/docs/copilot/copilot-customization) ### Custom instructions Used to set "rules" you want Copilot to follow for all suggestions. A system prompt of sorts. @@ -101,16 +118,20 @@ If Public Code Block is enabled, if Copilot generates code that closely matches - Ask Copilot to break suggested code into different blocks in its response - Ask Copilot to only show changed lines of code - Ask Copilot to just show pseudocode -- Ask Copilot to comment out the code it suggests +- Ask Copilot to show the code it suggests in another language - Break your problem into smaller problems + -Generally speaking, when we work with our own large, complex, unique codebases, we won't run into this much. This will mostly come into play when we are starting from scratch or asking Copilot for generic examples. The alternative to the Public Code Block is Code Referencing, where Copilot will show the public code anyway and let you know what type of license applies to the repo it is sourced from. +Generally speaking, when we work with our own large, complex, unique codebases, we won't run into this much. This will mostly come into play when we are starting from scratch or asking Copilot for generic examples. Across all of Copilot, only about 1% of suggestions hit a public code block and most of those are new new files or other generic (and non-code!) use cases. The alternative to the Public Code Block is Code Referencing, where Copilot will show the public code anyway and let you know what type of license applies to the repo it is sourced from. A fairly reliable prompt to use to test Code Referencing (or trigger a public code block) is: - "generate “void fast_inverse_sqrt” in C" +- "can you show me a quick sort algorithm?" ## Other - +### Mermaid Diagram +### UML Diagram? ### Copilot Code Review + + +## Future +In the future, a decent demo might be to use this commit https://github.com/mpchenette/pong/tree/80dcd03e2cd1e7fe39a044c1fc51cb39ea2b5c2f (FFR: this is the duopong right before I add server side color chainging, right after I changed 127.0.0.1 to 0.0.0.0)to demo agent mode and also repo indexing. + +If you have the repo indexed remotely, ask the "ask" mode the following: "it would seem that at the moment the background color changing is a client side change only. is that accurate? how would I make this a change that affects everyone/that everyone can see? that is my goal", and look how fast the response is. This is because of indexing! Even without the file open! No context needed because we have the index. + +Now jump to agent mode and ask the same thing. see how much longer it takes. but also see that agent mode makes the change for you. And if agent mode fails like it did for me the first time, you can ask it to iterate! + +A good example of when to use each mode and pros/cons and also how knowing the different aspects of Copilot leads to a better experience. + +Want to find something that is currently in the code or find where something is? Want to understand how the current logic or implementation works? Ask mode with remote index. + +Want to debug something or find where an error or bug stems from? Want to implement a change based on how the current logic functions? Agent mode. \ No newline at end of file diff --git a/cloud_infra.png b/cloud_infra.png new file mode 100644 index 0000000..ab52c6e Binary files /dev/null and b/cloud_infra.png differ diff --git a/cobol/CUSTPROC.cbl b/cobol/CUSTPROC.cbl new file mode 100644 index 0000000..9bc3c34 --- /dev/null +++ b/cobol/CUSTPROC.cbl @@ -0,0 +1,112 @@ + IDENTIFICATION DIVISION. + PROGRAM-ID. CUSTPROC. + AUTHOR. GITHUB-COPILOT. + DATE-WRITTEN. 2025-03-28. + + ENVIRONMENT DIVISION. + INPUT-OUTPUT SECTION. + FILE-CONTROL. + SELECT CUSTOMER-FILE + ASSIGN TO 'CUSTFILE' + ORGANIZATION IS SEQUENTIAL + ACCESS MODE IS SEQUENTIAL + FILE STATUS IS WS-FILE-STATUS. + SELECT REPORT-FILE + ASSIGN TO 'CUSTRPT' + ORGANIZATION IS SEQUENTIAL + ACCESS MODE IS SEQUENTIAL. + + DATA DIVISION. + FILE SECTION. + FD CUSTOMER-FILE + LABEL RECORDS ARE STANDARD. + 01 CUSTOMER-RECORD. + 05 CUST-ID PIC X(6). + 05 CUST-NAME PIC X(30). + 05 CUST-ADDRESS PIC X(50). + 05 CUST-PHONE PIC X(12). + 05 CUST-BALANCE PIC 9(7)V99. + + FD REPORT-FILE + LABEL RECORDS ARE STANDARD. + 01 REPORT-LINE PIC X(132). + + WORKING-STORAGE SECTION. + 01 WS-FILE-STATUS PIC X(2). + 01 WS-EOF-FLAG PIC X VALUE 'N'. + 88 END-OF-FILE VALUE 'Y'. + + 01 WS-COUNTERS. + 05 WS-READ-CTR PIC 9(6) VALUE ZERO. + 05 WS-VALID-CTR PIC 9(6) VALUE ZERO. + 05 WS-ERROR-CTR PIC 9(6) VALUE ZERO. + + 01 WS-HEADING-1. + 05 FILLER PIC X(20) VALUE 'Customer Report '. + 05 FILLER PIC X(20) VALUE 'Date: '. + 05 WS-CURR-DATE PIC X(10). + + 01 WS-DETAIL-LINE. + 05 WS-DL-CUSTID PIC X(6). + 05 FILLER PIC X(2) VALUE SPACES. + 05 WS-DL-NAME PIC X(30). + 05 FILLER PIC X(2) VALUE SPACES. + 05 WS-DL-BALANCE PIC $ZZZ,ZZ9.99. + + PROCEDURE DIVISION. + 0100-MAIN-PROCESS. + PERFORM 0200-INIT-ROUTINE + PERFORM 0300-PROCESS-RECORDS UNTIL END-OF-FILE + PERFORM 0900-CLOSE-ROUTINE + STOP RUN. + + 0200-INIT-ROUTINE. + OPEN INPUT CUSTOMER-FILE + OUTPUT REPORT-FILE + IF WS-FILE-STATUS NOT = '00' + DISPLAY 'Error opening files. Status: ' WS-FILE-STATUS + MOVE 'Y' TO WS-EOF-FLAG + END-IF + PERFORM 0250-WRITE-HEADERS. + + 0250-WRITE-HEADERS. + MOVE FUNCTION CURRENT-DATE(1:10) TO WS-CURR-DATE + WRITE REPORT-LINE FROM WS-HEADING-1 + WRITE REPORT-LINE FROM SPACES. + + 0300-PROCESS-RECORDS. + READ CUSTOMER-FILE + AT END + MOVE 'Y' TO WS-EOF-FLAG + NOT AT END + ADD 1 TO WS-READ-CTR + PERFORM 0400-VALIDATE-RECORD + END-READ. + + 0400-VALIDATE-RECORD. + IF CUST-BALANCE > 0 + PERFORM 0500-FORMAT-DETAIL + ADD 1 TO WS-VALID-CTR + ELSE + ADD 1 TO WS-ERROR-CTR + END-IF. + + 0500-FORMAT-DETAIL. + MOVE CUST-ID TO WS-DL-CUSTID + MOVE CUST-NAME TO WS-DL-NAME + MOVE CUST-BALANCE TO WS-DL-BALANCE + WRITE REPORT-LINE FROM WS-DETAIL-LINE. + + 0900-CLOSE-ROUTINE. + WRITE REPORT-LINE FROM SPACES + MOVE 'Total Records Read: ' TO REPORT-LINE + MOVE WS-READ-CTR TO REPORT-LINE(25:6) + WRITE REPORT-LINE + MOVE 'Valid Records: ' TO REPORT-LINE + MOVE WS-VALID-CTR TO REPORT-LINE(25:6) + WRITE REPORT-LINE + MOVE 'Error Records: ' TO REPORT-LINE + MOVE WS-ERROR-CTR TO REPORT-LINE(25:6) + WRITE REPORT-LINE + CLOSE CUSTOMER-FILE + REPORT-FILE. \ No newline at end of file diff --git a/diag.mmd b/diag.mmd new file mode 100644 index 0000000..3eb7a68 --- /dev/null +++ b/diag.mmd @@ -0,0 +1,11 @@ +flowchart TD + A[0100-MAIN-PROCESS] --> B[0200-INIT-ROUTINE] + B --> C[0250-WRITE-HEADERS] + C --> D[0300-PROCESS-RECORDS] + D -->|Read Record| E{END-OF-FILE?} + E -- No --> F[0400-VALIDATE-RECORD] + F -->|CUST-BALANCE > 0| G[0500-FORMAT-DETAIL] + G --> D + F -->|Else| D + E -- Yes --> H[0900-CLOSE-ROUTINE] + H --> I[STOP RUN] \ No newline at end of file diff --git a/docs/ADOPTION.md b/docs/ADOPTION.md new file mode 100644 index 0000000..8664321 --- /dev/null +++ b/docs/ADOPTION.md @@ -0,0 +1,4 @@ +# Driving Copilot Adoption + +## Options +- Champions Program \ No newline at end of file diff --git a/docs/COMPARISON.md b/docs/COMPARISON.md new file mode 100644 index 0000000..363bd9c --- /dev/null +++ b/docs/COMPARISON.md @@ -0,0 +1,3 @@ +# Advantages of Copilot + +## \ No newline at end of file diff --git a/docs/COPILOT_FEATURE_DECISION_TREE.mmd b/docs/COPILOT_FEATURE_DECISION_TREE.mmd new file mode 100644 index 0000000..ac69119 --- /dev/null +++ b/docs/COPILOT_FEATURE_DECISION_TREE.mmd @@ -0,0 +1,35 @@ +flowchart TD + A[Can the task be broken into smaller tasks?] + B[Break the task into smaller tasks] + C[Is the task a code review?] + D[Copilot code review] + E[Is the task a code change?] + F[Does the change span more than five to ten files or will Copilot need CLI access?] + G[Ask mode] + H[Would information or tools outside of VS Code be useful to accomplish this task?] + I[Edit mode] + %% J[Is this a code change similar to something you or others are likely to make again in the future?] + J[Is anyone likely to make a change like this in the future?] + K[Create and use a prompt file] + L[Agent mode + MCP] + M[Agent mode] + N[Do you need to supervise this code change?] + O[Copilot coding agent] + + A -- Yes --> B + B --> A + A -- No --> E + C -- Yes --> D + C -- No --> G + %% E -- Yes --> J + J -- Yes --> K + J -- No --> F + K --> F + E -- No --> C + F-- Yes --> H + F-- No --> I + H -- Yes --> L + H -- No --> M + E -- Yes --> N + N -- No --> O + N -- Yes --> J diff --git a/docs/Code-Reference-Example.md b/docs/Code-Reference-Example.md new file mode 100644 index 0000000..20e9b60 --- /dev/null +++ b/docs/Code-Reference-Example.md @@ -0,0 +1,73 @@ +# Code Citations + +## License: unknown +https://github.com/sli7236/QuickSort/tree/c7e584891b75fd1ca0f01c7cd308b09c96fad008/src/com/company/quickSort.java + +``` +static int Partition(int[] arr, int left, int right) +{ + int pivot = arr[right]; + int i = left - 1; + for (int j = left; j < right; j++) + { + if (arr[j] <= +``` + + +## License: unknown +https://github.com/petriucmihai/Interview-Problems/tree/8d5bb453f25bf130aca34682494aa2435222474c/InterviewProblems/SearchingAndSorting/SortingAlgorithms/QuickSort.cs + +``` +private static int Partition(int[] arr, int left, int right) +{ + int pivot = arr[right]; + int i = left - 1; + for (int j = left; j < right; j++) + { + if (arr[j] < +``` + + +## License: MIT +https://github.com/CodeMazeBlog/CodeMazeGuides/tree/e8a3b277ba7b5c70147a3f82e64477d9d88cc0b5/dotnet-client-libraries/BenchmarkDotNet-MemoryDiagnoser-Attribute/BenchmarkDotNet-MemoryDiagnoser-Attribute/Sort.cs + +``` +QuickSort(int[] arr, int left, int right) +{ + if (left < right) + { + int pivotIndex = Partition(arr, left, right); + QuickSort(arr, left, pivotIndex - 1); + QuickSort(arr, pivotIndex + 1, right); +``` + + +## License: unknown +https://github.com/giangpham712/dsa-csharp/tree/e6aaf295082d34b376b3e8ac0929b05005508d6b/src/Algorithms/Sorting/QuickSort.cs + +``` +arr[i], arr[j]) = (arr[j], arr[i]); + } + } + (arr[i + 1], arr[right]) = (arr[right], arr[i + 1]); + return i +``` + + +## License: unknown +https://github.com/krmphasis/QuickSort1/tree/aeefa44f535beee0a50e330c0e882fc530a7255d/QuickSortLogic.cs + +``` +right) +{ + if (left < right) + { + int pivotIndex = Partition(arr, left, right); + QuickSort(arr, left, pivotIndex - 1); + QuickSort(arr, pivotIndex + 1, right); + } +} + +private static int Partition(int[] arr +``` + diff --git a/docs/SDD.md b/docs/SDD.md new file mode 100644 index 0000000..a1a2f02 --- /dev/null +++ b/docs/SDD.md @@ -0,0 +1,12 @@ +# Spec-Driven Development (SDD) +- https://github.com/github/spec-kit + +## Steps +- /speckit.constitution + - Define your project's governing principles and development guidelines +- /speckit.specify + - Describe what you want to build +- /speckit.plan + - Provide your tech stack and architecture choices +- /speckit.tasks +- /speckit.implement \ No newline at end of file diff --git a/docs/VALUE.md b/docs/VALUE.md new file mode 100644 index 0000000..14514c9 --- /dev/null +++ b/docs/VALUE.md @@ -0,0 +1,41 @@ +# Proving Copilot's Value + +First off, ESSP. + +## Key Metrics +- Developer Happiness +- Dev/Deploy Velocity +- Code Quality + +## Developer Happiness + +## Development Speed +What metrics do you track today? How do you track development speed today? We need a baseline from which to compare Copilot. If we don't have one, how will we know if there is improvement? + +metric | how good a representation of DS is it? | how easy is it to grab? | how objective is it? | how hard is it to game? +--- | --- | --- | --- | --- +PR lead time / time to PR | 8 | 8 | 10 | 7(depends, if it's time to PR merged, hard, if it's time to PR opened, super easy) +\# of new features merged | 7 | 8 | 7 | 7 +\# of bugs fixed | 5 | 8 | 4 | 7 +time spent / story point | 8 | 4 | 8 | 3 +lines of code written by AI (belongs in CQ?) | 4 | 9* | 10 | 4 +\# of lines of code written / sprint | 5 | 7 | 10 | 3 + + +## Code Quality + + +metric | how good a representation of CQ is it? | how easy is it to grab? | how objective is it? | how hard is it to game? +--- | --- | --- | --- | --- +bugs / capita | 8 | 5 | 10 | 10 +code quality tool score / rating | 8 | 8 | 7 | 9 +average time to merge PR | 5 | 8 | 7 | 5 +average number of comments on a PR | 7 | 10 | 6 | 4 +\# of code related outages / year (or downtime due to) | 8 | 7 | 10 | 10 +average age of a LoC? - 2 (extremely easy to track accurately) - subjective / very much depends on how you interpret it +average onboard / onramp time - 2 (also hard to track accurately) - very subjective / hard to know what "good" is here. will very much depend on the codebase / language /etc. + + +## Key Things To Remember +- Pitfalls (common, gameable metrics that lead to anti-patterns) +- You will not see the value overnight. It will take months of sustained rollout, engagement and tracking metrics to begin to see the improvements Copilot brings. \ No newline at end of file diff --git a/python/calculator.py b/python/calculator.py new file mode 100644 index 0000000..1560607 --- /dev/null +++ b/python/calculator.py @@ -0,0 +1,30 @@ +class Calculator: + """A simple calculator class with basic arithmetic operations.""" + + def add(self, a, b): + """Add two numbers.""" + return a + b + + def subtract(self, a, b): + """Subtract b from a.""" + return a - b + + def multiply(self, a, b): + """Multiply two numbers.""" + return a * b + + def divide(self, a, b): + """Divide a by b.""" + if b == 0: + raise ValueError("Cannot divide by zero") + return a / b + + def power(self, a, b): + """Raise a to the power of b.""" + return a ** b + + def square_root(self, a): + """Calculate the square root of a number.""" + if a < 0: + raise ValueError("Cannot calculate square root of negative number") + return a ** 0.5 \ No newline at end of file diff --git a/python/sql.py b/python/sql.py index 2e5f46c..bc2121e 100644 --- a/python/sql.py +++ b/python/sql.py @@ -23,7 +23,3 @@ def add_user(username, password): cursor.close() conn.close() return True - - - - diff --git a/python/tests/test_calculator.py b/python/tests/test_calculator.py new file mode 100644 index 0000000..e69de29 diff --git a/scripts/analyze_seat_activity.py b/scripts/analyze_seat_activity.py new file mode 100755 index 0000000..bbc1a21 --- /dev/null +++ b/scripts/analyze_seat_activity.py @@ -0,0 +1,96 @@ +#!/usr/bin/env python3 +""" +Seat Activity Analyzer + +This script analyzes seat activity CSV files and calculates the percentage +of active users. A user is considered active if they have activity within the last 60 days. +""" + +import csv +import sys +import argparse +from datetime import datetime, timedelta, timezone +from pathlib import Path + + +def analyze_seat_activity(csv_path: str, days: int = 60) -> dict: + """ + Analyze a seat activity CSV file. + + Args: + csv_path: Path to the CSV file + + Returns: + Dictionary containing analysis results + """ + total_users = 0 + active_users = 0 + inactive_users = 0 + + # Calculate the cutoff date (days ago from now) + cutoff_date = datetime.now(timezone.utc) - timedelta(days=days) + + with open(csv_path, 'r', encoding='utf-8') as file: + reader = csv.DictReader(file) + + for row in reader: + total_users += 1 + last_activity = row.get('Last Activity At', '').strip() + + if last_activity and last_activity.lower() != 'none': + try: + # Parse the ISO 8601 date format (e.g., "2025-12-01T19:59:43Z") + activity_date = datetime.fromisoformat(last_activity.replace('Z', '+00:00')) + + if activity_date >= cutoff_date: + active_users += 1 + else: + inactive_users += 1 + except (ValueError, AttributeError): + # If date parsing fails, consider as inactive + inactive_users += 1 + else: + inactive_users += 1 + + active_percentage = (active_users / total_users * 100) if total_users > 0 else 0 + inactive_percentage = (inactive_users / total_users * 100) if total_users > 0 else 0 + + return { + 'total_users': total_users, + 'active_users': active_users, + 'inactive_users': inactive_users, + 'active_percentage': active_percentage, + 'inactive_percentage': inactive_percentage, + 'cutoff_date': cutoff_date + } + + +def main(): + """Main function to run the analysis.""" + parser = argparse.ArgumentParser(description="Analyze seat activity CSV") + parser.add_argument("csv", nargs="?", default=str(Path(__file__).parent / 'seat-activity.csv'), help="Path to CSV file") + parser.add_argument("--days", type=int, default=60, help="Activity window in days (default: 60)") + args = parser.parse_args() + + csv_file = Path(args.csv) + + if not csv_file.exists(): + print(f"Error: CSV file not found at {csv_file}") + sys.exit(1) + + print(f"Analyzing: {csv_file.name}") + print("-" * 60) + + results = analyze_seat_activity(str(csv_file), days=args.days) + + cutoff_date_str = results['cutoff_date'].strftime('%Y-%m-%d') + print(f"\nActivity cutoff date: {cutoff_date_str} ({args.days} days ago)") + print(f"\nTotal Users: {results['total_users']:,}") + print(f"Active Users: {results['active_users']:,} ({results['active_percentage']:.2f}%)") + print(f"Inactive Users: {results['inactive_users']:,} ({results['inactive_percentage']:.2f}%)") + print("-" * 60) + print(f"\n✓ Active user percentage: {results['active_percentage']:.2f}%") + + +if __name__ == '__main__': + main() diff --git a/scripts/find_json_string.py b/scripts/find_json_string.py new file mode 100644 index 0000000..bc71e27 --- /dev/null +++ b/scripts/find_json_string.py @@ -0,0 +1,165 @@ +#!/usr/bin/env python3 +""" +find_json_string.py + +Search a JSON file for occurrences of a string (or regex) and output the +1-based line numbers where matches occur. Works on the raw file text so it +doesn't require valid JSON and preserves line numbers. + +Usage: + python3 scripts/find_json_string.py path/to/file.json "needle" + +Options: + -i, --ignore-case Case-insensitive search + -r, --regex Treat the pattern as a regular expression + -w, --word Whole-word match (implies regex with word boundaries) + -N, --numbers-only Print only numbers, one per line (default) + -l, --list Print "line: content" for each matching line + +Examples: + python3 scripts/find_json_string.py data.json "user_id" + python3 scripts/find_json_string.py data.json "error .* timeout" -r -i -l + cat data.json | python3 scripts/find_json_string.py - "foo" -w +""" + +from __future__ import annotations + +import argparse +import re +import sys +from typing import Iterable, List + + +def iter_lines(path: str) -> Iterable[tuple[int, str]]: + if path == "-": + for i, line in enumerate(sys.stdin, start=1): + yield i, line.rstrip("\n") + return + try: + with open(path, "r", encoding="utf-8", errors="replace") as f: + for i, line in enumerate(f, start=1): + yield i, line.rstrip("\n") + except FileNotFoundError: + print(f"error: file not found: {path}", file=sys.stderr) + sys.exit(2) + except OSError as e: + print(f"error: cannot read {path}: {e}", file=sys.stderr) + sys.exit(2) + + +def find_matches( + lines: Iterable[tuple[int, str]], + pattern: str, + ignore_case: bool = False, + regex: bool = False, + whole_word: bool = False, +) -> List[int]: + flags = re.IGNORECASE if ignore_case else 0 + if whole_word: + regex = True + pattern = rf"\b{re.escape(pattern)}\b" + + compiled = None + if regex: + try: + compiled = re.compile(pattern, flags) + except re.error as e: + print(f"error: invalid regex: {e}", file=sys.stderr) + sys.exit(2) + + hits: List[int] = [] + if compiled is not None: + for ln, text in lines: + if compiled.search(text) is not None: + hits.append(ln) + else: + if ignore_case: + needle = pattern.lower() + for ln, text in lines: + if needle in text.lower(): + hits.append(ln) + else: + for ln, text in lines: + if pattern in text: + hits.append(ln) + + # De-duplicate while preserving order + seen = set() + unique_hits: List[int] = [] + for ln in hits: + if ln not in seen: + seen.add(ln) + unique_hits.append(ln) + return unique_hits + + +def main(argv: list[str] | None = None) -> int: + p = argparse.ArgumentParser( + description="Find lines in a JSON file containing a string or regex.", + ) + p.add_argument( + "path", + help="Path to JSON file, or '-' for stdin", + ) + p.add_argument( + "pattern", + help="Search string (or regex with -r)", + ) + p.add_argument( + "-i", + "--ignore-case", + action="store_true", + help="Case-insensitive search", + ) + p.add_argument( + "-r", + "--regex", + action="store_true", + help="Treat pattern as a regular expression", + ) + p.add_argument( + "-w", + "--word", + action="store_true", + help="Whole-word match (wraps pattern with word boundaries)", + ) + output = p.add_mutually_exclusive_group() + output.add_argument( + "-N", + "--numbers-only", + action="store_true", + help="Print only line numbers (default)", + ) + output.add_argument( + "-l", + "--list", + action="store_true", + help="Print 'line: content' for each matching line", + ) + + args = p.parse_args(argv) + + hits = find_matches( + iter_lines(args.path), + args.pattern, + ignore_case=args.ignore_case, + regex=args.regex, + whole_word=args.word, + ) + + if args.list: + # Re-iterate lines for printing content efficiently + line_set = set(hits) + for ln, text in iter_lines(args.path): + if ln in line_set: + print(f"{ln}: {text}") + else: + for ln in hits: + print(ln) + + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) + diff --git a/scripts/report_seat_versions.py b/scripts/report_seat_versions.py new file mode 100644 index 0000000..5e6e2fa --- /dev/null +++ b/scripts/report_seat_versions.py @@ -0,0 +1,188 @@ +#!/usr/bin/env python3 +import argparse +import csv +import os +import re +import sys +from collections import Counter, defaultdict +from pathlib import Path + + +def parse_surface(surface: str): + """ + Parse a Last Surface Used value like: + - "vscode/1.99.3/copilot-chat/0.26.7" + - "JetBrains-IC/251.26927.53/" + - "VisualStudio/17.8.21/copilot-vs/1.206.0.0" + + Returns (ide_name, ide_version, ext_name, ext_version) where ext_* can be None. + Whitespace is stripped; empty or 'None' values return (None, None, None, None). + """ + if surface is None: + return None, None, None, None + s = str(surface).strip() + if not s or s.lower() == "none": + return None, None, None, None + + # Split by '/', keep empty tokens to allow trailing slash patterns + parts = s.split('/') + parts = [p.strip() for p in parts] + parts = [p for p in parts if p != ''] # drop empty tokens from trailing '/' + + if len(parts) < 2: + return None, None, None, None + + ide_name, ide_version = parts[0], parts[1] + ext_name = ext_version = None + if len(parts) >= 4: + ext_name, ext_version = parts[2], parts[3] + + return ide_name, ide_version, ext_name, ext_version + + +from typing import Optional + + +def is_copilot_extension(name: Optional[str]) -> bool: + if not name: + return False + return name.lower().startswith("copilot") + + +def find_default_csv() -> Optional[Path]: + # Look for a seat activity CSV in ./scripts by default + cand_dir = Path(__file__).resolve().parent + matches = sorted(cand_dir.glob("seat-activity-*.csv")) + if matches: + # Choose the lexicographically last; filename usually contains a timestamp + return matches[-1] + return None + + +def main(): + parser = argparse.ArgumentParser(description="Report counts of IDE versions and Copilot extension versions from seat activity CSV.") + parser.add_argument("csv_path", nargs="?", help="Path to CSV (defaults to scripts/seat-activity-*.csv)") + parser.add_argument("--by-extension-name", action="store_true", help="Also break down Copilot counts by extension name (e.g., copilot, copilot-chat, copilot-intellij).") + parser.add_argument("--write-csv", action="store_true", help="Write results to CSV files alongside the input or to --out-dir.") + parser.add_argument("--out-dir", help="Directory to write CSV files. Defaults to the input CSV's directory.") + parser.add_argument("--prefix", help="Output filename prefix. Defaults to the input CSV filename stem.") + args = parser.parse_args() + + csv_path = args.csv_path + if not csv_path: + default = find_default_csv() + if not default: + print("No CSV provided and no default seat activity CSV found in scripts/", file=sys.stderr) + sys.exit(1) + csv_path = str(default) + + csv_file = Path(csv_path) + if not csv_file.exists(): + print(f"CSV not found: {csv_file}", file=sys.stderr) + sys.exit(1) + + ide_counts = Counter() + copilot_version_counts = Counter() + copilot_name_version_counts = Counter() # optional detailed breakdown + malformed_surfaces = 0 + empty_surfaces = 0 + + with csv_file.open(newline='') as f: + reader = csv.DictReader(f) + # try to detect the column name case-insensitively + header_map = {h.lower(): h for h in reader.fieldnames or []} + surface_col = None + for key in ("last surface used", "last_surface_used", "surface", "lastsurfaceused"): + if key in header_map: + surface_col = header_map[key] + break + if surface_col is None: + print("Could not find 'Last Surface Used' column in CSV headers.", file=sys.stderr) + sys.exit(1) + + for row in reader: + raw_surface = row.get(surface_col) + ide_name, ide_ver, ext_name, ext_ver = parse_surface(raw_surface) + if ide_name is None or ide_ver is None: + if raw_surface and raw_surface.strip().lower() != "none": + malformed_surfaces += 1 + else: + empty_surfaces += 1 + continue + + # Normalize IDE name to lower for grouping consistency + norm_ide_name = ide_name.lower() + ide_key = f"{norm_ide_name}/{ide_ver}" + ide_counts[ide_key] += 1 + + if is_copilot_extension(ext_name) and ext_ver: + copilot_version_counts[ext_ver] += 1 + name_ver_key = f"{ext_name.lower()}/{ext_ver}" + copilot_name_version_counts[name_ver_key] += 1 + + def print_counter(title: str, counter: Counter): + print(title) + for key, count in counter.most_common(): + print(f" {key}: {count}") + if not counter: + print(" (none)") + print() + + print(f"Source: {csv_file}") + print() + print_counter("IDE Versions (name/version):", ide_counts) + print_counter("Copilot Extension Versions (by version):", copilot_version_counts) + if args.by_extension_name: + print_counter("Copilot Extension Versions (by extension name/version):", copilot_name_version_counts) + + # Optionally write results to CSV files + if args.write_csv: + out_dir = Path(args.out_dir) if args.out_dir else csv_file.parent + out_dir.mkdir(parents=True, exist_ok=True) + prefix = args.prefix if args.prefix else csv_file.stem + + ide_out = out_dir / f"{prefix}_ide_versions.csv" + copilot_out = out_dir / f"{prefix}_copilot_versions.csv" + copilot_byname_out = out_dir / f"{prefix}_copilot_extname_versions.csv" + + # Write IDE versions as columns: ide_name, ide_version, count + with ide_out.open('w', newline='') as f: + w = csv.writer(f) + w.writerow(["ide_name", "ide_version", "count"]) + for key, count in ide_counts.most_common(): + ide_name, ide_version = key.split('/', 1) if '/' in key else (key, "") + w.writerow([ide_name, ide_version, count]) + + # Write Copilot versions as columns: extension_version, count + with copilot_out.open('w', newline='') as f: + w = csv.writer(f) + w.writerow(["extension_version", "count"]) + for ver, count in copilot_version_counts.most_common(): + w.writerow([ver, count]) + + # Optional: by extension name and version + if args.by_extension_name: + with copilot_byname_out.open('w', newline='') as f: + w = csv.writer(f) + w.writerow(["extension_name", "extension_version", "count"]) + for key, count in copilot_name_version_counts.most_common(): + ext_name, ext_version = key.split('/', 1) if '/' in key else (key, "") + w.writerow([ext_name, ext_version, count]) + + print("Written CSVs:") + print(f" {ide_out}") + print(f" {copilot_out}") + if args.by_extension_name: + print(f" {copilot_byname_out}") + + # Small diagnostic footer + if malformed_surfaces or empty_surfaces: + print("Notes:") + if empty_surfaces: + print(f" Rows with empty/None surface: {empty_surfaces}") + if malformed_surfaces: + print(f" Rows with unparseable surface: {malformed_surfaces}") + + +if __name__ == "__main__": + main() diff --git a/task-management/README.md b/task-management/README.md new file mode 100644 index 0000000..bdcccdd --- /dev/null +++ b/task-management/README.md @@ -0,0 +1,56 @@ +# Playwright MCP Demo + +A minimal static web UI + Playwright tests designed to be easy for an MCP server (e.g., Playwright MCP server) to drive. + +## What’s included +- Static UI: `src/index.html`, `src/app.js`, `src/styles.css` +- Playwright tests: `tests/example.spec.ts` +- Scripts: local HTTP server and test runner + +## Quick start + +1. Install dev dependencies: + +```bash +npm install +``` + +2. Serve the static site (default port 5173): + +```bash +npm run serve +``` + +3. In another terminal, run tests (they assume the server is running): + +```bash +npm test +``` + +To use a different port, set `PORT` when running tests, and start the server on the same port: + +```bash +PORT=8080 npm run serve +PORT=8080 npm test +``` + +## MCP Server Integration Notes +- The page exposes stable selectors (`data-testid`, ids) to enable robust automation. +- Flows covered: + - Login success/failure + - Task add/clear + - Modal open/close +- You can point the Playwright MCP server to the served URL and use actions that mirror the test steps. + +## Folder structure +``` +playwright-mcp-demo/ + package.json + README.md + src/ + index.html + app.js + styles.css + tests/ + example.spec.ts +``` diff --git a/task-management/package-lock.json b/task-management/package-lock.json new file mode 100644 index 0000000..7c9d662 --- /dev/null +++ b/task-management/package-lock.json @@ -0,0 +1,706 @@ +{ + "name": "playwright-mcp-demo", + "version": "0.1.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "playwright-mcp-demo", + "version": "0.1.0", + "devDependencies": { + "@playwright/test": "^1.49.0", + "http-server": "^14.1.1" + } + }, + "node_modules/@playwright/test": { + "version": "1.57.0", + "resolved": "https://registry.npmjs.org/@playwright/test/-/test-1.57.0.tgz", + "integrity": "sha512-6TyEnHgd6SArQO8UO2OMTxshln3QMWBtPGrOCgs3wVEmQmwyuNtB10IZMfmYDE0riwNR1cu4q+pPcxMVtaG3TA==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "playwright": "1.57.0" + }, + "bin": { + "playwright": "cli.js" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/async": { + "version": "3.2.6", + "resolved": "https://registry.npmjs.org/async/-/async-3.2.6.tgz", + "integrity": "sha512-htCUDlxyyCLMgaM3xXg0C0LW2xqfuQ6p05pCEIsXuyQ+a1koYKTuBMzRNwmybfLgvJDMd0r1LTn4+E0Ti6C2AA==", + "dev": true, + "license": "MIT" + }, + "node_modules/basic-auth": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/basic-auth/-/basic-auth-2.0.1.tgz", + "integrity": "sha512-NF+epuEdnUYVlGuhaxbbq+dvJttwLnGY+YixlXlME5KpQ5W3CnXA5cVTneY3SPbPDRkcjMbifrwmFYcClgOZeg==", + "dev": true, + "license": "MIT", + "dependencies": { + "safe-buffer": "5.1.2" + }, + "engines": { + "node": ">= 0.8" + } + }, + "node_modules/call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/call-bound": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/chalk": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/corser": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/corser/-/corser-2.0.1.tgz", + "integrity": "sha512-utCYNzRSQIZNPIcGZdQc92UVJYAhtGAteCFg0yRaFm8f0P+CPtyGyHXJcGXnffjCybUCEx3FQ2G7U3/o9eIkVQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4.0" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/dunder-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-define-property": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz", + "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-object-atoms": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/eventemitter3": { + "version": "4.0.7", + "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz", + "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==", + "dev": true, + "license": "MIT" + }, + "node_modules/follow-redirects": { + "version": "1.15.11", + "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz", + "integrity": "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==", + "dev": true, + "funding": [ + { + "type": "individual", + "url": "https://github.com/sponsors/RubenVerborgh" + } + ], + "license": "MIT", + "engines": { + "node": ">=4.0" + }, + "peerDependenciesMeta": { + "debug": { + "optional": true + } + } + }, + "node_modules/fsevents": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz", + "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": "^8.16.0 || ^10.6.0 || >=11.0.0" + } + }, + "node_modules/function-bind": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "dev": true, + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-intrinsic": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/get-proto": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "dev": true, + "license": "MIT", + "dependencies": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/gopd": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/has-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/he": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz", + "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==", + "dev": true, + "license": "MIT", + "bin": { + "he": "bin/he" + } + }, + "node_modules/html-encoding-sniffer": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/html-encoding-sniffer/-/html-encoding-sniffer-3.0.0.tgz", + "integrity": "sha512-oWv4T4yJ52iKrufjnyZPkrN0CH3QnrUqdB6In1g5Fe1mia8GmF36gnfNySxoZtxD5+NmYw1EElVXiBk93UeskA==", + "dev": true, + "license": "MIT", + "dependencies": { + "whatwg-encoding": "^2.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/http-proxy": { + "version": "1.18.1", + "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz", + "integrity": "sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "eventemitter3": "^4.0.0", + "follow-redirects": "^1.0.0", + "requires-port": "^1.0.0" + }, + "engines": { + "node": ">=8.0.0" + } + }, + "node_modules/http-server": { + "version": "14.1.1", + "resolved": "https://registry.npmjs.org/http-server/-/http-server-14.1.1.tgz", + "integrity": "sha512-+cbxadF40UXd9T01zUHgA+rlo2Bg1Srer4+B4NwIHdaGxAGGv59nYRnGGDJ9LBk7alpS0US+J+bLLdQOOkJq4A==", + "dev": true, + "license": "MIT", + "dependencies": { + "basic-auth": "^2.0.1", + "chalk": "^4.1.2", + "corser": "^2.0.1", + "he": "^1.2.0", + "html-encoding-sniffer": "^3.0.0", + "http-proxy": "^1.18.1", + "mime": "^1.6.0", + "minimist": "^1.2.6", + "opener": "^1.5.1", + "portfinder": "^1.0.28", + "secure-compare": "3.0.1", + "union": "~0.5.0", + "url-join": "^4.0.1" + }, + "bin": { + "http-server": "bin/http-server" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dev": true, + "license": "MIT", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/math-intrinsics": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/mime": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", + "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "dev": true, + "license": "MIT", + "bin": { + "mime": "cli.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/minimist": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz", + "integrity": "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==", + "dev": true, + "license": "MIT", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", + "dev": true, + "license": "MIT" + }, + "node_modules/object-inspect": { + "version": "1.13.4", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/opener": { + "version": "1.5.2", + "resolved": "https://registry.npmjs.org/opener/-/opener-1.5.2.tgz", + "integrity": "sha512-ur5UIdyw5Y7yEj9wLzhqXiy6GZ3Mwx0yGI+5sMn2r0N0v3cKJvUmFH5yPP+WXh9e0xfyzyJX95D8l088DNFj7A==", + "dev": true, + "license": "(WTFPL OR MIT)", + "bin": { + "opener": "bin/opener-bin.js" + } + }, + "node_modules/playwright": { + "version": "1.57.0", + "resolved": "https://registry.npmjs.org/playwright/-/playwright-1.57.0.tgz", + "integrity": "sha512-ilYQj1s8sr2ppEJ2YVadYBN0Mb3mdo9J0wQ+UuDhzYqURwSoW4n1Xs5vs7ORwgDGmyEh33tRMeS8KhdkMoLXQw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "playwright-core": "1.57.0" + }, + "bin": { + "playwright": "cli.js" + }, + "engines": { + "node": ">=18" + }, + "optionalDependencies": { + "fsevents": "2.3.2" + } + }, + "node_modules/playwright-core": { + "version": "1.57.0", + "resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.57.0.tgz", + "integrity": "sha512-agTcKlMw/mjBWOnD6kFZttAAGHgi/Nw0CZ2o6JqWSbMlI219lAFLZZCyqByTsvVAJq5XA5H8cA6PrvBRpBWEuQ==", + "dev": true, + "license": "Apache-2.0", + "bin": { + "playwright-core": "cli.js" + }, + "engines": { + "node": ">=18" + } + }, + "node_modules/portfinder": { + "version": "1.0.38", + "resolved": "https://registry.npmjs.org/portfinder/-/portfinder-1.0.38.tgz", + "integrity": "sha512-rEwq/ZHlJIKw++XtLAO8PPuOQA/zaPJOZJ37BVuN97nLpMJeuDVLVGRwbFoBgLudgdTMP2hdRJP++H+8QOA3vg==", + "dev": true, + "license": "MIT", + "dependencies": { + "async": "^3.2.6", + "debug": "^4.3.6" + }, + "engines": { + "node": ">= 10.12" + } + }, + "node_modules/qs": { + "version": "6.14.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.14.0.tgz", + "integrity": "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "side-channel": "^1.1.0" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/requires-port": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz", + "integrity": "sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/safe-buffer": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", + "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", + "dev": true, + "license": "MIT" + }, + "node_modules/safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "dev": true, + "license": "MIT" + }, + "node_modules/secure-compare": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/secure-compare/-/secure-compare-3.0.1.tgz", + "integrity": "sha512-AckIIV90rPDcBcglUwXPF3kg0P0qmPsPXAj6BBEENQE1p5yA1xfmDJzfi1Tappj37Pv2mVbKpL3Z1T+Nn7k1Qw==", + "dev": true, + "license": "MIT" + }, + "node_modules/side-channel": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-list": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/side-channel-list/-/side-channel-list-1.0.0.tgz", + "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", + "dev": true, + "license": "MIT", + "dependencies": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-map": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/side-channel-weakmap": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "dev": true, + "license": "MIT", + "dependencies": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/union": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/union/-/union-0.5.0.tgz", + "integrity": "sha512-N6uOhuW6zO95P3Mel2I2zMsbsanvvtgn6jVqJv4vbVcz/JN0OkL9suomjQGmWtxJQXOCqUJvquc1sMeNz/IwlA==", + "dev": true, + "dependencies": { + "qs": "^6.4.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/url-join": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/url-join/-/url-join-4.0.1.tgz", + "integrity": "sha512-jk1+QP6ZJqyOiuEI9AEWQfju/nB2Pw466kbA0LEZljHwKeMgd9WrAEgEGxjPDD2+TNbbb37rTyhEfrCXfuKXnA==", + "dev": true, + "license": "MIT" + }, + "node_modules/whatwg-encoding": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/whatwg-encoding/-/whatwg-encoding-2.0.0.tgz", + "integrity": "sha512-p41ogyeMUrw3jWclHWTQg1k05DSVXPLcVxRTYsXUk+ZooOCZLcoYgPZ/HL/D/N+uQPOtcp1me1WhBEaX02mhWg==", + "dev": true, + "license": "MIT", + "dependencies": { + "iconv-lite": "0.6.3" + }, + "engines": { + "node": ">=12" + } + } + } +} diff --git a/task-management/package.json b/task-management/package.json new file mode 100644 index 0000000..d4b9dbf --- /dev/null +++ b/task-management/package.json @@ -0,0 +1,15 @@ +{ + "name": "playwright-mcp-demo", + "private": true, + "version": "0.1.0", + "description": "Minimal static site and Playwright tests suitable for MCP server automation", + "scripts": { + "test": "playwright test", + "test:ui": "playwright test --ui", + "serve": "npx http-server ./src -p 5173 -c-" + }, + "devDependencies": { + "@playwright/test": "^1.49.0", + "http-server": "^14.1.1" + } +} diff --git a/task-management/src/admin.html b/task-management/src/admin.html new file mode 100644 index 0000000..1b4d428 --- /dev/null +++ b/task-management/src/admin.html @@ -0,0 +1,53 @@ + + + + + + Admin Area + + + + +
+

Admin Area

+ +
+
+
+

+
+ +
+ + + diff --git a/task-management/src/app.js b/task-management/src/app.js new file mode 100644 index 0000000..0a2f3f3 --- /dev/null +++ b/task-management/src/app.js @@ -0,0 +1,88 @@ +const state = { + loggedIn: false, + tasks: [] +}; + +function $(sel) { return document.querySelector(sel); } +function el(tag, props = {}, children = []) { + const node = document.createElement(tag); + Object.entries(props).forEach(([k, v]) => { + if (k === 'dataset') Object.entries(v).forEach(([dk, dv]) => node.dataset[dk] = dv); + else if (k in node) node[k] = v; else node.setAttribute(k, v); + }); + children.forEach(c => node.appendChild(typeof c === 'string' ? document.createTextNode(c) : c)); + return node; +} + +function renderTasks() { + const list = $('#task-list'); + list.innerHTML = ''; + state.tasks.forEach((t, i) => { + const item = el('li', { className: 'task-item' }, [ + el('input', { type: 'checkbox', checked: !!t.done, 'aria-label': `Complete ${t.text}` }), + el('span', { className: 'task-text' }, [t.text]), + el('button', { className: 'delete-btn' }, ['Delete']) + ]); + + item.querySelector('input').addEventListener('change', (e) => { + state.tasks[i].done = e.target.checked; + }); + item.querySelector('.delete-btn').addEventListener('click', () => { + state.tasks.splice(i, 1); + renderTasks(); + }); + list.appendChild(item); + }); +} + +function initLogin() { + $('#login-form').addEventListener('submit', (e) => { + e.preventDefault(); + const u = $('#username').value.trim(); + const p = $('#password').value; + setTimeout(() => { + if (u && p) { + state.loggedIn = true; + // Very simple demo role handling: admin if username+password both 'admin' + const isAdmin = u.toLowerCase() === 'admin' && p === 'admin'; + const role = isAdmin ? 'admin' : 'user'; + try { localStorage.setItem('demo-role', role); } catch {} + $('#login-status').textContent = isAdmin ? `Logged in as admin` : `Logged in as ${u}`; + $('#login-status').dataset.testid = isAdmin ? 'admin-login' : 'login-success'; + } else { + state.loggedIn = false; + $('#login-status').textContent = 'Login failed'; + $('#login-status').dataset.testid = 'login-failed'; + } + }, 150); + }); +} + +function initTasks() { + $('#add-task').addEventListener('click', () => { + const text = $('#new-task').value.trim(); + if (!text) return; + state.tasks.push({ text, done: false }); + $('#new-task').value = ''; + renderTasks(); + }); + + $('#clear-completed').addEventListener('click', () => { + state.tasks = state.tasks.filter(t => !t.done); + renderTasks(); + }); +} + +function initModal() { + const dialog = $('#demo-modal'); + $('#open-modal').addEventListener('click', () => dialog.showModal()); + $('#close-modal').addEventListener('click', () => dialog.close()); +} + +function init() { + initLogin(); + initTasks(); + initModal(); +} + +document.addEventListener('DOMContentLoaded', init); diff --git a/task-management/src/index.html b/task-management/src/index.html new file mode 100644 index 0000000..99e43f2 --- /dev/null +++ b/task-management/src/index.html @@ -0,0 +1,59 @@ + + + + + + Playwright MCP Demo + + + +
+

Playwright MCP Demo

+
+ +
+
+
+ + + + + + + +
+

+
+ +
+
+ + + +
+
    +
    + +
    + + +
    +

    Demo Modal

    +

    This is a modal for UI automation.

    + +
    +
    +
    + +
    +

    + Admin area: + /admin +

    +

    Log in as admin/admin to gain access.

    +
    +
    + + + + diff --git a/task-management/src/styles.css b/task-management/src/styles.css new file mode 100644 index 0000000..7d121fd --- /dev/null +++ b/task-management/src/styles.css @@ -0,0 +1,14 @@ +* { box-sizing: border-box; } +body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Arial, sans-serif; margin: 0; padding: 24px; background: #fafafa; } +header { border-bottom: 1px solid #e5e5e5; margin-bottom: 16px; } +h1 { margin: 0 0 12px; } +main { display: grid; gap: 24px; grid-template-columns: 1fr; max-width: 720px; } +label { display: block; margin: 8px 0 4px; } +input[type="text"], input[type="password"] { width: 100%; padding: 8px; border: 1px solid #ccc; border-radius: 6px; } +button { margin-top: 8px; padding: 8px 12px; border: 1px solid #ccc; background: white; border-radius: 6px; cursor: pointer; } +.controls { display: flex; gap: 8px; align-items: center; } +#new-task { flex: 1; } +#task-list { list-style: none; padding: 0; } +.task-item { display: flex; align-items: center; gap: 8px; padding: 6px 0; } +.task-text { flex: 1; } +dialog { border: none; border-radius: 8px; padding: 16px; } diff --git a/task-management/test-results/.last-run.json b/task-management/test-results/.last-run.json new file mode 100644 index 0000000..0b626df --- /dev/null +++ b/task-management/test-results/.last-run.json @@ -0,0 +1,6 @@ +{ + "status": "failed", + "failedTests": [ + "b3a1b63342050f33a2cc-7681d50341b1ff43aa2e" + ] +} \ No newline at end of file diff --git a/task-management/tests/example.spec.ts b/task-management/tests/example.spec.ts new file mode 100644 index 0000000..920dc9f --- /dev/null +++ b/task-management/tests/example.spec.ts @@ -0,0 +1,72 @@ +import { test, expect } from '@playwright/test'; + +const PORT = process.env.PORT || '5173'; +const BASE = `http://localhost:${PORT}`; + +// Basic smoke tests for MCP-friendly actions + +test.describe('Playwright MCP Demo', () => { + test.beforeEach(async ({ page }) => { + await page.goto(BASE); + }); + + test('renders title', async ({ page }) => { + const title = page.getByTestId('title'); + await expect(title).toHaveText('Playwright MCP Demo'); + }); + + test('login success and failure', async ({ page }) => { + await page.fill('#username', 'mcp-user'); + await page.fill('#password', 'secret'); + await page.click('#login-btn'); + await expect(page.locator('#login-status[data-testid="login-success"]')).toContainText('Logged in as mcp-user'); + + await page.fill('#username', ''); + await page.fill('#password', ''); + await page.click('#login-btn'); + await expect(page.locator('#login-status[data-testid="login-failed"]')).toHaveText('Login failed'); + }); + + test('add and clear tasks', async ({ page }) => { + await page.fill('#new-task', 'Write MCP test'); + await page.click('#add-task'); + await expect(page.locator('#task-list .task-item .task-text')).toHaveText('Write MCP test'); + + await page.check('#task-list .task-item input[type="checkbox"]'); + await page.click('#clear-completed'); + await expect(page.locator('#task-list .task-item')).toHaveCount(0); + }); + + test('modal open/close', async ({ page }) => { + await page.click('#open-modal'); + const modal = page.locator('dialog#demo-modal'); + await expect(modal).toBeVisible(); + await page.click('#close-modal'); + await expect(modal).not.toBeVisible(); + }); + + test('normal user cannot access admin', async ({ page }) => { + // Login as a normal user + await page.fill('#username', 'alice'); + await page.fill('#password', 'secret'); + await page.click('#login-btn'); + await expect(page.locator('#login-status[data-testid="login-success"]')).toContainText('Logged in as alice'); + + // Navigate to /admin + await page.goto(`${BASE}/admin`); + // Confirm access denied message is shown and admin content hidden + await expect(page.locator('#status[data-testid="admin-access-denied"]')).toHaveText('Access denied: insufficient permissions'); + }); + + test('admin can access admin area', async ({ page }) => { + // Login as admin/admin + await page.fill('#username', 'admin'); + await page.fill('#password', 'admin'); + await page.click('#login-btn'); + await expect(page.locator('#login-status[data-testid="admin-login"]')).toHaveText('Logged in as admin'); + + await page.goto(`${BASE}/admin`); + await expect(page.locator('#status[data-testid="admin-access-granted"]')).toHaveText('Access granted: admin'); + await expect(page.getByRole('heading', { name: 'System Controls' })).toBeVisible(); + }); +}); diff --git a/test-data-demo/README.md b/test-data-demo/README.md new file mode 100644 index 0000000..3cb5766 --- /dev/null +++ b/test-data-demo/README.md @@ -0,0 +1,57 @@ +# Test Data Generation Demo + +This demo showcases using GitHub Copilot to generate test data for an existing test suite. + +## Overview + +This is a simple **Order Processing System** that validates and processes customer orders. The tests are already written but are missing the test data fixtures needed to run them. + +## The Challenge + +The test files in `tests/` reference data fixtures that don't exist yet: +- `tests/fixtures/sample_customers.json` - Customer records for testing +- `tests/fixtures/sample_orders.json` - Order records for testing +- `tests/fixtures/sample_products.json` - Product catalog for testing + +## Demo Goals + +Use Copilot to: +1. Generate realistic test data that matches the expected schemas +2. Create edge case data for boundary testing +3. Generate data that covers various validation scenarios + +## Running Tests + +```bash +cd test-data-demo +pip install pytest +pytest -v +``` + +**Note:** Tests will fail until the fixture data is generated! + +## Data Models + +### Customer +- `id`: string (UUID format) +- `name`: string +- `email`: string (valid email format) +- `membership_level`: string ("bronze", "silver", "gold", "platinum") +- `created_at`: string (ISO date format) + +### Product +- `id`: string (UUID format) +- `name`: string +- `price`: float (positive) +- `category`: string +- `in_stock`: boolean +- `stock_quantity`: integer + +### Order +- `id`: string (UUID format) +- `customer_id`: string (must match a customer) +- `items`: list of order items + - `product_id`: string + - `quantity`: integer (positive) +- `status`: string ("pending", "confirmed", "shipped", "delivered", "cancelled") +- `order_date`: string (ISO date format) diff --git a/test-data-demo/data_loader.py b/test-data-demo/data_loader.py new file mode 100644 index 0000000..fa90c18 --- /dev/null +++ b/test-data-demo/data_loader.py @@ -0,0 +1,76 @@ +"""Utilities for loading test data from JSON fixtures.""" + +import json +import os +from typing import List +from models import Customer, Product, Order, OrderItem + + +FIXTURES_DIR = os.path.join(os.path.dirname(__file__), "tests", "fixtures") + + +def load_customers(filepath: str = None) -> List[Customer]: + """Load customers from a JSON file.""" + if filepath is None: + filepath = os.path.join(FIXTURES_DIR, "sample_customers.json") + + with open(filepath, "r") as f: + data = json.load(f) + + return [ + Customer( + id=c["id"], + name=c["name"], + email=c["email"], + membership_level=c["membership_level"], + created_at=c["created_at"] + ) + for c in data + ] + + +def load_products(filepath: str = None) -> List[Product]: + """Load products from a JSON file.""" + if filepath is None: + filepath = os.path.join(FIXTURES_DIR, "sample_products.json") + + with open(filepath, "r") as f: + data = json.load(f) + + return [ + Product( + id=p["id"], + name=p["name"], + price=p["price"], + category=p["category"], + in_stock=p["in_stock"], + stock_quantity=p["stock_quantity"] + ) + for p in data + ] + + +def load_orders(filepath: str = None) -> List[Order]: + """Load orders from a JSON file.""" + if filepath is None: + filepath = os.path.join(FIXTURES_DIR, "sample_orders.json") + + with open(filepath, "r") as f: + data = json.load(f) + + orders = [] + for o in data: + items = [ + OrderItem(product_id=i["product_id"], quantity=i["quantity"]) + for i in o["items"] + ] + orders.append( + Order( + id=o["id"], + customer_id=o["customer_id"], + items=items, + status=o["status"], + order_date=o["order_date"] + ) + ) + return orders diff --git a/test-data-demo/models.py b/test-data-demo/models.py new file mode 100644 index 0000000..9a37e4d --- /dev/null +++ b/test-data-demo/models.py @@ -0,0 +1,107 @@ +"""Data models for the order processing system.""" + +from dataclasses import dataclass +from typing import List +import re +from datetime import datetime + + +@dataclass +class Customer: + id: str + name: str + email: str + membership_level: str + created_at: str + + VALID_MEMBERSHIP_LEVELS = ("bronze", "silver", "gold", "platinum") + + def is_valid(self) -> bool: + """Validate the customer data.""" + if not self.id or not self.name: + return False + if not self._is_valid_email(self.email): + return False + if self.membership_level not in self.VALID_MEMBERSHIP_LEVELS: + return False + return True + + def _is_valid_email(self, email: str) -> bool: + """Check if email format is valid.""" + pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$' + return bool(re.match(pattern, email)) + + def get_discount_rate(self) -> float: + """Get discount rate based on membership level.""" + discounts = { + "bronze": 0.0, + "silver": 0.05, + "gold": 0.10, + "platinum": 0.15 + } + return discounts.get(self.membership_level, 0.0) + + +@dataclass +class Product: + id: str + name: str + price: float + category: str + in_stock: bool + stock_quantity: int + + def is_valid(self) -> bool: + """Validate the product data.""" + if not self.id or not self.name: + return False + if self.price <= 0: + return False + if self.stock_quantity < 0: + return False + return True + + def is_available(self, quantity: int) -> bool: + """Check if the requested quantity is available.""" + return self.in_stock and self.stock_quantity >= quantity + + +@dataclass +class OrderItem: + product_id: str + quantity: int + + def is_valid(self) -> bool: + """Validate the order item.""" + return bool(self.product_id) and self.quantity > 0 + + +@dataclass +class Order: + id: str + customer_id: str + items: List[OrderItem] + status: str + order_date: str + + VALID_STATUSES = ("pending", "confirmed", "shipped", "delivered", "cancelled") + + def is_valid(self) -> bool: + """Validate the order data.""" + if not self.id or not self.customer_id: + return False + if not self.items: + return False + if self.status not in self.VALID_STATUSES: + return False + if not all(item.is_valid() for item in self.items): + return False + return True + + def can_be_cancelled(self) -> bool: + """Check if the order can be cancelled.""" + return self.status in ("pending", "confirmed") + + def get_total_items(self) -> int: + """Get total number of items in the order.""" + return sum(item.quantity for item in self.items) diff --git a/test-data-demo/order_processor.py b/test-data-demo/order_processor.py new file mode 100644 index 0000000..7aed18c --- /dev/null +++ b/test-data-demo/order_processor.py @@ -0,0 +1,102 @@ +"""Order processing logic.""" + +from typing import Dict, List, Optional, Tuple +from models import Customer, Product, Order, OrderItem + + +class OrderProcessor: + """Handles order validation and processing.""" + + def __init__(self, customers: List[Customer], products: List[Product]): + self.customers = {c.id: c for c in customers} + self.products = {p.id: p for p in products} + + def get_customer(self, customer_id: str) -> Optional[Customer]: + """Retrieve a customer by ID.""" + return self.customers.get(customer_id) + + def get_product(self, product_id: str) -> Optional[Product]: + """Retrieve a product by ID.""" + return self.products.get(product_id) + + def validate_order(self, order: Order) -> Tuple[bool, List[str]]: + """ + Validate an order and return validation result with error messages. + """ + errors = [] + + if not order.is_valid(): + errors.append("Order has invalid structure") + return False, errors + + # Validate customer exists + customer = self.get_customer(order.customer_id) + if not customer: + errors.append(f"Customer {order.customer_id} not found") + + # Validate each item + for item in order.items: + product = self.get_product(item.product_id) + if not product: + errors.append(f"Product {item.product_id} not found") + elif not product.is_available(item.quantity): + errors.append( + f"Product {product.name} has insufficient stock " + f"(requested: {item.quantity}, available: {product.stock_quantity})" + ) + + return len(errors) == 0, errors + + def calculate_order_total(self, order: Order) -> float: + """Calculate the total price for an order including discounts.""" + if not order.is_valid(): + return 0.0 + + subtotal = 0.0 + for item in order.items: + product = self.get_product(item.product_id) + if product: + subtotal += product.price * item.quantity + + # Apply customer discount + customer = self.get_customer(order.customer_id) + if customer: + discount_rate = customer.get_discount_rate() + subtotal *= (1 - discount_rate) + + return round(subtotal, 2) + + def process_order(self, order: Order) -> Tuple[bool, str]: + """ + Process an order: validate, calculate total, and update stock. + Returns success status and message. + """ + is_valid, errors = self.validate_order(order) + if not is_valid: + return False, f"Order validation failed: {'; '.join(errors)}" + + # Update stock quantities + for item in order.items: + product = self.get_product(item.product_id) + if product: + product.stock_quantity -= item.quantity + if product.stock_quantity == 0: + product.in_stock = False + + total = self.calculate_order_total(order) + return True, f"Order processed successfully. Total: ${total:.2f}" + + def get_orders_by_status(self, orders: List[Order], status: str) -> List[Order]: + """Filter orders by status.""" + return [o for o in orders if o.status == status] + + def get_customer_orders(self, orders: List[Order], customer_id: str) -> List[Order]: + """Get all orders for a specific customer.""" + return [o for o in orders if o.customer_id == customer_id] + + def get_low_stock_products(self, threshold: int = 5) -> List[Product]: + """Get products with stock below the threshold.""" + return [ + p for p in self.products.values() + if p.stock_quantity <= threshold + ] diff --git a/test-data-demo/tests/__init__.py b/test-data-demo/tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/test-data-demo/tests/conftest.py b/test-data-demo/tests/conftest.py new file mode 100644 index 0000000..132ab9b --- /dev/null +++ b/test-data-demo/tests/conftest.py @@ -0,0 +1,35 @@ +"""Test fixtures and data loading for tests.""" + +import pytest +import os +import sys + +# Add parent directory to path for imports +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + +from data_loader import load_customers, load_products, load_orders + + +@pytest.fixture +def sample_customers(): + """Load sample customers from fixtures.""" + return load_customers() + + +@pytest.fixture +def sample_products(): + """Load sample products from fixtures.""" + return load_products() + + +@pytest.fixture +def sample_orders(): + """Load sample orders from fixtures.""" + return load_orders() + + +@pytest.fixture +def order_processor(sample_customers, sample_products): + """Create an OrderProcessor with sample data.""" + from order_processor import OrderProcessor + return OrderProcessor(sample_customers, sample_products) diff --git a/test-data-demo/tests/fixtures/README.md b/test-data-demo/tests/fixtures/README.md new file mode 100644 index 0000000..76c9443 --- /dev/null +++ b/test-data-demo/tests/fixtures/README.md @@ -0,0 +1,11 @@ +# Test Data Fixtures + +This directory should contain the following JSON fixture files: + +- `sample_customers.json` - Customer test data +- `sample_products.json` - Product test data +- `sample_orders.json` - Order test data + +**These files are intentionally missing for the demo!** + +Use GitHub Copilot to generate realistic test data that matches the schemas defined in the main README. diff --git a/test-data-demo/tests/test_customer.py b/test-data-demo/tests/test_customer.py new file mode 100644 index 0000000..35b7e44 --- /dev/null +++ b/test-data-demo/tests/test_customer.py @@ -0,0 +1,67 @@ +"""Tests for Customer model.""" + +import pytest +import os +import sys + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + +from models import Customer + + +class TestCustomerValidation: + """Test customer validation logic.""" + + def test_valid_customers_pass_validation(self, sample_customers): + """All sample customers should pass validation.""" + for customer in sample_customers: + assert customer.is_valid(), f"Customer {customer.id} should be valid" + + def test_customer_has_valid_email_format(self, sample_customers): + """All customers should have valid email addresses.""" + for customer in sample_customers: + assert "@" in customer.email, f"Customer {customer.name} has invalid email" + assert "." in customer.email.split("@")[1], f"Customer {customer.name} has invalid email domain" + + def test_customer_membership_levels_are_valid(self, sample_customers): + """All customers should have valid membership levels.""" + valid_levels = Customer.VALID_MEMBERSHIP_LEVELS + for customer in sample_customers: + assert customer.membership_level in valid_levels, \ + f"Customer {customer.name} has invalid membership level: {customer.membership_level}" + + def test_customers_have_unique_ids(self, sample_customers): + """All customers should have unique IDs.""" + ids = [c.id for c in sample_customers] + assert len(ids) == len(set(ids)), "Customer IDs are not unique" + + def test_customers_have_unique_emails(self, sample_customers): + """All customers should have unique emails.""" + emails = [c.email for c in sample_customers] + assert len(emails) == len(set(emails)), "Customer emails are not unique" + + +class TestCustomerDiscounts: + """Test customer discount rate logic.""" + + def test_discount_rates_by_membership(self, sample_customers): + """Verify discount rates are correct for each membership level.""" + expected_discounts = { + "bronze": 0.0, + "silver": 0.05, + "gold": 0.10, + "platinum": 0.15 + } + + for customer in sample_customers: + expected = expected_discounts[customer.membership_level] + actual = customer.get_discount_rate() + assert actual == expected, \ + f"Customer {customer.name} ({customer.membership_level}) has wrong discount rate" + + def test_has_customers_at_each_membership_level(self, sample_customers): + """Sample data should include at least one customer at each level.""" + levels_present = {c.membership_level for c in sample_customers} + for level in Customer.VALID_MEMBERSHIP_LEVELS: + assert level in levels_present, \ + f"No sample customer with membership level '{level}'" diff --git a/test-data-demo/tests/test_integration.py b/test-data-demo/tests/test_integration.py new file mode 100644 index 0000000..696289a --- /dev/null +++ b/test-data-demo/tests/test_integration.py @@ -0,0 +1,100 @@ +"""Integration tests for the order processing system.""" + +import pytest +import os +import sys + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + + +class TestDataIntegrity: + """Test that all fixture data works together correctly.""" + + def test_all_fixtures_load_successfully(self, sample_customers, sample_products, sample_orders): + """All fixture files should load without errors.""" + assert len(sample_customers) > 0, "No customers loaded" + assert len(sample_products) > 0, "No products loaded" + assert len(sample_orders) > 0, "No orders loaded" + + def test_minimum_data_requirements(self, sample_customers, sample_products, sample_orders): + """Fixtures should have minimum required data for meaningful tests.""" + assert len(sample_customers) >= 5, "Need at least 5 customers for tests" + assert len(sample_products) >= 10, "Need at least 10 products for tests" + assert len(sample_orders) >= 8, "Need at least 8 orders for tests" + + def test_order_customer_references_are_valid(self, sample_customers, sample_orders): + """All order customer_ids should reference actual customers.""" + customer_ids = {c.id for c in sample_customers} + + for order in sample_orders: + assert order.customer_id in customer_ids, \ + f"Order {order.id} references unknown customer {order.customer_id}" + + def test_order_product_references_are_valid(self, sample_products, sample_orders): + """All order product_ids should reference actual products.""" + product_ids = {p.id for p in sample_products} + + for order in sample_orders: + for item in order.items: + assert item.product_id in product_ids, \ + f"Order {order.id} references unknown product {item.product_id}" + + +class TestEndToEndScenarios: + """End-to-end scenario tests.""" + + def test_process_valid_order_workflow(self, order_processor, sample_orders, sample_products): + """Test complete order processing workflow.""" + # Find an order with in-stock items + in_stock_ids = {p.id for p in sample_products if p.in_stock and p.stock_quantity >= 5} + + valid_order = None + for order in sample_orders: + if all(item.product_id in in_stock_ids and item.quantity <= 5 for item in order.items): + valid_order = order + break + + if valid_order: + is_valid, errors = order_processor.validate_order(valid_order) + # Order should be valid if products have sufficient stock + + def test_low_stock_detection(self, order_processor, sample_products): + """Test detection of low stock products.""" + low_stock = order_processor.get_low_stock_products(threshold=10) + + # Verify all returned products are actually low stock + for product in low_stock: + assert product.stock_quantity <= 10 + + def test_customer_order_history(self, order_processor, sample_orders, sample_customers): + """Test retrieving customer order history.""" + # Find a customer with orders + for customer in sample_customers: + orders = order_processor.get_customer_orders(sample_orders, customer.id) + for order in orders: + assert order.customer_id == customer.id + + +class TestEdgeCases: + """Test edge cases in the data.""" + + def test_has_high_value_order(self, order_processor, sample_orders): + """Sample data should include at least one high-value order.""" + totals = [order_processor.calculate_order_total(o) for o in sample_orders] + max_total = max(totals) + assert max_total >= 100, "No high-value orders in sample data" + + def test_has_single_item_order(self, sample_orders): + """Sample data should include orders with single items.""" + single_item_orders = [o for o in sample_orders if len(o.items) == 1] + assert len(single_item_orders) > 0, "No single-item orders in sample data" + + def test_has_multi_item_order(self, sample_orders): + """Sample data should include orders with multiple items.""" + multi_item_orders = [o for o in sample_orders if len(o.items) > 1] + assert len(multi_item_orders) > 0, "No multi-item orders in sample data" + + def test_has_cancelled_order(self, sample_orders): + """Sample data should include at least one cancelled order.""" + cancelled = [o for o in sample_orders if o.status == "cancelled"] + assert len(cancelled) > 0, "No cancelled orders in sample data" diff --git a/test-data-demo/tests/test_order.py b/test-data-demo/tests/test_order.py new file mode 100644 index 0000000..79dd110 --- /dev/null +++ b/test-data-demo/tests/test_order.py @@ -0,0 +1,149 @@ +"""Tests for Order model and OrderProcessor.""" + +import pytest +import os +import sys + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + +from models import Order, OrderItem + + +class TestOrderValidation: + """Test order validation logic.""" + + def test_valid_orders_pass_validation(self, sample_orders): + """All sample orders should pass basic validation.""" + for order in sample_orders: + assert order.is_valid(), f"Order {order.id} should be valid" + + def test_orders_have_valid_status(self, sample_orders): + """All orders should have valid status values.""" + for order in sample_orders: + assert order.status in Order.VALID_STATUSES, \ + f"Order {order.id} has invalid status: {order.status}" + + def test_orders_have_at_least_one_item(self, sample_orders): + """All orders should have at least one item.""" + for order in sample_orders: + assert len(order.items) > 0, f"Order {order.id} has no items" + + def test_order_items_have_positive_quantities(self, sample_orders): + """All order items should have positive quantities.""" + for order in sample_orders: + for item in order.items: + assert item.quantity > 0, \ + f"Order {order.id} has item with non-positive quantity" + + def test_orders_have_unique_ids(self, sample_orders): + """All orders should have unique IDs.""" + ids = [o.id for o in sample_orders] + assert len(ids) == len(set(ids)), "Order IDs are not unique" + + +class TestOrderStatuses: + """Test order status distribution and logic.""" + + def test_has_orders_in_each_status(self, sample_orders): + """Sample data should include orders in various statuses.""" + statuses_present = {o.status for o in sample_orders} + + # Should have at least pending and delivered orders + assert "pending" in statuses_present, "No pending orders in sample data" + assert "delivered" in statuses_present, "No delivered orders in sample data" + + def test_cancellable_orders(self, sample_orders): + """Test can_be_cancelled logic on sample orders.""" + cancellable = [o for o in sample_orders if o.can_be_cancelled()] + non_cancellable = [o for o in sample_orders if not o.can_be_cancelled()] + + # Verify cancellable orders have correct statuses + for order in cancellable: + assert order.status in ("pending", "confirmed"), \ + f"Order {order.id} should not be cancellable with status {order.status}" + + # Verify non-cancellable orders have correct statuses + for order in non_cancellable: + assert order.status in ("shipped", "delivered", "cancelled"), \ + f"Order {order.id} should be cancellable with status {order.status}" + + +class TestOrderProcessorValidation: + """Test OrderProcessor validation.""" + + def test_orders_reference_valid_customers(self, order_processor, sample_orders): + """All orders should reference existing customers.""" + for order in sample_orders: + customer = order_processor.get_customer(order.customer_id) + assert customer is not None, \ + f"Order {order.id} references non-existent customer {order.customer_id}" + + def test_order_items_reference_valid_products(self, order_processor, sample_orders): + """All order items should reference existing products.""" + for order in sample_orders: + for item in order.items: + product = order_processor.get_product(item.product_id) + assert product is not None, \ + f"Order {order.id} references non-existent product {item.product_id}" + + def test_validate_order_returns_success_for_valid_orders(self, order_processor, sample_orders, sample_products): + """Valid orders should pass processor validation.""" + # Find orders that reference in-stock products + in_stock_product_ids = { + p.id for p in sample_products if p.in_stock and p.stock_quantity > 0 + } + + for order in sample_orders: + # Check if all items are for in-stock products + all_in_stock = all( + item.product_id in in_stock_product_ids + for item in order.items + ) + + if all_in_stock: + is_valid, errors = order_processor.validate_order(order) + # Note: may fail due to quantity issues, which is okay + + +class TestOrderCalculations: + """Test order total calculations.""" + + def test_calculate_total_returns_positive_for_valid_orders(self, order_processor, sample_orders): + """Order totals should be positive for valid orders.""" + for order in sample_orders: + total = order_processor.calculate_order_total(order) + assert total >= 0, f"Order {order.id} has negative total" + + def test_platinum_customers_get_best_discount(self, order_processor, sample_customers, sample_orders): + """Platinum customers should have lowest totals for same items.""" + platinum_customers = [c for c in sample_customers if c.membership_level == "platinum"] + bronze_customers = [c for c in sample_customers if c.membership_level == "bronze"] + + assert len(platinum_customers) > 0, "No platinum customers to test" + assert len(bronze_customers) > 0, "No bronze customers to test" + + def test_get_total_items_is_correct(self, sample_orders): + """get_total_items should return correct count.""" + for order in sample_orders: + expected = sum(item.quantity for item in order.items) + actual = order.get_total_items() + assert actual == expected, \ + f"Order {order.id} total items mismatch: expected {expected}, got {actual}" + + +class TestOrderFiltering: + """Test order filtering methods.""" + + def test_filter_by_status(self, order_processor, sample_orders): + """Should correctly filter orders by status.""" + for status in Order.VALID_STATUSES: + filtered = order_processor.get_orders_by_status(sample_orders, status) + for order in filtered: + assert order.status == status + + def test_filter_by_customer(self, order_processor, sample_orders, sample_customers): + """Should correctly filter orders by customer.""" + for customer in sample_customers: + filtered = order_processor.get_customer_orders(sample_orders, customer.id) + for order in filtered: + assert order.customer_id == customer.id diff --git a/test-data-demo/tests/test_product.py b/test-data-demo/tests/test_product.py new file mode 100644 index 0000000..9bbe951 --- /dev/null +++ b/test-data-demo/tests/test_product.py @@ -0,0 +1,95 @@ +"""Tests for Product model.""" + +import pytest +import os +import sys + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + +from models import Product + + +class TestProductValidation: + """Test product validation logic.""" + + def test_valid_products_pass_validation(self, sample_products): + """All sample products should pass validation.""" + for product in sample_products: + assert product.is_valid(), f"Product {product.id} should be valid" + + def test_products_have_positive_prices(self, sample_products): + """All products should have positive prices.""" + for product in sample_products: + assert product.price > 0, f"Product {product.name} has non-positive price" + + def test_products_have_non_negative_stock(self, sample_products): + """All products should have non-negative stock quantities.""" + for product in sample_products: + assert product.stock_quantity >= 0, \ + f"Product {product.name} has negative stock" + + def test_products_have_unique_ids(self, sample_products): + """All products should have unique IDs.""" + ids = [p.id for p in sample_products] + assert len(ids) == len(set(ids)), "Product IDs are not unique" + + def test_products_have_categories(self, sample_products): + """All products should have a category assigned.""" + for product in sample_products: + assert product.category, f"Product {product.name} has no category" + + +class TestProductAvailability: + """Test product availability logic.""" + + def test_in_stock_products_have_positive_quantity(self, sample_products): + """Products marked as in_stock should have positive quantity.""" + for product in sample_products: + if product.in_stock: + assert product.stock_quantity > 0, \ + f"Product {product.name} is in_stock but has zero quantity" + + def test_out_of_stock_products_have_zero_quantity(self, sample_products): + """Products not in_stock should have zero quantity.""" + for product in sample_products: + if not product.in_stock: + assert product.stock_quantity == 0, \ + f"Product {product.name} is not in_stock but has quantity" + + def test_availability_check_works(self, sample_products): + """is_available should return correct results.""" + for product in sample_products: + if product.in_stock and product.stock_quantity >= 1: + assert product.is_available(1), \ + f"Product {product.name} should be available for quantity 1" + + # Should not be available for more than stock + assert not product.is_available(product.stock_quantity + 1), \ + f"Product {product.name} should not be available for quantity exceeding stock" + + def test_has_mix_of_stock_statuses(self, sample_products): + """Sample data should include both in-stock and out-of-stock products.""" + in_stock = [p for p in sample_products if p.in_stock] + out_of_stock = [p for p in sample_products if not p.in_stock] + + assert len(in_stock) > 0, "No in-stock products in sample data" + assert len(out_of_stock) > 0, "No out-of-stock products in sample data" + + +class TestProductCategories: + """Test product categorization.""" + + def test_has_multiple_categories(self, sample_products): + """Sample data should have products in multiple categories.""" + categories = {p.category for p in sample_products} + assert len(categories) >= 3, \ + f"Expected at least 3 categories, found: {categories}" + + def test_price_ranges_are_realistic(self, sample_products): + """Products should have a range of prices.""" + prices = [p.price for p in sample_products] + min_price = min(prices) + max_price = max(prices) + + assert min_price < 50, "No low-priced items in sample data" + assert max_price > 100, "No high-priced items in sample data" diff --git a/testing-demo.md b/testing-demo.md new file mode 100644 index 0000000..8b0286f --- /dev/null +++ b/testing-demo.md @@ -0,0 +1,18 @@ +## are there tests in this app +- "are there any unit tests already in this app? if so can you run them to see if they pass? if they don't can you fix them?" + +## Plan mode +- "I'm looking to write unit tests for this app but I need to be sure they are comprehensive and have as close to 100% test coverage as possible. Can you help me think through corner cases and uncommon scenarios I may need to test here? I want to log all of them in a file somewhere" +- save to file (i.e., do not start implementation, use other button) +- run prompt file + +## execute / write tests / agent mode + +## TDD +- "can you help me write a new minimally featured banking app? I want it to have a log in feature of rhte user (dummy accounts for now) be able to add or remove money from the user's account, add or delete accounts like checking or savings from the users account and transfer money from accounts within a users account" + +## MCP +- playwright + +## Test data generation +- "can you generate test data for this app? In particular I need to popiulate the following files: `tests/fixtures/sample_customers.json`, `tests/fixtures/sample_products.json`, and `tests/fixtures/sample_orders.json` \ No newline at end of file diff --git a/web-utils/README.md b/web-utils/README.md new file mode 100644 index 0000000..28a5e57 --- /dev/null +++ b/web-utils/README.md @@ -0,0 +1,19 @@ +# Web Utils + +A small client-side utilities module to demonstrate ESLint integration, with realistic file names and structure. + +## Setup + +```bash +cd web-utils +npm install +``` + +## Lint + +```bash +npm run lint +npm run lint:fix +``` + +The `src/list-utils.js` contains intentional issues to show lint findings while still resembling real application code. `src/math-utils.js` is mostly clean. diff --git a/web-utils/eslint.config.js b/web-utils/eslint.config.js new file mode 100644 index 0000000..8427f73 --- /dev/null +++ b/web-utils/eslint.config.js @@ -0,0 +1,29 @@ +// ESLint v9 flat config +import js from '@eslint/js'; + +export default [ + js.configs.recommended, + { + ignores: ['node_modules/**'], + files: ['src/**/*.js'], + languageOptions: { + ecmaVersion: 2022, + sourceType: 'module', + globals: { + console: 'readonly', + window: 'readonly', + document: 'readonly', + }, + }, + rules: { + 'no-unused-vars': 'warn', + 'no-undef': 'error', + eqeqeq: ['error', 'always'], + semi: ['error', 'always'], + quotes: ['error', 'single', { avoidEscape: true }], + 'no-var': 'error', + 'prefer-const': 'warn', + 'no-console': 'off', + }, + }, +]; diff --git a/web-utils/package-lock.json b/web-utils/package-lock.json new file mode 100644 index 0000000..043e7b4 --- /dev/null +++ b/web-utils/package-lock.json @@ -0,0 +1,1077 @@ +{ + "name": "web-utils", + "version": "0.1.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "web-utils", + "version": "0.1.0", + "devDependencies": { + "@eslint/js": "^9.39.1", + "eslint": "^9.39.1" + } + }, + "node_modules/@eslint-community/eslint-utils": { + "version": "4.9.0", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.9.0.tgz", + "integrity": "sha512-ayVFHdtZ+hsq1t2Dy24wCmGXGe4q9Gu3smhLYALJrr473ZH27MsnSL+LKUlimp4BWJqMDMLmPpx/Q9R3OAlL4g==", + "dev": true, + "license": "MIT", + "dependencies": { + "eslint-visitor-keys": "^3.4.3" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + }, + "peerDependencies": { + "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0" + } + }, + "node_modules/@eslint-community/eslint-utils/node_modules/eslint-visitor-keys": { + "version": "3.4.3", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", + "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@eslint-community/regexpp": { + "version": "4.12.2", + "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.12.2.tgz", + "integrity": "sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^12.0.0 || ^14.0.0 || >=16.0.0" + } + }, + "node_modules/@eslint/config-array": { + "version": "0.21.1", + "resolved": "https://registry.npmjs.org/@eslint/config-array/-/config-array-0.21.1.tgz", + "integrity": "sha512-aw1gNayWpdI/jSYVgzN5pL0cfzU02GT3NBpeT/DXbx1/1x7ZKxFPd9bwrzygx/qiwIQiJ1sw/zD8qY/kRvlGHA==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/object-schema": "^2.1.7", + "debug": "^4.3.1", + "minimatch": "^3.1.2" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/config-helpers": { + "version": "0.4.2", + "resolved": "https://registry.npmjs.org/@eslint/config-helpers/-/config-helpers-0.4.2.tgz", + "integrity": "sha512-gBrxN88gOIf3R7ja5K9slwNayVcZgK6SOUORm2uBzTeIEfeVaIhOpCtTox3P6R7o2jLFwLFTLnC7kU/RGcYEgw==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^0.17.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/core": { + "version": "0.17.0", + "resolved": "https://registry.npmjs.org/@eslint/core/-/core-0.17.0.tgz", + "integrity": "sha512-yL/sLrpmtDaFEiUj1osRP4TI2MDz1AddJL+jZ7KSqvBuliN4xqYY54IfdN8qD8Toa6g1iloph1fxQNkjOxrrpQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@types/json-schema": "^7.0.15" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/eslintrc": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-3.3.3.tgz", + "integrity": "sha512-Kr+LPIUVKz2qkx1HAMH8q1q6azbqBAsXJUxBl/ODDuVPX45Z9DfwB8tPjTi6nNZ8BuM3nbJxC5zCAg5elnBUTQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "ajv": "^6.12.4", + "debug": "^4.3.2", + "espree": "^10.0.1", + "globals": "^14.0.0", + "ignore": "^5.2.0", + "import-fresh": "^3.2.1", + "js-yaml": "^4.1.1", + "minimatch": "^3.1.2", + "strip-json-comments": "^3.1.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/@eslint/js": { + "version": "9.39.1", + "resolved": "https://registry.npmjs.org/@eslint/js/-/js-9.39.1.tgz", + "integrity": "sha512-S26Stp4zCy88tH94QbBv3XCuzRQiZ9yXofEILmglYTh/Ug/a9/umqvgFtYBAo3Lp0nsI/5/qH1CCrbdK3AP1Tw==", + "dev": true, + "license": "MIT", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://eslint.org/donate" + } + }, + "node_modules/@eslint/object-schema": { + "version": "2.1.7", + "resolved": "https://registry.npmjs.org/@eslint/object-schema/-/object-schema-2.1.7.tgz", + "integrity": "sha512-VtAOaymWVfZcmZbp6E2mympDIHvyjXs/12LqWYjVw6qjrfF+VK+fyG33kChz3nnK+SU5/NeHOqrTEHS8sXO3OA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@eslint/plugin-kit": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.4.1.tgz", + "integrity": "sha512-43/qtrDUokr7LJqoF2c3+RInu/t4zfrpYdoSDfYyhg52rwLV6TnOvdG4fXm7IkSB3wErkcmJS9iEhjVtOSEjjA==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@eslint/core": "^0.17.0", + "levn": "^0.4.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + } + }, + "node_modules/@humanfs/core": { + "version": "0.19.1", + "resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.1.tgz", + "integrity": "sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanfs/node": { + "version": "0.16.7", + "resolved": "https://registry.npmjs.org/@humanfs/node/-/node-0.16.7.tgz", + "integrity": "sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==", + "dev": true, + "license": "Apache-2.0", + "dependencies": { + "@humanfs/core": "^0.19.1", + "@humanwhocodes/retry": "^0.4.0" + }, + "engines": { + "node": ">=18.18.0" + } + }, + "node_modules/@humanwhocodes/module-importer": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/@humanwhocodes/module-importer/-/module-importer-1.0.1.tgz", + "integrity": "sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=12.22" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@humanwhocodes/retry": { + "version": "0.4.3", + "resolved": "https://registry.npmjs.org/@humanwhocodes/retry/-/retry-0.4.3.tgz", + "integrity": "sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": ">=18.18" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/nzakas" + } + }, + "node_modules/@types/estree": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/json-schema": { + "version": "7.0.15", + "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz", + "integrity": "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==", + "dev": true, + "license": "MIT" + }, + "node_modules/acorn": { + "version": "8.15.0", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz", + "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==", + "dev": true, + "license": "MIT", + "peer": true, + "bin": { + "acorn": "bin/acorn" + }, + "engines": { + "node": ">=0.4.0" + } + }, + "node_modules/acorn-jsx": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.3.2.tgz", + "integrity": "sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==", + "dev": true, + "license": "MIT", + "peerDependencies": { + "acorn": "^6.0.0 || ^7.0.0 || ^8.0.0" + } + }, + "node_modules/ajv": { + "version": "6.12.6", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", + "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", + "dev": true, + "license": "MIT", + "dependencies": { + "fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/argparse": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz", + "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", + "dev": true, + "license": "Python-2.0" + }, + "node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/brace-expansion": { + "version": "1.1.12", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz", + "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==", + "dev": true, + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "node_modules/callsites": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz", + "integrity": "sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/chalk": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/chalk?sponsor=1" + } + }, + "node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true, + "license": "MIT" + }, + "node_modules/concat-map": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", + "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==", + "dev": true, + "license": "MIT" + }, + "node_modules/cross-spawn": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz", + "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", + "dev": true, + "license": "MIT", + "dependencies": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/debug": { + "version": "4.4.3", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz", + "integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==", + "dev": true, + "license": "MIT", + "dependencies": { + "ms": "^2.1.3" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/deep-is": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/deep-is/-/deep-is-0.1.4.tgz", + "integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/escape-string-regexp": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/eslint": { + "version": "9.39.1", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-9.39.1.tgz", + "integrity": "sha512-BhHmn2yNOFA9H9JmmIVKJmd288g9hrVRDkdoIgRCRuSySRUHH7r/DI6aAXW9T1WwUuY3DFgrcaqB+deURBLR5g==", + "dev": true, + "license": "MIT", + "peer": true, + "dependencies": { + "@eslint-community/eslint-utils": "^4.8.0", + "@eslint-community/regexpp": "^4.12.1", + "@eslint/config-array": "^0.21.1", + "@eslint/config-helpers": "^0.4.2", + "@eslint/core": "^0.17.0", + "@eslint/eslintrc": "^3.3.1", + "@eslint/js": "9.39.1", + "@eslint/plugin-kit": "^0.4.1", + "@humanfs/node": "^0.16.6", + "@humanwhocodes/module-importer": "^1.0.1", + "@humanwhocodes/retry": "^0.4.2", + "@types/estree": "^1.0.6", + "ajv": "^6.12.4", + "chalk": "^4.0.0", + "cross-spawn": "^7.0.6", + "debug": "^4.3.2", + "escape-string-regexp": "^4.0.0", + "eslint-scope": "^8.4.0", + "eslint-visitor-keys": "^4.2.1", + "espree": "^10.4.0", + "esquery": "^1.5.0", + "esutils": "^2.0.2", + "fast-deep-equal": "^3.1.3", + "file-entry-cache": "^8.0.0", + "find-up": "^5.0.0", + "glob-parent": "^6.0.2", + "ignore": "^5.2.0", + "imurmurhash": "^0.1.4", + "is-glob": "^4.0.0", + "json-stable-stringify-without-jsonify": "^1.0.1", + "lodash.merge": "^4.6.2", + "minimatch": "^3.1.2", + "natural-compare": "^1.4.0", + "optionator": "^0.9.3" + }, + "bin": { + "eslint": "bin/eslint.js" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://eslint.org/donate" + }, + "peerDependencies": { + "jiti": "*" + }, + "peerDependenciesMeta": { + "jiti": { + "optional": true + } + } + }, + "node_modules/eslint-scope": { + "version": "8.4.0", + "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-8.4.0.tgz", + "integrity": "sha512-sNXOfKCn74rt8RICKMvJS7XKV/Xk9kA7DyJr8mJik3S7Cwgy3qlkkmyS2uQB3jiJg6VNdZd/pDBJu0nvG2NlTg==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "esrecurse": "^4.3.0", + "estraverse": "^5.2.0" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/eslint-visitor-keys": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-4.2.1.tgz", + "integrity": "sha512-Uhdk5sfqcee/9H/rCOJikYz67o0a2Tw2hGRPOG2Y1R2dg7brRe1uG0yaNQDHu+TO/uQPF/5eCapvYSmHUjt7JQ==", + "dev": true, + "license": "Apache-2.0", + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/espree": { + "version": "10.4.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-10.4.0.tgz", + "integrity": "sha512-j6PAQ2uUr79PZhBjP5C5fhl8e39FmRnOjsD5lGnWrFU8i2G776tBK7+nP8KuQUTTyAZUwfQqXAgrVH5MbH9CYQ==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "acorn": "^8.15.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^4.2.1" + }, + "engines": { + "node": "^18.18.0 || ^20.9.0 || >=21.1.0" + }, + "funding": { + "url": "https://opencollective.com/eslint" + } + }, + "node_modules/esquery": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.6.0.tgz", + "integrity": "sha512-ca9pw9fomFcKPvFLXhBKUK90ZvGibiGOvRJNbjljY7s7uq/5YO4BOzcYtJqExdx99rF6aAcnRxHmcUHcz6sQsg==", + "dev": true, + "license": "BSD-3-Clause", + "dependencies": { + "estraverse": "^5.1.0" + }, + "engines": { + "node": ">=0.10" + } + }, + "node_modules/esrecurse": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", + "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "estraverse": "^5.2.0" + }, + "engines": { + "node": ">=4.0" + } + }, + "node_modules/estraverse": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=4.0" + } + }, + "node_modules/esutils": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", + "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==", + "dev": true, + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/fast-deep-equal": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", + "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-json-stable-stringify": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", + "dev": true, + "license": "MIT" + }, + "node_modules/fast-levenshtein": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz", + "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", + "dev": true, + "license": "MIT" + }, + "node_modules/file-entry-cache": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz", + "integrity": "sha512-XXTUwCvisa5oacNGRP9SfNtYBNAMi+RPwBFmblZEF7N7swHYQS6/Zfk7SRwx4D5j3CH211YNRco1DEMNVfZCnQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "flat-cache": "^4.0.0" + }, + "engines": { + "node": ">=16.0.0" + } + }, + "node_modules/find-up": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz", + "integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==", + "dev": true, + "license": "MIT", + "dependencies": { + "locate-path": "^6.0.0", + "path-exists": "^4.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/flat-cache": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/flat-cache/-/flat-cache-4.0.1.tgz", + "integrity": "sha512-f7ccFPK3SXFHpx15UIGyRJ/FJQctuKZ0zVuN3frBo4HnK3cay9VEW0R6yPYFHC0AgqhukPzKjq22t5DmAyqGyw==", + "dev": true, + "license": "MIT", + "dependencies": { + "flatted": "^3.2.9", + "keyv": "^4.5.4" + }, + "engines": { + "node": ">=16" + } + }, + "node_modules/flatted": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/flatted/-/flatted-3.3.3.tgz", + "integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==", + "dev": true, + "license": "ISC" + }, + "node_modules/glob-parent": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz", + "integrity": "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A==", + "dev": true, + "license": "ISC", + "dependencies": { + "is-glob": "^4.0.3" + }, + "engines": { + "node": ">=10.13.0" + } + }, + "node_modules/globals": { + "version": "14.0.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-14.0.0.tgz", + "integrity": "sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=18" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/ignore": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.3.2.tgz", + "integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 4" + } + }, + "node_modules/import-fresh": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.1.tgz", + "integrity": "sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "parent-module": "^1.0.0", + "resolve-from": "^4.0.0" + }, + "engines": { + "node": ">=6" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/imurmurhash": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", + "integrity": "sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.8.19" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "dev": true, + "license": "MIT", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/isexe": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", + "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", + "dev": true, + "license": "ISC" + }, + "node_modules/js-yaml": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz", + "integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "argparse": "^2.0.1" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/json-buffer": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.1.tgz", + "integrity": "sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-schema-traverse": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", + "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true, + "license": "MIT" + }, + "node_modules/json-stable-stringify-without-jsonify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz", + "integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==", + "dev": true, + "license": "MIT" + }, + "node_modules/keyv": { + "version": "4.5.4", + "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz", + "integrity": "sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==", + "dev": true, + "license": "MIT", + "dependencies": { + "json-buffer": "3.0.1" + } + }, + "node_modules/levn": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/levn/-/levn-0.4.1.tgz", + "integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1", + "type-check": "~0.4.0" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/locate-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz", + "integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-locate": "^5.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/lodash.merge": { + "version": "4.6.2", + "resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz", + "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/minimatch": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz", + "integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==", + "dev": true, + "license": "ISC", + "dependencies": { + "brace-expansion": "^1.1.7" + }, + "engines": { + "node": "*" + } + }, + "node_modules/ms": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==", + "dev": true, + "license": "MIT" + }, + "node_modules/natural-compare": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz", + "integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==", + "dev": true, + "license": "MIT" + }, + "node_modules/optionator": { + "version": "0.9.4", + "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz", + "integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==", + "dev": true, + "license": "MIT", + "dependencies": { + "deep-is": "^0.1.3", + "fast-levenshtein": "^2.0.6", + "levn": "^0.4.1", + "prelude-ls": "^1.2.1", + "type-check": "^0.4.0", + "word-wrap": "^1.2.5" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/p-limit": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz", + "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "yocto-queue": "^0.1.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/p-locate": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-5.0.0.tgz", + "integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==", + "dev": true, + "license": "MIT", + "dependencies": { + "p-limit": "^3.0.2" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/parent-module": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/parent-module/-/parent-module-1.0.1.tgz", + "integrity": "sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g==", + "dev": true, + "license": "MIT", + "dependencies": { + "callsites": "^3.0.0" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/path-exists": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/path-key": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/prelude-ls": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/prelude-ls/-/prelude-ls-1.2.1.tgz", + "integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6" + } + }, + "node_modules/resolve-from": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz", + "integrity": "sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/shebang-command": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "dev": true, + "license": "MIT", + "dependencies": { + "shebang-regex": "^3.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/shebang-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + } + }, + "node_modules/strip-json-comments": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", + "integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, + "node_modules/supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "license": "MIT", + "dependencies": { + "has-flag": "^4.0.0" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/type-check": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz", + "integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==", + "dev": true, + "license": "MIT", + "dependencies": { + "prelude-ls": "^1.2.1" + }, + "engines": { + "node": ">= 0.8.0" + } + }, + "node_modules/uri-js": { + "version": "4.4.1", + "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", + "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "punycode": "^2.1.0" + } + }, + "node_modules/which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "license": "ISC", + "dependencies": { + "isexe": "^2.0.0" + }, + "bin": { + "node-which": "bin/node-which" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/word-wrap": { + "version": "1.2.5", + "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz", + "integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/yocto-queue": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", + "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + } + } +} diff --git a/web-utils/package.json b/web-utils/package.json new file mode 100644 index 0000000..cfd6910 --- /dev/null +++ b/web-utils/package.json @@ -0,0 +1,14 @@ +{ + "name": "web-utils", + "version": "0.1.0", + "private": true, + "type": "module", + "scripts": { + "lint": "eslint \"src/**/*.js\"", + "lint:fix": "eslint \"src/**/*.js\" --fix" + }, + "devDependencies": { + "@eslint/js": "^9.39.1", + "eslint": "^9.39.1" + } +} diff --git a/web-utils/src/list-utils.js b/web-utils/src/list-utils.js new file mode 100644 index 0000000..be8f2b9 --- /dev/null +++ b/web-utils/src/list-utils.js @@ -0,0 +1,29 @@ +// List utilities for client-side rendering (with intentional issues for lint demo) +var foo = 1 // missing semicolon, uses var +let bar = 2 +let unused = 3 + +export function areEqual(a, b) { + if (a == b) { // eqeqeq violation + console.log("Equal!\n"); // double quotes + } +} + +export function renderList(items) { + let bar = 'shadowed'; + console.log(bar) + + // undefined variable usage + console.log(result) + + // prefer-const violation + let arr = [1,2,3] + arr.push(4) + + // mixed spaces and tabs + for (var i = 0; i < items.length; i++) { + console.log(items[i]) + } + + areEqual(foo, bar) +} diff --git a/web-utils/src/math-utils.js b/web-utils/src/math-utils.js new file mode 100644 index 0000000..a8dcd7e --- /dev/null +++ b/web-utils/src/math-utils.js @@ -0,0 +1,15 @@ +// Basic math helpers used by UI widgets +export const answer = 42; + +export function greet(name) { + if (typeof name !== 'string') return; + console.log('Hello, ' + name + '!'); +} + +export function sum(nums) { + if (!Array.isArray(nums)) return 0; + return nums.reduce((acc, n) => acc + n, 0); +} + +greet('World'); +console.log('Sum:', sum([1, 2, 3]));