Copilot AI commented Oct 26, 2025

Users lack clear guidance on configuring shard counts for Test Workflows: where to define the configuration, how to choose an appropriate number of shards, and how different shard counts affect performance.

Changes

Configuration Setup

  • Location and structure of TestWorkflow YAML files
  • Multiple application methods (kubectl, CLI, Dashboard)
  • Minimal sharded workflow example (apply command sketched after this list)
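
A minimal sketch of applying the manifest, assuming it is saved as cypress-sharded.yaml (a hypothetical file name) and Testkube is installed in the testkube namespace:

# Apply the TestWorkflow with kubectl; the Testkube CLI and Dashboard offer equivalent flows
kubectl apply -f cypress-sharded.yaml -n testkube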

Performance Guidance

  • Impact table correlating shard counts to execution time reduction
  • Specific recommendations by test suite size (1-2 shards for <10 tests, 3-5 for 10-50, 5-10 for 50-200, 10-20 for 200+)
  • Resource considerations: CPU/memory per shard, cluster capacity, cost implications (a worked sketch follows this list)
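
To make the resource considerations concrete, a sketch with illustrative numbers (not recommendations): each shard runs in its own pod, so per-shard requests multiply by the shard count.

parallel:
  count: 5            # static sharding: always 5 pods
  container:
    resources:
      requests:
        cpu: 500m     # the cluster must schedule 5 x 500m = 2.5 CPU in total
        memory: 512Mi # and 5 x 512Mi = 2.5Gi of memory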

Configuration Steps

  1. Static vs dynamic sharding strategies (count vs maxCount + shards; contrasted in the sketch after this list)
  2. Resource limit definitions
  3. Data distribution patterns
  4. Verification commands
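
A sketch contrasting the two strategies (the file glob is illustrative), plus a basic verification command:

# Static: a fixed number of shards, regardless of workload size
parallel:
  count: 4

# Dynamic: one shard per matched file, capped at maxCount
parallel:
  maxCount: 8
  shards:
    testFiles: 'glob("tests/**/*.spec.ts")'

# Verify the workflow resource exists (namespace may differ in your install)
kubectl get testworkflows -n testkube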

Use Cases

  • Cypress E2E test distribution with dynamic sharding
  • K6 distributed load generation across 10 nodes (sketched after this list)
  • Playwright multi-browser testing with matrix + sharding
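
For the K6 case, a minimal sketch assuming the script is already available to the container at a hypothetical path (content checkout omitted) and the VU split is illustrative:

apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: k6-distributed
spec:
  steps:
  - name: Generate load
    parallel:
      count: 10  # one shard per load-generation node
      run:
        image: grafana/k6:latest
        # each shard drives 100 of the 1000 total virtual users
        args: [run, --vus, "100", --duration, "5m", /scripts/load-test.js]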

Troubleshooting

  • Common issues: pending shards, uneven distribution, OOM errors (diagnosis commands sketched after this list)
  • Best practices: conservative scaling, resource monitoring, descriptive naming, retry logic, cost vs speed tradeoffs
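
A few standard kubectl checks for the issues above (namespace and pod names are illustrative):

# Pending shards: inspect scheduling events for insufficient CPU/memory
kubectl describe pod <shard-pod-name> -n testkube

# OOM errors: a container killed for memory reports OOMKilled as its last terminated reason
kubectl get pod <shard-pod-name> -n testkube \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'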

Example

apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: cypress-sharded
spec:
  # content checkout (e.g. a git step) omitted for brevity
  steps:
  - name: Run tests
    parallel:
      maxCount: 5  # Adapts to test count
      shards:
        testFiles: 'glob("cypress/e2e/**/*.cy.js")'
      container:
        resources:
          requests:
            cpu: 1
            memory: 1Gi
      run:
        image: cypress/included:13.6.4  # tag illustrative; the image entrypoint is `cypress run`
        args: [--spec, '{{ join(shard.testFiles, ",") }}']
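
To execute and observe the sharded run (a sketch; namespace and CLI behavior vary by installation):

# Trigger an execution via the Testkube CLI
testkube run testworkflow cypress-sharded

# Watch the shard pods come up (up to 5, one per shard)
kubectl get pods -n testkube --watch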

Screenshot

Configuration and performance guidance sections

Original prompt

This section details the original issue you should resolve.

<issue_title>Improve Documentation on Configuring Workflow Sharding</issue_title>
<issue_description>

Current State

The current documentation lacks detailed instructions on how to define and configure the parameter that sets the number of shards for workflow sharding.

Requested Improvement

Add detailed instructions and examples on how to properly configure the shard number parameter, including:

  • Where to find the configuration file
  • Syntax and examples for defining this parameter
  • Explanation of the impact that different shard numbers have on performance and scalability

Context

  • Users are unclear about how to set the number of shards correctly.
  • There is a lack of examples and guidance on finding and modifying the config file.
  • The documentation should include specific, step-by-step instructions to reduce confusion.

Acceptance Criteria

  • Documentation includes a clear explanation of the config file's location.
  • Step-by-step guide on setting the shard number, with examples.
  • Description of how different shard numbers affect performance.
  • Examples and common use cases added for guidance.</issue_description>

Comments on the Issue (you are @copilot in this section)

@olensmar 📎 **Related Slack Discussion:** https://saltsealabs.slack.com/archives/C09NQKN8FBL/p1761479785260449
  • Fixes kubeshop/testkube-docs#399



Co-authored-by: olensmar <1917063+olensmar@users.noreply.github.com>
Copilot AI changed the title [WIP] Improve documentation on configuring workflow sharding Add comprehensive workflow sharding configuration guide Oct 26, 2025
Copilot finished work on behalf of olensmar October 26, 2025 12:14
Copilot AI requested a review from olensmar October 26, 2025 12:14