Merged
3 changes: 2 additions & 1 deletion .buildkite/Dockerfile
@@ -8,7 +8,8 @@ COPY --from=ruby / /
COPY --from=cypress / /
COPY --from=python / /

RUN gem install rspec
RUN gem install rspec cucumber base64
RUN gem install bigdecimal -v 3.2.0
RUN yarn global add jest
RUN pip install pytest
RUN pip install buildkite-test-collector==0.2.0
15 changes: 8 additions & 7 deletions README.md
@@ -4,13 +4,13 @@ Buildkite Test Engine Client (bktec) is an open source tool to orchestrate your

bktec supports multiple test runners and offers various features to enhance your testing workflow. Below is a comparison of the features supported by each test runner:

| Feature | RSpec | Jest | Playwright | Cypress | pytest | pants (pytest) | Go test |
| -------------------------------------------------- | :---: | :--: | :---------: | :-----: | :-----: | :------------: | :-----: |
| Filter test files | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | |
| Automatically retry failed test | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | |
| Split slow files by individual test example | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | |
| Mute tests (ignore test failures) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | |
| Skip tests | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | |
| Feature | RSpec | Jest | Playwright | Cypress | pytest | pants (pytest) | Go test | Cucumber |
| -------------------------------------------------- | :---: | :--: | :--------: | :-----: | :----: | :------------: | :-----: | :------: |
| Filter test files | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ |
| Automatically retry failed test | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Split slow files by individual test example | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Mute tests (ignore test failures) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Skip tests | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |

## Installation
The latest version of bktec can be downloaded from https://github.com/buildkite/test-engine-client/releases
@@ -63,6 +63,7 @@ To configure the test runner for bktec, please refer to the detailed guides prov
- [pytest pants](./docs/pytest-pants.md)
- [go test](./docs/gotest.md)
- [RSpec](./docs/rspec.md)
- [Cucumber](./docs/cucumber.md)


### Running bktec
4 changes: 4 additions & 0 deletions bin/setup
@@ -47,6 +47,10 @@ npx cypress verify
cd ../rspec
bundle install

# Install Cucumber dependencies
cd ../cucumber
bundle install

# Install various python things, dependencies for pytest test cases
if [ -n "$VIRTUAL_ENV" ]; then
echo "Python virtual environment is active: $VIRTUAL_ENV"
58 changes: 58 additions & 0 deletions docs/cucumber.md
@@ -0,0 +1,58 @@
# Using bktec with Cucumber

To integrate bktec with Cucumber, set the `BUILDKITE_TEST_ENGINE_TEST_RUNNER` environment variable to `cucumber`. Then set `BUILDKITE_TEST_ENGINE_RESULT_PATH` to the path where the JSON result should be stored. bktec instructs Cucumber to write its JSON output to this path, and reads the file back to verify results and decide which scenarios to retry.

```sh
export BUILDKITE_TEST_ENGINE_TEST_RUNNER=cucumber
export BUILDKITE_TEST_ENGINE_RESULT_PATH=tmp/cucumber-result.json
```

## Configure test command
By default, bktec runs Cucumber with the following command:

```sh
bundle exec cucumber --format pretty --format json --out {{resultPath}} {{testExamples}}
```

In this command:
- `{{testExamples}}` is replaced by bktec with the list of feature files or scenarios to run.
- `{{resultPath}}` is replaced with the value set in `BUILDKITE_TEST_ENGINE_RESULT_PATH`.

You can customise this command using the `BUILDKITE_TEST_ENGINE_TEST_CMD` environment variable.

```sh
export BUILDKITE_TEST_ENGINE_TEST_CMD="bundle exec cucumber --format json --out {{resultPath}} {{testExamples}}"
```

> **IMPORTANT** – Make sure your custom command includes `--format json --out {{resultPath}}` so that bktec can parse the results.
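
As a sanity check, here is roughly what the substituted command looks like once bktec replaces the placeholders. The feature files are illustrative only and depend on your repository layout:

```sh
# Illustrative expansion of the default command with
# BUILDKITE_TEST_ENGINE_RESULT_PATH=tmp/cucumber-result.json;
# the feature file list is hypothetical.
bundle exec cucumber --format pretty --format json --out tmp/cucumber-result.json \
  features/login.feature features/signup.feature
```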

## Filter feature files
By default, bktec runs feature files that match the `features/**/*.feature` pattern. You can customise this pattern using the `BUILDKITE_TEST_ENGINE_TEST_FILE_PATTERN` environment variable.

```sh
export BUILDKITE_TEST_ENGINE_TEST_FILE_PATTERN=features/login/**/*.feature
```

You can also exclude certain directories or files with `BUILDKITE_TEST_ENGINE_TEST_FILE_EXCLUDE_PATTERN`:

```sh
export BUILDKITE_TEST_ENGINE_TEST_FILE_EXCLUDE_PATTERN=features/experimental
```

> **TIP** – The patterns use the same glob syntax as the [zzglob](https://github.com/DrJosh9000/zzglob#pattern-syntax) library.

## Automatically retry failed scenarios
Use `BUILDKITE_TEST_ENGINE_RETRY_COUNT` to automatically retry failed scenarios. When this variable is set and greater than `0`, failed scenarios will be re-run using the command from `BUILDKITE_TEST_ENGINE_RETRY_CMD` (or the main test command if not set).

```sh
export BUILDKITE_TEST_ENGINE_RETRY_COUNT=2
```

A typical retry command might look like:

```sh
export BUILDKITE_TEST_ENGINE_RETRY_CMD="bundle exec cucumber {{testExamples}} --format json --out {{resultPath}}"
```

## Split by example
Splitting slow files by individual scenario is supported for Cucumber. When bktec identifies slow files, it can request a plan that splits these files into individual scenarios. The `{{testExamples}}` placeholder in your test command will then be populated with specific `file:line` identifiers for each scenario to be run.
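
For example, when bktec splits a slow feature file, the command it runs might look like the following (paths and line numbers are hypothetical):

```sh
# Hypothetical split plan: two scenarios from one slow feature file,
# addressed by file:line instead of the whole file.
bundle exec cucumber --format pretty --format json --out tmp/cucumber-result.json \
  features/checkout.feature:12 features/checkout.feature:48
```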
254 changes: 254 additions & 0 deletions internal/runner/cucumber.go
@@ -0,0 +1,254 @@
package runner

import (
"encoding/json"
"errors"
"fmt"
"os"
"os/exec"
"slices"
"strings"

"github.com/buildkite/test-engine-client/internal/debug"
"github.com/buildkite/test-engine-client/internal/plan"
"github.com/kballard/go-shellquote"
)

// Cucumber implements TestRunner for Cucumber (Ruby implementation).
// It follows very similar behaviour to the RSpec runner. We rely on the JSON formatter
// so users MUST include `--format json --out {{resultPath}}` in their custom commands.
//
// We treat every Scenario as an individual test case. A scenario is considered failed
// if any step in it failed or has undefined status. "pending" and "skipped" are
// mapped to TestStatusSkipped.

type Cucumber struct {
RunnerConfig
}

func NewCucumber(c RunnerConfig) Cucumber {
if c.TestCommand == "" {
// The pretty formatter gives a nice progress bar in the console, the JSON formatter is required for bktec.
c.TestCommand = "cucumber --format pretty --format json --out {{resultPath}} {{testExamples}}"
}

if c.TestFilePattern == "" {
c.TestFilePattern = "features/**/*.feature"
}

if c.RetryTestCommand == "" {
c.RetryTestCommand = c.TestCommand
}

return Cucumber{
RunnerConfig: c,
}
}

func (c Cucumber) Name() string {
return "Cucumber"
}

// GetFiles returns the list of feature files based on include / exclude pattern.
func (c Cucumber) GetFiles() ([]string, error) {
debug.Println("Discovering test files with include pattern:", c.TestFilePattern, "exclude pattern:", c.TestFileExcludePattern)
files, err := discoverTestFiles(c.TestFilePattern, c.TestFileExcludePattern)
debug.Println("Discovered", len(files), "files")

if err != nil {
return nil, err
}

if len(files) == 0 {
return nil, fmt.Errorf("no files found with pattern %q and exclude pattern %q", c.TestFilePattern, c.TestFileExcludePattern)
}

return files, nil
}

// Run executes the Cucumber command and records results.
func (c Cucumber) Run(result *RunResult, testCases []plan.TestCase, retry bool) error {
command := c.TestCommand
if retry {
command = c.RetryTestCommand
}

testPaths := make([]string, len(testCases))
for i, tc := range testCases {
testPaths[i] = tc.Path
}

commandName, commandArgs, err := c.commandNameAndArgs(command, testPaths)
if err != nil {
return fmt.Errorf("failed to build command: %w", err)
}

cmd := exec.Command(commandName, commandArgs...)

err = runAndForwardSignal(cmd)
if signalError := new(ProcessSignaledError); errors.As(err, &signalError) {
return err
}

report, parseErr := c.ParseReport(c.ResultPath)
if parseErr != nil {
fmt.Println("Buildkite Test Engine Client: Failed to read Cucumber JSON output, tests will not be retried.")
return err
}

// Iterate scenarios.
for _, feature := range report {
for _, scenario := range feature.Elements {
if scenario.Type != "scenario" {
continue
}
status := scenario.AggregatedStatus()
var testStatus TestStatus
switch status {
case "failed", "undefined", "errored":
testStatus = TestStatusFailed
case "passed":
testStatus = TestStatusPassed
case "pending", "skipped": // cucumber-js reports "skipped"
testStatus = TestStatusSkipped
default:
testStatus = TestStatusSkipped
}

fileLinePath := fmt.Sprintf("%s:%d", feature.URI, scenario.Line)
testCaseForResult := plan.TestCase{
Identifier: fileLinePath, // Use file:line as the primary identifier
Name: scenario.Name,
Scope: feature.Name,
Path: fileLinePath,
}

result.RecordTestResult(testCaseForResult, testStatus)
}
}

// Cucumber's JSON report does not include errors that occur outside of
// scenarios; those surface through the process exit status handled above.

return nil
}

// CucumberFeature and CucumberElement are the report structs defined in
// cucumber_result_parser.go.

// mapScenarioToTestCase maps a Cucumber scenario (element) to a plan.TestCase
func mapScenarioToTestCase(featureURI string, scenario CucumberElement) plan.TestCase {
// Cucumber scenarios are identified by file_path:line_number
identifier := fmt.Sprintf("%s:%d", featureURI, scenario.Line)
return plan.TestCase{
Path: identifier,
Name: scenario.Name,
Identifier: identifier, // Or scenario.ID if it's more suitable and consistently available
}
}

// GetExamples returns an array of test scenarios within the given feature files.
func (c Cucumber) GetExamples(files []string) ([]plan.TestCase, error) {
if len(files) == 0 {
return []plan.TestCase{}, nil
}

// Create a temporary file to store the JSON output of the cucumber dry run.
f, err := os.CreateTemp("", "cucumber-dry-run-*.json")
if err != nil {
return nil, fmt.Errorf("failed to create temporary file for cucumber dry run: %w", err)
}
debug.Printf("Created temp file for cucumber dry run: %s", f.Name())

defer func() {
closeErr := f.Close()
if closeErr != nil {
debug.Printf("Error closing temp file %s: %v", f.Name(), closeErr)
}
removeErr := os.Remove(f.Name())
if removeErr != nil {
debug.Printf("Error removing temp file %s: %v", f.Name(), removeErr)
}
}()

// Build the dry run command from the configured test command so that
// prefixes such as `bundle exec` survive, and point the JSON output at
// the temporary file instead of BUILDKITE_TEST_ENGINE_RESULT_PATH.
dryRunCommand := strings.Replace(c.TestCommand, "{{resultPath}}", f.Name(), 1)
cmdName, cmdArgs, err := c.commandNameAndArgs(dryRunCommand, files)
if err != nil {
return nil, err
}
cmdArgs = append(cmdArgs, "--dry-run")

debug.Printf("Running `%s %s` for dry run", cmdName, strings.Join(cmdArgs, " "))

output, err := exec.Command(cmdName, cmdArgs...).CombinedOutput()
if err != nil {
return []plan.TestCase{}, fmt.Errorf("failed to run Cucumber dry run: %s", output)
}

dryRunReport, parseErr := parseCucumberDryRunJSONOutput(f.Name()) // Use parser from cucumber_result_parser.go
if parseErr != nil {
return nil, fmt.Errorf("failed to parse cucumber dry run JSON report from %s: %w", f.Name(), parseErr)
}

var testCases []plan.TestCase
for _, feature := range dryRunReport {
for _, scenario := range feature.Elements {
// Only concrete scenarios are runnable by file:line. Cucumber expands a
// Scenario Outline into one element of type "scenario" per Examples row,
// so outlines are covered by this branch as well.
if scenario.Type == "scenario" {
testCases = append(testCases, mapScenarioToTestCase(feature.URI, scenario))
} else {
debug.Printf("Skipping element of type %q at %s:%d", scenario.Type, feature.URI, scenario.Line)
}
}
}

return testCases, nil
}

// commandNameAndArgs replaces placeholders and returns command + args.
func (c Cucumber) commandNameAndArgs(cmd string, testCases []string) (string, []string, error) {
words, err := shellquote.Split(cmd)
if err != nil {
return "", []string{}, err
}

idx := slices.Index(words, "{{testExamples}}")
if idx < 0 {
words = append(words, testCases...)
} else {
words = slices.Replace(words, idx, idx+1, testCases...)
}

idx = slices.Index(words, "{{resultPath}}")
if idx >= 0 {
words = slices.Replace(words, idx, idx+1, c.ResultPath)
}

return words[0], words[1:], nil
}

// ---------------- Report parsing -------------------

// ParseReport reads the Cucumber JSON report at path and decodes it into
// CucumberFeature values (defined in cucumber_result_parser.go).

func (c Cucumber) ParseReport(path string) ([]CucumberFeature, error) {
var report []CucumberFeature
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("failed to read cucumber output: %w", err)
}

if err := json.Unmarshal(data, &report); err != nil {
return nil, fmt.Errorf("failed to parse cucumber output: %w", err)
}

return report, nil
}