
Proposed RFC Feature: Batched tests with no results are recollected and run in a new batch rather than failed #41

@smurly

Description


Summary:

Currently in editor_test.py, when a batched list of tests encounters a timeout, or a test exits the Editor prematurely, the tests that did not execute have no result and are reported as failed. This confuses many readers, who see a list of failed tests and assume those tests actually ran and failed.
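As a rough illustration, the current behavior can be sketched like this (hypothetical code, not the actual editor_test.py logic; the function name and result encoding are invented for illustration):

```python
# Hypothetical sketch of the CURRENT behavior described above; the function
# name and result encoding are invented, not taken from editor_test.py.

def current_report(results):
    """Map raw batch results to reported outcomes.

    results: dict of test name -> result string, with None meaning the
    test never ran (e.g. the batch timed out before reaching it).
    Today those None entries are reported as "failed", which is the
    source of the confusion.
    """
    return {name: (result if result is not None else "failed")
            for name, result in results.items()}
```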

What is the relevance of this feature?

Actually running these tests, rather than failing them outright, will provide more useful information for debugging the underlying timeout or unexpected exit.

Feature design description:

  • When a report summary includes tests without results, those tests will be collected into a new batch
  • The new sub-batch of tests will be executed again to generate results
  • If a timeout was the cause, the test that was running at the timeout (the one that caused the issue) will not be rerun
  • Tests that were already rerun and still produced no results will not be collected for a third run
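The recollection rules above could be sketched roughly as follows (an illustrative sketch only; collect_rerun_batch and its parameters are invented names, not the actual editor_test.py API):

```python
# Illustrative sketch of the recollection step; names are hypothetical.

def collect_rerun_batch(results, timeout_culprit=None, already_rerun=frozenset()):
    """Collect tests that produced no result into a new batch.

    results         -- mapping of test name -> result string, where None
                       means the test never ran or reported nothing.
    timeout_culprit -- the test that was running when the batch timed out;
                       it is excluded so the timeout is not simply repeated.
    already_rerun   -- tests that were already rerun once; they are not
                       collected for a third attempt.
    """
    return [
        name for name, result in results.items()
        if result is None                 # no result was reported
        and name != timeout_culprit       # skip the test that caused the timeout
        and name not in already_rerun     # at most one rerun per test
    ]

# collect_rerun_batch({"test_a": "passed", "test_b": None, "test_c": None},
#                     timeout_culprit="test_b") -> ["test_c"]
```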

Technical design description:

  • Recollection and execution occur for tests that report no results
  • Recollected tests are not run a third time if they again fail to report results
  • Results will clearly call out "no result" rather than failure: "Test did not run or had no results!"
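A minimal sketch of the reporting change (the message string comes from the description above; the function and result encoding are hypothetical, not the real editor_test.py code):

```python
# Hypothetical sketch of how a report could distinguish a genuine failure
# from a missing result; only the message string comes from the proposal.

NO_RESULT_MESSAGE = "Test did not run or had no results!"

def summarize(result):
    """Map a raw result (or None) to the string shown in the report."""
    if result is None:
        return NO_RESULT_MESSAGE   # called out explicitly, not reported as a failure
    return result
```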

What are the advantages of the feature?

Removes reports of failures that are really tests that did not run. This reduces initial confusion among developers troubleshooting Automated Review (AR) results.

What are the disadvantages of the feature?

  • Increased run time for AR
  • Need to be careful not to recollect the test that caused the unexpected exit or timeout, which would just repeat the cycle

How will this be implemented or integrated into the O3DE environment?

  • This will be part of editor_test.py and the pytest framework around it. It should work without extra effort from test writers.

Are there any alternatives to this feature?

  • Educate everyone who reads a log more fully about what the output means, and expect them to deeply understand pytest and editor_test output.

How will users learn this feature?

  • Documentation would need to be created
  • In practical use, users should not see this feature or need to know much about it, only the result: fewer spurious "failures" on top of root failures to distract them.

Are there any open questions?

  • What are some of the open questions and potential scenarios that should be considered?

Metadata


    Labels

    priority/major: Major priority. Work that should be handled after all blocking and critical work is done.
    rfc-feature: Request for Comments for a Feature
    triage/accepted: Issue that has been accepted and is ready for work
