
Conversation

@mwasilew (Contributor) commented Jan 2, 2026

Improve test result reporting in both push and PR generated builds.

@mwasilew (Contributor, Author) commented Jan 2, 2026

This PR builds on top of #1312 and adds just one commit that refactors result reporting. @smuppand @lumag please take a look. I also pushed the changes to next. The workflow just started, so there are no test results yet: https://github.com/qualcomm-linux/meta-qcom/actions/runs/20656743845. It will be very similar to this: https://github.com/qualcomm-linux/meta-qcom/actions/runs/20597266586

echo "DEVICE_TYPE=dragonboard-410c" >> dragonboard-410c.ini
echo "BOOT_IMG_FILE=boot-apq8016-sbc-qcom-armv8a.img" >> dragonboard-410c.ini
cat dragonboard-410c.ini
lava-test-plans --dry-run --variables dragonboard-410c.ini --test-plan "${GITHUB_REPOSITORY#*/}/${{ inputs.distro_name }}/${{ inputs.testplan }}" --device-type "dragonboard-410c" --dry-run-path "${JOBS_OUT_PATH}/dragonboard-410c-${{ inputs.distro_name }}-${{ inputs.testplan }}" || true
Review comment:

With || true on lava-test-plans --dry-run you intentionally ignore failures. If generation fails, the workflow will still upload something (possibly empty) and downstream steps will fail in confusing ways. Better to fail fast when generation fails, unless you are explicitly trying to allow missing device templates.

@mwasilew (Author):

this is the same commit as in #1312. I don't think it's a good idea to change it before it's merged.

Review comment:

OK, if this PR is intentionally identical to #1312, let's keep it aligned and address the review items in a follow-up PR right after merge.

@mwasilew (Author):

I can add another commit on top. We discussed fixing this in #1312. I'll try that now.

@mwasilew (Author):

This is a bit more complicated. In general, lava-test-plans only fails if it can't render a template, which means no jobs are generated in case of failure. However, this only works when a single test job file is produced. A test plan can contain more than one test job; in that case a failure may produce some of the expected test job files, but not all of them. This should not happen, as the workflows/tests in the lava-test-plans repository render all possible jobs for all existing machines.

We have two options to address the issue:

  • only run the workflow on machines included in lava-test-plans (I prefer this option); in this case we can remove || true and assume every matrix job will produce something
  • keep || true but restrict all test plans to a single job (I don't think this is a good idea)

I'll implement the first option and we can discuss further.
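
For illustration, a guard along these lines could make the first option explicit in the job generation step; the SUPPORTED_DEVICES list and the device name here are placeholders, not part of this PR:

# minimal sketch, assuming the workflow keeps an explicit, hand-maintained list
# of devices known to lava-test-plans (placeholder values)
SUPPORTED_DEVICES="dragonboard-410c rb3gen2"
DEVICE_TYPE="dragonboard-410c"
if ! echo "${SUPPORTED_DEVICES}" | grep -qw "${DEVICE_TYPE}"; then
    echo "Skipping ${DEVICE_TYPE}: not covered by lava-test-plans" >&2
    exit 0
fi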

@mwasilew (Author):

This is what happens if I include a device that doesn't exist: https://github.com/qualcomm-linux/meta-qcom/actions/runs/20713210403/job/59469534291

Review comment:

You're right. Remove || true from lava-test-plans --dry-run.
Add a pre-check that the device template exists.
Add a post-check that at least one job file was generated (and is non-empty).
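
For illustration, a minimal sketch of the first and third points; the pre-check is left as a comment because it depends on the lava-test-plans checkout layout, which is not defined in this PR:

# fail fast: no "|| true", so a failed generation fails the step
lava-test-plans --dry-run --variables dragonboard-410c.ini \
    --test-plan "${GITHUB_REPOSITORY#*/}/${{ inputs.distro_name }}/${{ inputs.testplan }}" \
    --device-type "dragonboard-410c" \
    --dry-run-path "${JOBS_OUT_PATH}/dragonboard-410c-${{ inputs.distro_name }}-${{ inputs.testplan }}"

# (a pre-check for the device template would go here, once the template path
#  inside the lava-test-plans checkout is agreed on)

# post-check: at least one non-empty job file must have been generated
if ! find "${JOBS_OUT_PATH}" -name "*.yaml" -size +0c | grep -q .; then
    echo "No test jobs were generated for dragonboard-410c" >&2
    exit 1
fi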

@github-actions bot commented Jan 4, 2026

Test run workflow

Test jobs for commit 1e821e4

@test-reporting-app bot commented Jan 4, 2026

Test Results

 19 files   67 suites   1h 13m 23s ⏱️
 35 tests  34 ✅ 0 💤 1 ❌
655 runs  653 ✅ 1 💤 1 ❌

For more details on these failures, see this check.

Results for commit 2dba239.

♻️ This comment has been updated with latest results.

@github-actions bot commented Jan 4, 2026

Test run workflow

Test jobs for commit 1e821e4

@mwasilew force-pushed the workflows/test-result-reporting branch from 1e821e4 to e520e7d on January 5, 2026 10:27
@github-actions bot commented Jan 5, 2026

Test run workflow

Test jobs for commit e520e7d

@lumag requested a review from smuppand on January 7, 2026 23:44
@lumag enabled auto-merge on January 7, 2026 23:44
do
JOB_ID=$(cat "${TESTJOB}" | jq ".id")
JOB_URL="https://lava.infra.foundries.io/results/$JOB_ID"
JOB_DETAILS=$(curl -s "https://lava.infra.foundries.io/api/v0.2/jobs/$JOB_ID/")
Review comment:

Recommend switching from curl -s to curl -fsS so API failures fail clearly (instead of generating empty strings and weird jq errors).
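
For example, something along these lines (a sketch only, keeping the endpoint from the step above):

# minimal sketch: -f makes curl return non-zero on HTTP errors, -S still prints them
JOB_ID=$(jq -r '.id' "${TESTJOB}")
if ! JOB_DETAILS=$(curl -fsS "https://lava.infra.foundries.io/api/v0.2/jobs/${JOB_ID}/"); then
    echo "Failed to fetch details for job ${JOB_ID}" >&2
    exit 1
fi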

shell: bash
run: |
JOBFILES=$(find . -name "*.yaml" | grep "${{ inputs.prefix }}")
JOBFILES=$(find . -name "${{ input.prefix }}*.yaml")
Review comment:

In composite actions the context is inputs, not input, right?
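
For the record, the corrected line would be (assuming the composite action declares a prefix input):

JOBFILES=$(find . -name "${{ inputs.prefix }}*.yaml")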

do
echo $J
NAME=$(echo "$J" | cut --delimiter "/" --field 3)
NAME=$(basename "$J")
Review comment:

Right now NAME becomes e.g. test-job-rb3gen2-qcom-distro.yaml. If downstream expects a “device type” or “job name” without extension, this will be noisy.

NAME=$(basename "$J" .yaml)
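
For example, with a hypothetical path:

J=./jobs/rb3gen2/test-job-rb3gen2-qcom-distro.yaml   # hypothetical path
NAME=$(basename "$J")          # test-job-rb3gen2-qcom-distro.yaml
NAME=$(basename "$J" .yaml)    # test-job-rb3gen2-qcom-distro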

Use upload-artifact@v6 in all workflows. v6 was released recently. It
features support for Node.js 24 and fixes a few deprecation notices in the
v4 and v5 code.

Signed-off-by: Milosz Wasilewski <milosz.wasilewski@oss.qualcomm.com>
Move test job summary generation to a separate action and improve the
overall summary by adding a table with test results. Note: the results table
generation assumes there is only one test result in each test definition.
This might not always be true; once this assumption is broken, the summary
generation action will need to be updated.

Signed-off-by: Milosz Wasilewski <milosz.wasilewski@oss.qualcomm.com>
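
As a rough illustration of such a summary step (the per-job results file name and JSON layout are assumed here, not taken from this PR):

# minimal sketch, assuming each test job saved <name>-results.json containing a
# list of result objects with a "result" field (assumed layout)
{
    echo "| Test job | Result |"
    echo "|----------|--------|"
    shopt -s nullglob
    for R in *-results.json; do
        echo "| $(basename "$R" -results.json) | $(jq -r '.[0].result' "$R") |"
    done
} >> "$GITHUB_STEP_SUMMARY"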
Improve the loop preparing the test job list by removing the dependency on
directory structure. This should make the action more robust.

Signed-off-by: Milosz Wasilewski <milosz.wasilewski@oss.qualcomm.com>
@mwasilew force-pushed the workflows/test-result-reporting branch from e520e7d to 2dba239 on January 22, 2026 11:30
@github-actions

Test run workflow

Test jobs for commit 2dba239

@github-actions

Test run workflow

Test jobs for commit 2dba239
