
tools/ci: Granular build selection tool#18500

Draft
linguini1 wants to merge 2 commits into apache:master from linguini1:gcd-path

Conversation

@linguini1
Contributor

@linguini1 linguini1 commented Mar 6, 2026

Summary

This PR includes select.py, some initial groundwork for more granular selection of builds. It takes in a list of changed files (say, the output of git diff --name-only) and outputs a list of defconfig files to be built to test the change set. The benefit is that it always (1) outputs the minimum number of configurations required to fully test the change set.

You can find more information in the docs included with this PR, which show some example invocations.

(1): it should always. There may be edge cases I haven't encountered yet.
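For illustration, the board-local selection idea can be sketched roughly like this (a minimal sketch assuming the boards/<arch>/<chip>/<board>/ tree layout; the real select.py may use different logic and handle more cases):

```python
from pathlib import PurePosixPath

def select_defconfigs(changed, all_defconfigs):
    """Pick a minimal defconfig set for a change set (illustrative only).

    If every changed file lives under one or more boards/<arch>/<chip>/<board>/
    trees, only those boards' defconfigs are selected; any change outside
    boards/ escalates to a full build.
    """
    boards = set()
    for path in changed:
        parts = PurePosixPath(path).parts
        if len(parts) >= 4 and parts[0] == "boards":
            boards.add(parts[:4])  # ("boards", arch, chip, board)
        else:
            return list(all_defconfigs)  # non-board change: build everything
    return [d for d in all_defconfigs if PurePosixPath(d).parts[:4] in boards]
```

With a change set touching only one board directory, this narrows the selection to just that board's defconfigs.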

Impact

None right now, besides adding a doc file.

This tool is currently not used anywhere in CI, but is a precursor to some modifications I hope to make that reduce the number of builds performed for each PR. I will make those changes once I learn a little bit more about the build selection process and ensure that this solution can be integrated in a way that still allows parallel builds and doesn't require heavy modification of any of the existing tooling.

Testing

Docs: ran make autobuild locally and viewed the render in a browser.

I have tested this locally using made-up change-sets to see if the output matches the minimum coverage that it should be. In all of my testing so far, the results are looking correct!

See some log examples in the included docs with the sample invocations :)

Relevant example which illustrates benefit

Here is one example using the change-set from #18397. This patch only modifies a single board, by adding new board support to the ST Nucleo-H753ZI. The current CI infrastructure marks the PR as Arch: Arm, which is currently the largest architecture on NuttX (in terms of defconfig files to build). The CI for this PR ran for a little over 14h: https://github.com/apache/nuttx/actions/runs/22040681175/usage

If we look at what select.py recommends to build to test this patch, we can see that it's much more granular:

$ tools/ci/build-selector/select.py \
    boards/arm/stm32h7/nucleo-h753zi/configs/button_driver/defconfig \
    boards/arm/stm32h7/nucleo-h753zi/configs/nsh/defconfig \
    boards/arm/stm32h7/nucleo-h753zi/configs/socketcan/defconfig \
    boards/arm/stm32h7/nucleo-h753zi/include/board.h \
    boards/arm/stm32h7/nucleo-h753zi/include/readme.txt \
    boards/arm/stm32h7/nucleo-h753zi/kernel/Makefile \
    boards/arm/stm32h7/nucleo-h753zi/kernel/stm32_userspace.c \
    boards/arm/stm32h7/nucleo-h753zi/scripts/flash-mcuboot-app.ld \
    boards/arm/stm32h7/nucleo-h753zi/scripts/flash-mcuboot-loader.ld \
    boards/arm/stm32h7/nucleo-h753zi/scripts/flash.ld \
    boards/arm/stm32h7/nucleo-h753zi/scripts/kernel.space.ld \
    boards/arm/stm32h7/nucleo-h753zi/scripts/Make.defs \
    boards/arm/stm32h7/nucleo-h753zi/scripts/memory.ld \
    boards/arm/stm32h7/nucleo-h753zi/scripts/user-space.ld \
    boards/arm/stm32h7/nucleo-h753zi/src/CMakeLists.txt \
    boards/arm/stm32h7/nucleo-h753zi/src/Makefile \
    boards/arm/stm32h7/nucleo-h753zi/src/nucleo-h753zi.h \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_adc.c \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_appinitialize.c \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_autoleds.c \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_boot.c \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_boot_image.c \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_bringup.c \
    boards/arm/stm32h7/nucleo-h753zi/src/stm32_buttons.c
boards/arm/stm32h7/nucleo-h753zi/configs/nsh/defconfig
boards/arm/stm32h7/nucleo-h753zi/configs/button_driver/defconfig
boards/arm/stm32h7/nucleo-h753zi/configs/socketcan/defconfig

The shortest Arm build in the original PR ran for 33 minutes (arm-12) and builds 43 configurations. If select.py were in charge, only 3 configurations would be built. Assuming each configuration takes approximately the same time to build, that is 33 min × 3/43 ≈ 2-3 minutes of CI usage instead of 14 hours!

@linguini1 linguini1 added this to the CI milestone Mar 6, 2026
@github-actions github-actions bot added Area: CI Size: M The size of the change in this PR is medium labels Mar 6, 2026
This script outputs the minimum list of defconfigs to build in order to
completely test a change set. Right now it is not integrated anywhere,
but it will eventually be integrated into CI in order to cut down on
unnecessary build runs.

Signed-off-by: Matteo Golin <matteo.golin@gmail.com>
Included documentation for a new tool, select.py, and also added a
section for CI tools to be filled in.

Signed-off-by: Matteo Golin <matteo.golin@gmail.com>
@linguini1
Contributor Author

NOTE: the reason for the build-selector directory is because I expect to add more things for build selection, like perhaps grepping defconfig files for certain Kconfig options. This way I have a place to put multiple files later.

Contributor

@cederom cederom left a comment


  • Very cool thank you @linguini1 :-)
  • We could test only changed platforms and save on CI use that way!
  • We still need to verify the whole ecosystem daily to see that no regressions were introduced.. and this can be done by a separate CI action, right? :-)

Member

@lupyuen lupyuen left a comment


This tool is generally OK right now. But later on: Touching the CI Workflow will get really scary, I hope we can find more ways to de-risk the Upcoming CI Change...

(1) How do we prove that this works for all possible PRs? Will we run it on past PRs to compare the results?

(2) Can we deploy this in stages: e.g. Arm64 first? Or target a specific subset of Complex PRs that happen often, but are really supposed to be Simple PRs?

(3) Maybe this tool is more helpful for NuttX Apps? Because ANY change to NuttX Apps will trigger a 3.5-hour build. Which really pains me.

(4) How will we monitor this Upcoming CI Change? Make sure that all PRs are actually built and tested correctly?

(5) This tool falls into the "Code Quality Trap" that I explained in my video. We have to be 100% sure that we don't miss out any builds and tests, that will lead to NuttX Breakage later.

(6) I also wonder: Are we submitting the tool too prematurely? Should we follow the agile / iterative way: Make it work on a Simulated NuttX Repo, complete with CI Changes? Then refactor the tool based on the actual tested code? Because many things can go wrong as we're doing auto-dependency-analysis of PRs. Thanks :-)

@cederom
Contributor

cederom commented Mar 6, 2026

  • Yes, CI stages would be great to have too.. for instance, checkpatch (git commit and code syntax) needs to pass first in order to trigger builds. There is not much sense in a full build when basic checks failed, right?
  • Sometimes checkpatch will fail because of mixed case that comes from external libs, and we have no way to change that, so we should still move forward with the build.
  • A step-by-step build may also be a good saving, because if one platform fails there is no reason to build the others and then re-trigger the whole process (a lot of build time saved).
  • Some sort of independent daily full verification build would still be good to have for an overall system check :-)

@cederom
Contributor

cederom commented Mar 6, 2026

@lupyuen what do you propose to verify the change? To test it on the @linguini1 repo in the first place? When all works as expected over there, then merge it here? Will a free account bear this kind of load? :-)

@linguini1 would it be possible to easily / quickly revert if anything goes wrong? :-P

@raiden00pl
Member

This PR is missing the most important thing: how to integrate it with CI, and whether that is possible at all. I think it's better to wait for a PoC of a working selector integrated with CI; otherwise this tool doesn't make much sense to merge.

@cederom
Contributor

cederom commented Mar 6, 2026

CI build failed, restarted.

Configuration/Tool: lilygo_tbeam_lora_gps/nsh
2026-03-06 07:40:02
------------------------------------------------------------------------------------
  Cleaning...
HEAD detached at pull/18500/merge
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   boards/xtensa/esp32/esp32-devkitc/configs/bmp280/defconfig
	modified:   boards/xtensa/esp32/esp32-devkitc/configs/buttons/defconfig
	modified:   boards/xtensa/esp32/esp32-ethernet-kit/configs/buttons/defconfig
	modified:   boards/xtensa/esp32/esp32-lyrat/configs/buttons/defconfig
	modified:   boards/xtensa/esp32/esp32-wrover-kit/configs/buttons/defconfig
	modified:   boards/xtensa/esp32/esp32-wrover-kit/configs/lua/defconfig

@cederom
Contributor

cederom commented Mar 6, 2026

I guess this change is safe as it is only a preparation step and does not touch the current CI.. this way @linguini1 can test it and develop further on his local fork, right? :-)

@lupyuen
Member

lupyuen commented Mar 6, 2026

  • Yes, CI stages would be great to have too.. for instance, checkpatch (git commit and code syntax) needs to pass first in order to trigger builds. There is not much sense in a full build when basic checks failed, right?

Ehhhh I'm planning to discuss this in my next video...

(1) Do we have actual stats on how often this really happens? How many GitHub Runners will we actually save?

(2) When we kill builds too early: Will it cause our devs to resubmit PRs again and again? Consuming even more GitHub Runners? Do we have any predictive stats?

(3) I have been testing extremely impactful PRs in my own repos, like https://github.com/lupyuen10/nuttx. So far I have no problems, just create a new (free) org, rename a few things, and it builds OK! Why won't this work for All PRs? Am I missing something, that's preventing our devs from doing this on their own PRs? Is it too complicated? Or because nobody documented this "secret hack"?

@lupyuen
Member

lupyuen commented Mar 6, 2026

what do you propose to verify the change? To test it on the @linguini1 repo in the first place? When all works as expected over there, then merge it here? Will a free account bear this kind of load? :-)

Yes we should test this on a Free GitHub Org, similar to https://github.com/lupyuen10/nuttx
(1) Create a new Free GitHub Org e.g. lupyuen10
(2) Fork nuttx repo to lupyuen10
(3) Enable GitHub Actions
(4) Force Build on Master Branch: lupyuen10@1af85e5
(5) Rename the nuttx repos in build.yml (potentially nuttx-apps too): lupyuen10@2a4c581
(6) And everything should build OK
(7) GitHub Runners will be counted towards our Personal GitHub Account, not the Free Org. So if we are running other builds in our Personal Account, they will become slower.

@linguini1
Contributor Author

Thanks for the feedback everyone! I guess the goal of this PR was:

  1. get some feedback on the tool
  2. avoid a CI change all at once

But, I think the concerns are right that it is probably better to test this tool in the integrated CI before merging it; maybe that will catch corner cases earlier. I will use Lup's handy trick to test this on my own fork beforehand :)

@lupyuen to answer your questions:

  1. running it on old PRs as a sanity check is a good idea! Maybe if I get this working on my own fork, I can try it against the new patches coming into mainline and compare against the existing CI

  2. Yes, probably! To my (limited) knowledge, the builds to be run are selected in the arch workflow and then output to the Linux build workflow. I think we could do some check like "if arch is arm64, instead of using arm64-01, use the new script to determine the builds".

  3. Unfortunately in its current state, no :( it only adds benefit to PRs which aren't "complex" (as you term them in your video). It can reduce full arch builds to just board builds depending on the locality of the changes. Apps, drivers, common code, etc. are still complex PRs and require the full run.

  4. I will have to monitor it if/when it is merged. Which is perhaps why it's a good idea to follow your suggestion of phasing it into one arch at a time; that will avoid major breakages.

  5. This is true! But, this tool doesn't avoid running any builds that can't be avoided (at least, to the best of my knowledge) as changes that are exclusively local to a chip can be entirely tested by building defconfigs associated with that chip only. I.e., no point running BCM2711 builds when changes were made to the Pinephone only! So, it should still preserve the quality of testing.

  6. I think you and @raiden00pl are right about this! I'll test it on my own fork using your suggested method and if things still go smoothly, we can revisit a PR!

@cederom if we follow Lup's suggestion of phasing in this tool one arch at a time, then it should be easy to roll back if something goes terribly wrong. Hopefully by that point it will have been significantly tested.

@lupyuen
Member

lupyuen commented Mar 6, 2026

@linguini1 Awesome thanks! Another thing that really bugs me: Downloading the ESP32 Runtime fails too often. Which also wastes our GitHub Runners. I explained this in my other video: https://www.youtube.com/watch?v=lwkMS_bgyXA

Since you're testing a new CI, it might be good to fix the ESP32 Runtime Downloading too. @simbit18 might have some brilliant ideas. Thanks :-)

@raiden00pl
Member

raiden00pl commented Mar 6, 2026

The main problem I see with this tool is that CI runs builds in parallel, and this is configured via YAML and handled by github actions. All this happens automatically and I don't know if it's possible to fine-tune the built targets without completely reworking the CI.

EDIT: probably we could inject your script here:

cd sources/nuttx/tools/ci
if [ "X${{matrix.boards}}" = "Xcodechecker" ]; then
./cibuild.sh -c -A -N -R --codechecker testlist/${{matrix.boards}}.dat
else
( sleep 7200 ; echo Killing pytest after timeout... ; pkill -f pytest )&
./cibuild.sh -c -A -N -R -S testlist/${{matrix.boards}}.dat
fi

or integrate it with cibuild.sh tool

@lupyuen
Member

lupyuen commented Mar 6, 2026

Or we can hack arch.yml. Though the Build Selection will not be 100% specific. For example: "Build only arm-01, but skip arm-02 to arm-14, because they are not impacted". And this might be less risky, I think? 🤔

@raiden00pl
Member

Maybe we should integrate this tool directly in tools/testbuild.sh, not CI. But that script is even more of a mess than CI, and integration with select.py means we add a Python dependency.

@raiden00pl
Member

raiden00pl commented Mar 6, 2026

Another idea, probably the least risky, is to split tools/ci/testlist into "chips" (stm32, nrf52, etc.), and then just use GitHub labels to select what should be built. This was my initial plan, but I didn't have the time and lost the enthusiasm to finish it. However, now with LLMs, it should be easy to automate.

@cederom
Contributor

cederom commented Mar 6, 2026

@lupyuen: @linguini1 Awesome thanks! Another thing that really bugs me: Downloading the ESP32 Runtime fails too often. Which also wastes our GitHub Runners. I explained this in my other video: https://www.youtube.com/watch?v=lwkMS_bgyXA

Since you're testing a new CI, it might be good to fix the ESP32 Runtime Downloading too. @simbit18 might have some brilliant ideas. Thanks :-)

Okay this may be the time we have a dedicated MIRROR repo for all dependencies! That would be perfect for NXDART, because we only access the network once, then just use offline packages from another repo of ours? This should also remove download failures :-)

@lupyuen
Member

lupyuen commented Mar 6, 2026

Okay this may be the time we have a dedicated MIRROR repo for all dependencies!

Well then our ESP32 Colleagues might complain that our Mirror Repo is stale and doesn't have the latest versions of all dependencies. Or do we use the Mirror Repo as an Always-Available Backup for fetching dependencies?

@lupyuen lupyuen marked this pull request as draft March 6, 2026 08:49
@cederom
Contributor

cederom commented Mar 6, 2026

Okay this may be the time we have a dedicated MIRROR repo for all dependencies!

Well then our ESP32 Colleagues might complain that our Mirror Repo is stale and doesn't have the latest versions of all dependencies. Or do we use the Mirror Repo as an Always-Available Backup for fetching dependencies?

Hmm, maybe we should only work on release packages then? :-P

Or use a fetch command that would first check our mirror and, when that fails, try the public connection?

Something like that would constitute our self-contained ecosystem :-)

@lupyuen
Member

lupyuen commented Mar 6, 2026

I have a hunch about the ESP32 Download Failure: We are running too many jobs in parallel, which will fire Many Concurrent HTTP Requests to download the ESP32 Runtime (via github.com). And our Concurrent Downloads will get blocked by github.com for suspected spam.

Maybe we should chat with our ESP32 Colleagues: Do they host the ESP32 Runtime outside of GitHub? Preferably on Multiple Hostnames, so we don't spam One Single Server? (Or can they offer to host ESP32 Runtime elsewhere?)

Fixing the ESP32 Downloads is a terrific start. I see it failing all the time on NuttX Dashboard (sigh)

@cederom
Contributor

cederom commented Mar 6, 2026

I have a hunch about the ESP32 Download Failure: We are running too many jobs in parallel, which will fire Many Concurrent HTTP Requests to download the ESP32 Runtime (via github.com). And our Concurrent Downloads will get blocked by github.com for suspected spam.
(..)

Good point @lupyuen !!

From my experience, a single-file download (release package) is treated a lot more nicely by any hosting provider, at the server / network level, than many small files in parallel.

Some time ago on our website, the GitHub upload-artifacts step was updated to transfer a package rather than all files separately. This also improved transfer time surprisingly.

I did a web application that worked fine on the web. Then I added a mobile application with a REST API that did a sequence of requests, and that worked fine too. Then I tried to improve synchronization speed by moving requests to parallel, and almost always there are failures, so retries with a delay are necessary. When you hit a connection-limit quota, a delay is almost always necessary.

A quick fix would be to add a DELAY + RETRY option to all fetchers. But the perfect situation would be a single package fetch + delays + retries. If these are already here, maybe only some tuning is enough :-)
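As a sketch of that quick fix: fetch_with_retry and its parameters below are illustrative names, not an existing NuttX CI helper.

```python
import time

def fetch_with_retry(fetch, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fetch() until it succeeds, backing off between attempts.

    Exponential backoff (1s, 2s, 4s, ...) eases pressure on a host that is
    rate-limiting concurrent downloads; the final failure is re-raised.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == attempts:
                raise
            sleep(base_delay * 2 ** (attempt - 1))
```

The `sleep` parameter is injectable so the wrapper can be tested without real delays.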

Contributor

@cederom cederom left a comment


CI build fails again, need to investigate :-)

@simbit18
Contributor

simbit18 commented Mar 6, 2026

Hi everyone!

Possible solution (to be verified, of course) to test this without modifying the existing process.
Currently, GitHub jobs use static .dat files contained in the ci/testlist directory for builds with Make or CMake.
Would it be possible to create a folder outside the NuttX tree, for example workspace/citest, and then have a tool (select.py or other) that generates one or more .dat files (using the current logic) containing what we want to build, plus a .json file to pass to the dedicated job on GitHub?
The job would read the .json file with the names of the .dat files, which we then pass to ./cibuild.sh as is currently the case.

./cibuild.sh -c -A -N -R -S workspace/citest/${{matrix.boards}}.dat

The workspace/citest folder outside the NuttX tree could also be useful for locally testing boards under development or other things, by manually adding a file with the boards to build.
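A rough sketch of that flow (the names workspace/citest, selected.dat, and testlists.json are assumptions for illustration, not an agreed format):

```python
import json
from pathlib import Path

def write_testlists(selected, outdir="workspace/citest"):
    """Write selected build targets to a .dat file plus a .json index.

    `selected` holds cibuild.sh-style entries such as
    "/arm/stm32h7/nucleo-h753zi/configs/nsh"; the .json lists the generated
    .dat file names for the GitHub job to read.
    """
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    dat = out / "selected.dat"
    dat.write_text("\n".join(selected) + "\n")
    (out / "testlists.json").write_text(json.dumps({"testlists": [dat.name]}))
    return dat
```

The job could then invoke ./cibuild.sh with workspace/citest/selected.dat exactly as it does with the static testlists today.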

Questions:

Will this system on GitHub only work for Linux (Docker)?

How can we know whether it is possible to build a board on macOS, MSVC, or MSYS2? Will those jobs be started in any case?

What should we do if there are changes to a board and other types, such as nuttx/drivers? What should the tool generate?

Of course, the questions don't end there! But I have to preserve my brain cells :)

@cederom
Contributor

cederom commented Mar 6, 2026

We also have the unused https://github.com/apache/nuttx-testing where we can stage some tests before they land in here :-)

Copy link
Contributor

@acassis acassis left a comment


@linguini1 I think this is a great idea!

I think the next step is to map all drivers and filesystems and build only the board configs that use them, i.e.:

If a PR modifies the file "fs/fat/fs_fat32.c", the script could inspect "fs/fat/CMakeLists.txt", figure out that this file is compiled when CONFIG_FS_FAT is enabled, and then track which board configs enable it.

I'm not sure if it is easy to do, but comparing Make.defs vs CMakeLists.txt, the CMakeLists.txt files seem easier to track. I think if you search backwards for the second "(" symbol from the file's position (fs_fat32.c), it will find the right symbol. Just an idea!
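That heuristic could be sketched like this (a toy guard scanner, not a real CMake parser; nesting, multi-condition guards, and Make.defs are ignored):

```python
import re

def guard_symbol(cmake_text, filename):
    """Return the CONFIG_ symbol of the most recent if() guard seen before
    the line where `filename` appears, or None if the file is unguarded."""
    guard = None
    for line in cmake_text.splitlines():
        m = re.search(r"if\s*\(\s*(CONFIG_\w+)", line)
        if m:
            guard = m.group(1)
        elif re.search(r"\bendif\s*\(", line):
            guard = None
        if filename in line:
            return guard
    return None
```

For the fs_fat32.c example, this would recover CONFIG_FS_FAT from a `if(CONFIG_FS_FAT)` block in fs/fat/CMakeLists.txt.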

@acassis
Contributor

acassis commented Mar 6, 2026

@linguini1 Awesome thanks! Another thing that really bugs me: Downloading the ESP32 Runtime fails too often. Which also wastes our GitHub Runners. I explained this in my other video: https://www.youtube.com/watch?v=lwkMS_bgyXA

Since you're testing a new CI, it might be good to fix the ESP32 Runtime Downloading too. @simbit18 might have some brilliant ideas. Thanks :-)

Lup, I was thinking about these downloads all the time. Maybe, besides the mirror idea, we could have these files already available in the CI Docker image. I don't know whether it is possible or not, but maybe it is.

Although in theory it is good to download the files to confirm the links are still available, we know it is a major cause of CI failures, and it also impacts Runner usage because a test that could have finished early needs to be executed again and again.

@linguini1
Contributor Author

linguini1 commented Mar 6, 2026

The main problem I see with this tool is that CI runs builds in parallel, and this is configured via YAML and handled by github actions. All this happens automatically and I don't know if it's possible to fine-tune the built targets without completely reworking the CI.

Yes, that would be an important requirement!

EDIT: probably we could inject your script here:

cd sources/nuttx/tools/ci
if [ "X${{matrix.boards}}" = "Xcodechecker" ]; then
./cibuild.sh -c -A -N -R --codechecker testlist/${{matrix.boards}}.dat
else
( sleep 7200 ; echo Killing pytest after timeout... ; pkill -f pytest )&
./cibuild.sh -c -A -N -R -S testlist/${{matrix.boards}}.dat
fi

or integrate it with cibuild.sh tool

My (perhaps naive) idea was to use this tool either to generate the .dat files used by the CI for selecting builds on demand, or ideally to pass the output directly to the build workflow through GitHub workflow output variables.

integration with select.py means we add a Python dependency

I figured this would be alright for CI, since Python is already used in a few other places in CI. It also means the script is cross-platform, so no .sh/.bat copies.

Another idea, probably the least risky, is to split tools/ci/testlist into "chips" (stm32, nrf52, etc.), and then just use GitHub labels to select what should be built.

That works, but then we have to maintain quite a few more labels and .dat files, and update them every time a chip is added. The additional benefit of the script is being able to select past the granularity of chips, all the way down to boards and even specific configurations.

@linguini1
Contributor Author

Possible solution (to be verified, of course) to test this without modifying the existing process.
Currently, GitHub jobs use static .dat files contained in the ci/testlist directory for builds with Make or CMake.
Would it be possible to create a folder outside the NuttX tree, for example workspace/citest, and then have a tool (select.py or other) that generates one or more .dat files (using the current logic) containing what we want to build, plus a .json file to pass to the dedicated job on GitHub?
The job would read the .json file with the names of the .dat files, which we then pass to ./cibuild.sh as is currently the case.

Yes! This was my idea for integrating this tool. The .dat files do more than just select the builds, from what I can see; they also mark which configs can use CMake and possibly modify the toolchain (in one of them I saw a toolchain Kconfig variable name). That would have to be accounted for.

Questions:

Will this system on GitHub only work for Linux (Docker)?

I don't think so. The select.py script is platform agnostic, so we should be able to run it for the macOS/Windows builds as well. However, I know we perform 99% of the builds on Linux, and the other hosts get pre-set "samples" of a few boards to build as a sanity check. We should probably continue with that route until/unless we see large resource savings from this new tool.

How can we know if it is possible to build a board on macOS, MSVC, or MSYS2 ? Will these jobs be started in any case?

This might be beyond my understanding of CI, as I had assumed any board can be built on any host (or at least, that's the goal of NuttX)? But maybe we can start with just adopting this on the Linux builds first since that's where most of the work takes place.

What should we do if there are changes to a board and other types, such as nuttx/drivers? What should the tool generate?

Right now it considers this a complex PR, the same as the current CI. So that triggers a full build of all boards.

Of course, the questions don't end there! But I have to preserve my brain cells :)

No kidding! I'm learning that our CI is very complex! And also very important, so it would be bad to break it.

@linguini1
Contributor Author

@linguini1 I think this is a great idea!

I think the next step is to map all drivers and filesystems and build only the board configs that use them, i.e.:

If a PR modifies the file "fs/fat/fs_fat32.c", the script could inspect "fs/fat/CMakeLists.txt", figure out that this file is compiled when CONFIG_FS_FAT is enabled, and then track which board configs enable it.

I'm not sure if it is easy to do, but comparing Make.defs vs CMakeLists.txt, the CMakeLists.txt files seem easier to track. I think if you search backwards for the second "(" symbol from the file's position (fs_fat32.c), it will find the right symbol. Just an idea!

This is a good idea, and the logical next step for sure. It will be significantly more complex though, as instead of choosing builds based on the changed file's path (as is currently possible for boards/ and arch/ paths), it requires some level of parsing defconfig files, or at least determining less obvious source-file relationships.

One (very fresh and likely very naive) idea I had was to construct a dependency graph for source files. I.e., each source file in the tree is associated with a list of defconfig files that depend on it. Then, we can take the changeset for a PR and look up the source files in this graph to see what defconfig files need to be built.

Of course, this requires that we have the dependency graph in the first place. I don't know how cheaply it can be generated by our build system (i.e., my thought is that CMake will know which build files are getting used for a defconfig and we can take that output to stitch together the dependency graph). It would have to be maintained/updated any time we add a defconfig file, modify Kconfig files or update Make/CMake files. So it would have to be automated and highly reliable. This is a big task and is far out in the future.
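The graph lookup itself is simple; a minimal sketch (the per-defconfig source lists are assumed to come from somewhere like CMake's build output, and gathering them reliably is the hard, unsolved part):

```python
from collections import defaultdict

def invert(defconfig_sources):
    """{defconfig: [sources]} -> {source: {defconfigs that compile it}}."""
    graph = defaultdict(set)
    for defconfig, sources in defconfig_sources.items():
        for src in sources:
            graph[src].add(defconfig)
    return graph

def builds_for(changed, graph, all_defconfigs):
    """Union the defconfigs of each changed file; an unknown file
    (not in the graph) conservatively forces a full build."""
    selected = set()
    for path in changed:
        if path not in graph:
            return set(all_defconfigs)
        selected |= graph[path]
    return selected
```

The conservative fallback matters: a stale or incomplete graph should widen the build, never silently skip one.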

@simbit18
Contributor

simbit18 commented Mar 6, 2026

Yes! This was my idea for integrating this tool. The .dat files do more than just select the builds, from what I can see; they also mark which configs can use CMake and possibly modify the toolchain (in one of them I saw a toolchain Kconfig variable name). That would have to be accounted for.

Hi @linguini1! Yes, for example, you can choose the toolchain for ARM.

This corresponds to nucleo-l152re:nsh

  • /arm/stm32/nucleo-l152re/configs/nsh,CONFIG_ARM_TOOLCHAIN_GNU_EABI

This corresponds to nucleo-f411re:nsh

  • /arm/stm32/nucleo-f411re/configs/nsh,CONFIG_ARM_TOOLCHAIN_CLANG

If we add this to the .dat file, nucleo-f411re:nsh will be built with CMake:

  • CMake,nucleo-f411re:nsh
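My reading of those entries, as a tiny parser sketch (heuristic; the canonical handling lives in the CI scripts and may differ):

```python
def parse_dat_line(line):
    """Classify one testlist .dat entry, based on the examples above:
    an optional toolchain Kconfig suffix after a comma, or a
    "CMake,<board>:<config>" marker."""
    line = line.strip()
    if line.startswith("CMake,"):
        return {"kind": "cmake", "target": line.split(",", 1)[1]}
    path, _, toolchain = line.partition(",")
    return {"kind": "build", "path": path, "toolchain": toolchain or None}
```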

@lupyuen
Member

lupyuen commented Mar 7, 2026

We should worry about Strange Dependencies like this: Why would a fix for ESP32 RISC-V, affect ESP32 Xtensa? 🤔

@linguini1
Contributor Author

We should worry about Strange Dependencies like this: Why would a fix for ESP32 RISC-V, affect ESP32 Xtensa? 🤔

In this case, isn't it a change for both? boards/Kconfig is modified, which should cause a full build in our CI, I think(?)

@lupyuen
Member

lupyuen commented Mar 7, 2026

Sorry I meant this fix for ESP32 RISC-V:

Broke the build for ESP32 Xtensa, as explained here:

@raiden00pl
Member

It looks like boards' common files are not handled in labeler.yaml at all.

