
Conversation

@simonge (Contributor) commented Jun 25, 2025

Briefly, what does this PR introduce?

Trains a neural network to reproduce the momentum of the particle at the origin, based on the position and direction with which it exits the B2eR magnet into the Tagger drift volume.

Produces plots demonstrating the reconstruction limits imposed by the beam divergence and optics alone, without folding in the effects of the detector reconstruction (which might introduce some systematic error in the reconstruction later).

Shares the simulation from #167.
Replaces the previous ONNX training (#123), which relied on reconstructed tagger tracks.

Runs a new simulation of only electrons from eic-pythia6 files on the xrootd server.
Uses the tracks reconstructed from the taggers.

Using perfect information from the beamline tracking layers, only tracks that were perfectly reconstructed in the tagger came out well; everything else was far off. Using a simulation sample covering the full acceptance also pulled the mean residual of the Pythia sample away from 0, suggesting there might not be a clean one-to-one mapping; this needs some further investigation.
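The mapping described above is a plain regression from the magnet-exit state to the origin momentum, so a minimal sketch of the kind of network involved might look like the following. This is an illustration only, not the benchmark's actual code: the class name, layer sizes, and placeholder data are all assumptions (real training data would come from the simulation).

```python
import torch
import torch.nn as nn

class BeamlineRegressor(nn.Module):
    """Small MLP mapping the B2eR-exit state (x, y, dx, dy) to origin momentum (px, py, pz).
    Hypothetical sketch; widths and depth are illustrative assumptions."""

    def __init__(self, n_in: int = 4, n_out: int = 3, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = BeamlineRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on placeholder data standing in for the simulated sample.
x = torch.randn(256, 4)   # exit position and direction
y = torch.randn(256, 3)   # momentum at the origin
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The trained model would then be serialized with torch.onnx.export(...) so the
# downstream reconstruction can load it as an ONNX network (omitted here).
```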

What kind of change does this PR introduce?

Please check if this PR fulfills the following:

  • Tests for the changes have been added
  • Documentation has been added / updated
  • Changes have been communicated to collaborators

Does this PR introduce breaking changes? What changes might users need to make to their code?

No

Does this PR change default behavior?

Adds a new benchmark that trains a new ONNX neural network to reconstruct electron momentum through the beamline magnetic optics.

@simonge (Contributor, Author) commented Oct 21, 2025

Is the environment for CI on eicweb different from the main one we use via cvmfs? PyTorch doesn't appear to be available.

https://eicweb.phy.anl.gov/EIC/benchmarks/detector_benchmarks/-/jobs/6523074#L645

@veprbl (Member) commented Oct 21, 2025

> Is the environment for CI on eicweb different from the main one we use via cvmfs? PyTorch doesn't appear to be available.
>
> https://eicweb.phy.anl.gov/EIC/benchmarks/detector_benchmarks/-/jobs/6523074#L645

Default benchmark running uses the eic_ci flavor, not eic_xl. There is no way to run different containers unless we trigger two different pipelines. We could either add torch to https://eicweb.phy.anl.gov/containers/eic_container/-/blob/master/spack-environment/ci/spack.yaml?ref_type=heads or, more realistically, install it locally like in other benchmarks: https://eicweb.phy.anl.gov/EIC/benchmarks/detector_benchmarks/-/blob/master/benchmarks/backwards_ecal/config.yml?ref_type=heads
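A minimal sketch of what the install-it-locally route could look like in this benchmark's GitLab config, loosely modeled on the backwards_ecal example. The job name comes from later in this thread; the `extends` target and the script entry point are illustrative assumptions, not the actual configuration.

```yaml
bench:lowq2_reconstruction:
  extends: .det_benchmark          # assumed shared base job, as in other benchmarks
  stage: benchmarks
  before_script:
    - python3 -m pip install --user torch
  script:
    - python3 train_beamline_network.py   # hypothetical training entry point
```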

@simonge (Contributor, Author) commented Oct 21, 2025

> or, more realistically, install it locally like in other benchmarks: https://eicweb.phy.anl.gov/EIC/benchmarks/detector_benchmarks/-/blob/master/benchmarks/backwards_ecal/config.yml?ref_type=heads

Are the environments for different GitLab jobs linked at all? If you install awkward and uproot for the backwards_ecal benchmark, will they need to be reinstalled for this benchmark if it runs later?

@veprbl (Member) commented Oct 21, 2025

> or, more realistically, install it locally like in other benchmarks: https://eicweb.phy.anl.gov/EIC/benchmarks/detector_benchmarks/-/blob/master/benchmarks/backwards_ecal/config.yml?ref_type=heads
>
> Are the environments for different GitLab jobs linked at all? If you install awkward and uproot for the backwards_ecal benchmark, will they need to be reinstalled for this benchmark if it runs later?

Not linked at all, every job sees its own home directory.
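Since each job starts from a clean home directory, repeated pip installs can at least be sped up with GitLab's `cache` keyword, pointing pip's cache at a path inside the project directory. A sketch only; the cache key and paths are assumptions, not the repository's actual settings.

```yaml
# GitLab caches only paths under the project directory, hence PIP_CACHE_DIR.
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  key: "$CI_JOB_NAME"
  paths:
    - .cache/pip
```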

@simonge (Contributor, Author) commented Oct 22, 2025

The code now runs correctly on eicweb.

There are still some workflow issues I could use some help with:

  • The failure of bench:lowq2_reconstruction triggers retrain:lowq2_reconstruction.
  • Failure of those jobs results in the associated files being wiped immediately; these should be kept so that the plots can be reviewed.
  • Should new jobs be introduced to carry out the check after the benchmark has succeeded? Are the files for the check then available between the jobs?
  • Is there a better approach to managing this?
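On keeping the plots from failed jobs: GitLab's `artifacts:when: always` uploads a job's artifacts even when the job fails, and a check job in a later stage can then fetch them via `dependencies` or `needs`. A sketch of the idea; the artifact paths and script line are assumptions, not the benchmark's actual configuration.

```yaml
bench:lowq2_reconstruction:
  stage: benchmarks
  script:
    - python3 run_benchmark.py       # hypothetical benchmark entry point
  artifacts:
    when: always                     # keep plots for review even on failure
    paths:
      - results/lowq2/
    expire_in: 1 week
```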
