Integrate LLAMA #4003
bernhardmgruber wants to merge 2 commits into ComputationalRadiationPhysics:dev
Conversation
Also please note that PIConGPU has already moved on to Boost 1.74.

Awesome, I updated the PR description!
# Visual Studio configuration and output files
out
/.vs
I don't think this should go into the PIConGPU dev branch, since PIConGPU does not especially favour VS.
If this is really a big problem for you, then I can remove it. However, you already ignore so many other tools' temporary files (macOS stuff, VS Code, even Code::Blocks and NetBeans) that I don't think these additional ones would hurt.
if(IN_SRC_POS GREATER -1)
    message(FATAL_ERROR
        "PIConGPU requires an out of source build. "
        "Please remove \n"
        " - CMakeCache.txt\n"
        " - CMakeFiles/\n"
        "and create a separate build directory. "
        "See: INSTALL.rst")
endif()
#if(IN_SRC_POS GREATER -1)
#    message(FATAL_ERROR
#        "PIConGPU requires an out of source build. "
#        "Please remove \n"
#        " - CMakeCache.txt\n"
#        " - CMakeFiles/\n"
#        "and create a separate build directory. "
#        "See: INSTALL.rst")
#endif()
You should not change this; it is fine for your pull request, but not for PIConGPU in general.
Yeah, this should not get merged here. However, some IDEs like Visual Studio and CLion, when they spot a CMakeLists.txt, start to configure the project in a subfolder of the working copy so they can offer better help to the developer. This is still an out-of-source build, by the way, and you can build PIConGPU fine this way as well. It is just not the intended workflow when working on a cluster.
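For IDE setups like this, one option is a relaxed check that rejects only a strict in-source build (build directory equals source directory) while still permitting a build subfolder inside the working copy. This is only a sketch, not the actual PIConGPU logic; it uses the standard CMake variables:

```cmake
# Sketch: reject only a strict in-source build. A subfolder build inside the
# working copy (as created by VS/CLion open-folder mode) is still allowed,
# since it remains an out-of-source build.
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_BINARY_DIR)
    message(FATAL_ERROR
        "PIConGPU requires an out of source build. "
        "Please remove CMakeCache.txt and CMakeFiles/ "
        "and create a separate build directory. See: INSTALL.rst")
endif()
```

Whether a build subfolder inside the source tree should count as acceptable is a policy question for the maintainers; the check above merely shows how the restriction could be loosened.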
Velocity velocity;
const float3_X vel = velocity(particle[momentum_], attribute::getMass(weighting, particle));
const float3_X vel = velocity(static_cast<float3_X>(particle[momentum_]), attribute::getMass(weighting, particle));
Is this cast really necessary? As far as I know, momentum_ should already be float3_X.
Yes, this is necessary when particle[momentum_] returns a proxy reference, which it can with certain LLAMA mappings.
@sbastrakov or @psychocoderHPC, if you have a little bit of free time, could you take a look? I am not yet qualified to review this ;)
@bernhardmgruber you should also run clang-format on your pull request, after having loaded/installed it (or an LLVM package containing it), and fix the formatting errors the CI is complaining about. Note: the clang-format version is important, PIConGPU uses 12.0.1.
I believe the plan is that @bernhardmgruber comes to Dresden for some time relatively soon to finish the integration of LLAMA into PIConGPU.
I will resume this work on the 28th of November. I will be at HZDR for two weeks to complete it.
force-pushed from 8582027 to d06c3cf
So, I updated the branch to LLAMA.
Could you run clang-format-12 so that we can see what the CI is saying?
I know, but I only have clang-format-15 locally on Windows. On my Ubuntu 22.10, it is also no longer available: the LLVM apt repository only has the versions dev, 15 and 14 (secretly, the older ones might still be reachable, I did not check). How about you update to clang-format-14? :)
force-pushed from 158f295 to 9317b83
// struct ParticleFrameMapping : llama::mapping::BindAoS<false>
struct ParticleFrameMapping : llama::mapping::BindSoA<false>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<16>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<32>
// struct ParticleFrameMapping : llama::mapping::BindAoSoA<64>
{
    inline static constexpr bool splitVector = false;
};
Regarding "Find a better way to choose which memory layout to use in the param files":
I am now using a quoted metafunction (ParticleFrameMapping) which has an additional flag indicating whether the vectors should be split up into their components. That is good enough for most cases, I guess. It can cover all LLAMA mappings which depend only on static/compile-time information.
Unfortunately, updating the formatting requirement has low priority. The only nice feature that would come with clang-format-14 is east const. At the moment, lifting the requirement is also hard because we have a few students currently trying to bring the extensions they developed into PIConGPU. Switching the code style now would maybe introduce too much pain for them.
I can pull your branch and force-push the formatted version if you like.
We can do that once I am finished with everything else. I am actively working on this branch ATM to make the IO work as well. |
force-pushed from 9317b83 to b6ef6fe
force-pushed from 9f510dc to 3ff629c
Our dev server still has clang-format 12.0.1. In general, you can install/load it using spack via the corresponding llvm package. Also be aware that PIConGPU is sensitive even to the patch level: you need clang-format 12.0.1 specifically.
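A possible spack-based workflow might look like the following. This is a sketch: the spack llvm package does provide versioned installs, but the exact version availability and the file globs below are assumptions, not taken from the PIConGPU docs.

```shell
# Install and load the exact clang-format version PIConGPU pins (12.0.1);
# the patch level matters, so verify it afterwards.
spack install llvm@12.0.1
spack load llvm@12.0.1
clang-format --version   # should report 12.0.1

# Reformat the working copy in place (paths/globs are illustrative):
find include -name '*.hpp' -o -name '*.cpp' | xargs clang-format -i
```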
force-pushed from 3ff629c to 3acaf12
@bernhardmgruber I think we still have an issue #4392
force-pushed from 9336a35 to dbc221a
force-pushed from dbc221a to eae9230
force-pushed from bef18c5 to 50fc3ff
force-pushed from bfa7c8c to e9f1a67
Can I prevent the CI from running if I push changes, but keep this PR open?
force-pushed from 6fc8f5e to fdadfa9
#include "picongpu/simulation_defines.hpp"
#include "picongpu/param/memory.param"
This is needed by VS to configure the project in open-folder mode.
Also support LLAMA frames in the IO.
@ComputationalRadiationPhysics/picongpu-maintainers This is not planned anymore, is it?
A certain @psychocoderHPC fearfully admitted in the past that LLAMA may be a solution too complex to be maintained within PIConGPU. Hence, I expect this PR to eventually be closed without a merge.
I will soon create an issue for the LLAMA integration to keep a link to this PR before we close it :-(
This PR integrates LLAMA for handling the memory layout of several core data structures.
Open tasks:
- Update the LLAMA subtree: GIT_AUTHOR_NAME="Third Party" GIT_AUTHOR_EMAIL="picongpu@hzdr.de" git subtree pull --prefix thirdParty/llama git@github.com:alpaka-group/llama.git develop --squash
- Pull the upgrade to Boost 1.70 into a separate PR (done in CMake request boost 1.74.0+ #4144)

After checking out this branch, you need to run git submodule update to get LLAMA.