Tests: amrex.the_arena_init_size=0 #5785

Open. Wants to merge 1 commit into base: development.
Conversation

@ax3l (Member) commented Mar 20, 2025

`amrex.the_arena_init_size=0` is the default on CPU but not on GPU. In order to run tests in parallel on the same GPU, we need to change the default so that the first test does not allocate three quarters of the GPU's memory (which AMReX does by default to speed up later allocations from a pre-allocated heap).

This currently only applies to app tests, not yet to the Python CTests. We need to add an extra option for those, because we cannot inject CLI options. This will be a follow-up, because we need to inject a few variables (debugger breaking, init size, warning thresholds), and this might need a WarpX-Python-specific test environment variable or something of that sort.
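For the app tests, the change amounts to appending the AMReX ParmParse option to each test's command line. A minimal sketch (the executable name `./warpx` and inputs file `inputs` are placeholders, not names from this PR):

```shell
# Append the AMReX runtime parameter to an app test's command line so the
# arena starts empty and grows on demand instead of pre-allocating 3/4 of
# GPU memory. Executable and inputs-file names here are placeholders.
warpx_cmd="./warpx inputs amrex.the_arena_init_size=0"
echo "$warpx_cmd"
```

AMReX reads such `key=value` pairs from the command line with the same precedence rules as the inputs file, which is why this works for app tests but not for the Python-driven CTests, where no extra CLI arguments can be injected.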

@ax3l ax3l added backend: cuda Specific to CUDA execution (GPUs) component: tests Tests and CI backend: hip Specific to ROCm execution (GPUs) backend: sycl Specific to DPC++/SYCL execution (CPUs/GPUs) labels Mar 20, 2025
@ax3l ax3l requested review from WeiqunZhang and EZoni March 20, 2025 18:41
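One way the follow-up for the Python CTests could look: have the test harness read AMReX overrides from an environment variable and append them to the simulation's argument list. This is a hypothetical sketch only; the variable name `WARPX_TEST_AMREX_ARGS` is an assumption, not an existing WarpX mechanism:

```python
# Hypothetical sketch: read extra AMReX ParmParse arguments for Python-driven
# tests from an environment variable, since CLI options cannot be injected.
# The variable name WARPX_TEST_AMREX_ARGS is an assumption for illustration.
import os
import shlex


def amrex_test_overrides(default="amrex.the_arena_init_size=0"):
    """Return extra AMReX ParmParse arguments for tests as a list of tokens."""
    raw = os.environ.get("WARPX_TEST_AMREX_ARGS", default)
    return shlex.split(raw)


print(amrex_test_overrides())
```

The same mechanism could later carry the other variables mentioned above (debugger breaking, init size, warning thresholds) as additional space-separated `key=value` pairs.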
@EZoni (Member) left a comment


Thanks, @ax3l. Looks good to me.

@WeiqunZhang
I will wait for your approval and then merge.
