Replies: 2 comments
Usually, you would remove … Let's continue this in #4752.
Sure thing, thanks
On Mon, Mar 11, 2024 at 7:28 PM Axel Huebl wrote:
> Closed #4751 as resolved.
I have WarpX installed two different ways, from source and from conda, on the Stellar HPC cluster at Princeton/PPPL (information on the cluster HERE) and am just getting started running capacitive discharge simulations. I have the example Python input script and run it from within a Slurm batch script (sketched under my second question below).
As a primer, the startup lines printed when I run from my own build differ from those printed by WarpX installed from the conda distribution, and the conda build's startup output suggests that MPI is not in use.
Is this expected behavior, that the conda distribution does not use MPI? (In contrast, my build from source had the `WarpX_MPI` option ON.)

Secondly, I submit a batch script that looks like this:
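(The script below is only a schematic stand-in for my actual submission script; the input file name, the resource counts, and the absence of any module loads are placeholders rather than my exact setup.)

```bash
#!/bin/bash
#SBATCH --job-name=warpx-capacitive
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16   # 16 MPI ranks on the node
#SBATCH --cpus-per-task=4      # 4 cores reserved per rank for OpenMP
#SBATCH --time=01:00:00

# threads per MPI rank; this is the number reported by
# "OMP initialized with ... OMP threads" at startup
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# launch through srun so Slurm hands each of the 16 tasks its MPI rank/size;
# if the installed WarpX has no MPI support, each task instead runs its own
# serial copy, which matches the startup lines being printed N times
srun python picmi_script.py
```

With an MPI-enabled build, I would expect the startup line to report 16 MPI processes rather than 1.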
I have worked out that setting `OMP_NUM_THREADS` determines the number of threads reported in the startup line `OMP initialized with 4 OMP threads`, but I have not gotten the number in the line `MPI initialized with 1 MPI processes` to move beyond 1, even when submitting a job to more than one core and using the flag `-n <num_cores>`. Instead, when submitting a job with `--ntasks-per-node=N`, I find that the MPI/OMP startup lines are printed out N times. This makes me think that MPI is not utilizing the full system, and perhaps is even running N identical simulations in parallel. How can I alert WarpX that I have 16 MPI tasks allocated for my job?

As an additional high-level clarification, is it correct that WarpX (via AMReX) uses MPI to distribute the simulation domain across processes and then uses OpenMP to accelerate each process?
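(Two quick command-line checks that seem relevant here; whether conda-forge actually publishes an MPI-enabled `warpx` build variant is an assumption to verify, not something I know.)

```bash
# confirm how many tasks Slurm actually allocated to this job
echo "Slurm tasks: ${SLURM_NTASKS:-unset}"   # expect 16 for --ntasks-per-node=16 on one node

# see which warpx build conda installed and which other build variants exist;
# an MPI-capable variant, if one is published, usually says so in its build string
conda list warpx
conda search -c conda-forge warpx
```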
Thank you!