
[WIP] PICMI Usage in HPC Batch Scripts #4911

Draft · wants to merge 1 commit into base: development
[Draft] PICMI Hints in HPC Batch Scripts
Try to incorporate the same level of detail for PICMI as for executable usage in HPC batch scripts.
ax3l committed May 3, 2024
commit 1f301685705087bbe65171ceebea21f3d6f736f0
21 changes: 15 additions & 6 deletions Tools/machines/perlmutter-nersc/perlmutter_gpu.sbatch
@@ -24,9 +24,21 @@
 #SBATCH -o WarpX.o%j
 #SBATCH -e WarpX.e%j

-# executable & inputs file or python interpreter & PICMI script here
-EXE=./warpx
-INPUTS=inputs
+# PICMI Python script or executable?
+USE_PICMI=true
Review comment (Member):

This is a good idea. Maybe set USE_PICMI=false here so the default behavior is the same as previously.


+if [[ "${USE_PICMI}" = true ]]
+then
+    EXE=python3
+    INPUTS=PICMI_script.py
+    # for GPU-aware MPI in PICMI, set
+    #   Simulation(..., warpx_amrex_use_gpu_aware_mpi=True)
Review comment (Member):

Something I've thought about is allowing input parameters to be set from the command line when using Python, something like
python3 PICMI_script.py --inputs amrex.use_gpu_aware_mpi=1
But I'm not sure it's a good idea. For one, it would conflict with any argument parsing that the input script might do.
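
A minimal sketch of that idea (hypothetical; the --inputs flag and the parsing below are illustrative, not an existing WarpX feature): argparse.parse_known_args() would let batch-script overrides coexist with whatever argument parsing the input script already does.

import argparse

# Hypothetical --inputs flag mirroring the example above; parse_known_args()
# hands unrecognized flags back so the script's own parsing still works.
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("--inputs", nargs="*", default=[], metavar="KEY=VALUE",
                    help="input-parameter overrides, e.g. amrex.use_gpu_aware_mpi=1")
args, remaining = parser.parse_known_args()

# Collect overrides as a dict, e.g. {'amrex.use_gpu_aware_mpi': '1'};
# how these would be forwarded to WarpX is left open here.
overrides = dict(kv.split("=", 1) for kv in args.inputs)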

+    GPU_AWARE_MPI=""
+else
+    EXE=./warpx
+    INPUTS=inputs.in
+    GPU_AWARE_MPI="amrex.use_gpu_aware_mpi=1"
+fi

 # pin to closest NIC to GPU
 export MPICH_OFI_NIC_POLICY=GPU
@@ -36,9 +36,6 @@ export MPICH_OFI_NIC_POLICY=GPU
 export SRUN_CPUS_PER_TASK=16
 export OMP_NUM_THREADS=${SRUN_CPUS_PER_TASK}

-# GPU-aware MPI optimizations
-GPU_AWARE_MPI="amrex.use_gpu_aware_mpi=1"

 # CUDA visible devices are ordered inverse to local task IDs
 # Reference: nvidia-smi topo -m
 srun --cpu-bind=cores bash -c "
…
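
For reference, a minimal sketch of the PICMI-side setting mentioned in the diff comment above. Only warpx_amrex_use_gpu_aware_mpi comes from the diff; the surrounding grid and solver setup is an illustrative placeholder, not part of this PR.

from pywarpx import picmi

# Illustrative minimal setup; a real PICMI_script.py will differ.
grid = picmi.Cartesian3DGrid(
    number_of_cells=[64, 64, 64],
    lower_bound=[0.0, 0.0, 0.0],
    upper_bound=[1.0, 1.0, 1.0],
    lower_boundary_conditions=["periodic"] * 3,
    upper_boundary_conditions=["periodic"] * 3,
)
solver = picmi.ElectromagneticSolver(grid=grid, cfl=0.999)

sim = picmi.Simulation(
    solver=solver,
    max_steps=10,
    # WarpX-specific option from the diff: enables GPU-aware MPI in place
    # of passing amrex.use_gpu_aware_mpi=1 on the command line.
    warpx_amrex_use_gpu_aware_mpi=True,
)
sim.step(10)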