
Commit 803a26d

add first docs for Aurora
1 parent 061381a commit 803a26d

File tree

1 file changed (+61, -0 lines)


docs/content/useful/cluster-setups.md

Lines changed: 61 additions & 0 deletions
@@ -561,6 +561,67 @@ This section goes over some instructions on how to compile & run the `Entity` on
_WIP_

=== "`Aurora` (ANL)"

[`Aurora`](https://docs.alcf.anl.gov/aurora/) uses Intel PVC nodes with 6 GPUs per node. Development of `entity` for Aurora is currently ongoing; use the following docs with caution and check in with @LudwigBoess about potential changes.

**Modules to load**

You can load the installed dependencies with:

```sh
module load adios2
module load autoconf cmake
```

The `adios2` module automatically loads the related `kokkos` module.

I would recommend saving the module configuration for easy loading within the PBS job:

```sh
module save entity
```

You can compile `entity` with:

```sh
cmake -B build -D pgen=streaming -D precision=single -D mpi=ON -D output=ON -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx
```
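
After configuring, the build itself is a standard CMake invocation. A minimal sketch (the parallel job count is an illustrative choice, not from the original docs):

```sh
# build the targets configured in the 'build' directory above;
# -j sets the number of parallel compile jobs
cmake --build build -j 16
```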

**Running `entity`**

Aurora uses [PBS](https://docs.alcf.anl.gov/running-jobs/?h=pbs) for workload management.
Each Intel PVC GPU is split into two tiles, and it is recommended to launch one MPI rank per tile.
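
The rank counts used in the job script below follow from this layout. As a small sketch (variable names here are illustrative, not ALCF-defined):

```sh
# Derive MPI rank counts from the node count and the GPU tiling.
N_NODES=1          # must match '#PBS -l select=...'
GPUS_PER_NODE=6    # Intel PVC GPUs per Aurora node
TILES_PER_GPU=2    # each PVC GPU exposes two tiles
NRANKS_PER_NODE=$(( GPUS_PER_NODE * TILES_PER_GPU ))
NTOTRANKS=$(( N_NODES * NRANKS_PER_NODE ))
echo "${NTOTRANKS} total ranks, ${NRANKS_PER_NODE} per node"
```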
```sh
#!/bin/bash -l
#PBS -A <project_name>
#PBS -N weibel_test
#PBS -l select=1 # number of nodes to use
#PBS -l walltime=00:05:00
#PBS -l filesystems=flare # replace with the filesystem of your project
#PBS -k doe
#PBS -l place=scatter
#PBS -q debug

NTOTRANKS=12 # 2*6*N_nodes
NRANKS_PER_NODE=12 # 2*6
NDEPTH=1 # this is only relevant for the CPU pinning

# change to the directory from which the job was submitted
cd $PBS_O_WORKDIR

# restore the module collection saved above
module restore entity

# only relevant for CPU pinning and to avoid Kokkos complaints
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

mpiexec -n ${NTOTRANKS} --ppn ${NRANKS_PER_NODE} --depth=${NDEPTH} --cpu-bind=depth gpu_tile_compact.sh ./entity.xc -input weibel.toml
```
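
Assuming the script above is saved as, say, `submit.sh` (filename hypothetical), it can be submitted and monitored with the usual PBS commands:

```sh
qsub submit.sh   # submit the job; prints the job ID
qstat -u $USER   # check the status of your queued/running jobs
```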

_Last updated: 7/24/2025_

=== "`LUMI` (CSC)"

[`LUMI`](https://www.lumi-supercomputer.eu/) cluster is located in Finland. It is equipped with 2978 nodes, each with 4 AMD MI250X GPUs and a single 64-core AMD EPYC "Trento" CPU. The required modules to be loaded are:
