Set MPI-executable for ParFlow to $SLURM_SRUN when Slurm #90
base: master
Conversation
This is needed when GCC+OpenMPI is used; it has been tested on JURECA.
With Intel+ParaStationMPI the launcher was already correct, and there is
no effect when compiling with Intel LLVM. The change only takes effect if
${SLURM_SRUN} is set, which implies a Slurm system (as is already assumed).
Resolves: #87
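As I read it, the change amounts to something like the following CMake sketch (the exact guard and the spelling of the variables in the actual patch may differ; `SLURM_SRUN` might instead be read from the environment via `$ENV{SLURM_SRUN}`):

```cmake
# Sketch (assumed shape of the change): when Slurm provides SLURM_SRUN,
# use srun as ParFlow's MPI launcher instead of the MPI distribution's mpirun.
if(DEFINED SLURM_SRUN)
  set(MPIEXEC_EXECUTABLE "${SLURM_SRUN}")
endif()
# On non-Slurm systems SLURM_SRUN is unset, so MPIEXEC_EXECUTABLE keeps
# whatever FindMPI detected and the change is a no-op.
```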
The only additional assumption I can think of is that some system where
This is the only place where I found a case of it being set. It is for a specific machine (the same holds for two other NERSC machines). Instead of
It's not okay. On Marvin it fails with this message: whereas
WDYT?
How about this solution:
I think I need to add

```cmake
if(JSC)
  set(MPIEXEC_EXECUTABLE "${SLURM_SRUN}")
endif()
```

in

How do I properly check if I'm on JSC hardware?
It's best to keep the CMake scripts as system-agnostic as possible. All machine-specific settings should be configured through the environment files (that's why they exist). In this case you can set
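If I understand the suggestion, the machine-specific part then lives entirely in the per-machine environment file, and the CMake side only consumes it. A sketch of that split (the variable name `SLURM_SRUN` is taken from the discussion; the cache-variable shape is my assumption):

```cmake
# Sketch: no if(JSC)-style hardware checks in the CMake scripts.
# A per-machine environment file exports SLURM_SRUN (e.g. the path to srun);
# the build script simply honours it when present and stays system-agnostic.
if(NOT DEFINED MPIEXEC_EXECUTABLE AND DEFINED ENV{SLURM_SRUN})
  set(MPIEXEC_EXECUTABLE "$ENV{SLURM_SRUN}" CACHE FILEPATH "MPI launcher")
endif()
```

Keeping the value out of CMake means adding a new machine only requires a new environment file, not a change to the build logic.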
I've done this in 30bfbcb, but I cannot reproduce the original problem anymore. I went back to
Replaces: #88