
Preconfigured Sites


Overview

This page provides general information on the supported platforms, such as the location of the spack-stack installations on tier 1 platforms and instructions on how to set up an environment for building spack-stack environments. Information on using spack-stack environments for development of downstream applications is available in the release-specific end-user documentation linked from the top page of the Wiki.

Note: To support users who always want the latest release, soft links pointing to the modulefiles of the latest Unified Environment release are provided under the main spack-stack directory on NOAA RDHPCS tier 1 platforms. To use them, run module use /path/to/spack-stack/latest-ue-<compiler>, then load the spack-stack meta-modules as usual. These soft links should be updated when each release is finalized.
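A minimal sketch of this workflow, assuming GCC as the compiler; the path and the meta-module names are placeholders that differ per platform and release:

module use /path/to/spack-stack/latest-ue-gcc   # hypothetical path
module load stack-gcc                           # meta-module names are assumptions
module load stack-openmpi
module load stack-python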

| Organization | System | Compilers | Location of top-level spack-stack directory | Maintainers |
| --- | --- | --- | --- | --- |
| **HPC systems** | | | | |
| MSU | Hercules | GCC, oneAPI | /apps/contrib/spack-stack/ | EPIC / JCSDA |
| MSU | Orion | oneAPI | /apps/contrib/spack-stack/ | EPIC / JCSDA |
| NASA | Discover SCU17 | GCC, Intel | /gpfsm/dswdev/jcsda/spack-stack/scu17/ | JCSDA |
| NASA | NAS | GCC, Intel | /nobackup/gmao_SIteam/spack-stack/ | GMAO |
| NCAR-Wyoming | Derecho | GCC, Intel | /glade/work/epicufsrt/contrib/spack-stack/derecho/ | EPIC / JCSDA |
| NOAA (NCEP) | Acorn | Intel | /lfs/h1/emc/nceplibs/noscrub/spack-stack/ | NOAA-EMC |
| NOAA (RDHPCS) | Gaea C6 | Intel | /ncrc/proj/epic/spack-stack/c6/ | EPIC / NOAA-EMC |
| NOAA (RDHPCS) | Hera | GCC, Intel | /contrib/spack-stack/ | EPIC / NOAA-EMC |
| NOAA (RDHPCS) | Ursa | GCC, Intel | /contrib/spack-stack/ | EPIC / NOAA-EMC |
| U.S. Navy (NRL) | Atlantis | GCC, oneAPI, LLVM | /gpfs/neptune/spack-stack/ | NRL |
| U.S. Navy (HPCMP) | Blueback | GCC, oneAPI | /p/app/projects/NEPTUNE/spack-stack/ | NRL |
| U.S. Navy (HPCMP) | Narwhal | GCC, oneAPI | /p/app/projects/NEPTUNE/spack-stack/ | NRL |
| U.S. Navy (HPCMP) | Nautilus | GCC, oneAPI | /p/app/projects/NEPTUNE/spack-stack/ | NRL |
| **Cloud platforms** | | | | |
| NOAA (RDHPCS) | RDHPCS Parallel Works | Intel | /contrib/spack-stack-rocky8/ | EPIC / JCSDA |
| U.S. Navy (HPCMP) | HPCMP Parallel Works | GCC, oneAPI | /project/spack-stack/ | NRL |

Many of these sites rely on manually installed external packages (see respective site configurations). In cases where such a package is not available from the OS or from a module provided by the system administrators, the spack-stack maintainers for the platform may have installed the package manually, following a procedure similar to the generic instructions provided elsewhere in this documentation. Check for a file named README.md in the site config directory for additional information.

MSU Orion

The following is required for building new spack environments with any supported compiler on this platform.

To access the /apps/contrib/spack-stack directory, first log into orion-devel-1 or orion-devel-2, then sudo to the role-epic account; a full illustrative session is sketched below.

module purge
module load spack-managed-x86-64_v3/v1.0
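For illustration, a complete session might look like the following; the exact sudo invocation is an assumption and may differ with site policy:

ssh orion-devel-1
sudo su - role-epic                        # assumed invocation; site policy may differ
module purge
module load spack-managed-x86-64_v3/v1.0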

MSU Hercules

The following is required for building new spack environments with any supported compiler on this platform.

To access the /apps/contrib/spack-stack directory, first log into hercules-devel-1 or hercules-devel-2, then sudo to the role-epic account (see the illustrative session in the MSU Orion section above).

module purge
module load spack-managed-x86-64_v3/v1.0

NASA Discover SCU17

The following is required for building new spack environments with any supported compiler on this platform.

module purge
module use /discover/swdev/gmao_SIteam/modulefiles-SLES15
module use /discover/swdev/jcsda/spack-stack/scu17/modulefiles

NASA NAS

The following is required for building new spack environments with any supported compiler on this platform.

module purge
module use /nobackup/gmao_SIteam/modulefiles

U.S. Navy Atlantis, Blueback, Narwhal, Nautilus

The following is required for building new spack environments with any supported compiler on this platform.

umask 0022
module purge

NCAR-Wyoming Derecho

The following is required for building new spack environments with any supported compiler on this platform.

module --force purge

NOAA Acorn (WCOSS2 test system)

On WCOSS2, OpenSUSE sets CONFIG_SITE, which causes libraries to be installed in lib64, breaking the lib assumption made by some packages. Therefore, CONFIG_SITE should remain set to empty in compilers.yaml.
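A minimal compilers.yaml excerpt illustrating this; the compiler spec, paths, and operating system are placeholders, and the environment block is the relevant part:

compilers:
- compiler:
    spec: intel@2021.10.0            # placeholder version
    operating_system: opensuse15     # placeholder
    modules: []
    paths:
      cc: /path/to/icc               # placeholder paths
      cxx: /path/to/icpc
      f77: /path/to/ifort
      fc: /path/to/ifort
    environment:
      set:
        CONFIG_SITE: ''              # keep empty so libraries install to lib, not lib64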

For official deployments of spack-stack on Acorn, be mindful of umask and group ownership, as these can be finicky. The umask value should be 002; otherwise, various files can be assigned to the wrong group. In any case, it is a good idea to run something to the effect of chgrp -R nceplibs <spack-stack dir> and chmod -R o+rX <spack-stack dir> after the entire installation is done.
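In concrete terms, with an illustrative installation directory:

umask 002                                 # set before starting the installation
# ... run the spack-stack installation ...
chgrp -R nceplibs /path/to/spack-stack    # directory is a placeholder
chmod -R o+rX /path/to/spack-stack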

Note that for the installation using Intel 19, the system GCC (7.5.0) is used as the backend for the Intel compiler; more recent versions of GCC are not reliably compatible. Intel 19 is also not reliably compatible with the C++17 standard, so without a handful of package version restrictions, certain package builds will break, usually in the configure stage.
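As a sketch, a version restriction of this kind would go into packages.yaml; the package name and version below are hypothetical:

packages:
  example-pkg:                # hypothetical package pinned to a pre-C++17 release
    version: ['1.2.3']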

System-wide spack software installations, which are not associated with spack-stack, are maintained by NCO on this platform.

NOAA RDHPCS Gaea C6

The following is required for building new spack environments with Intel on this platform. Do not use module purge on Gaea!

# These modules should be loaded by default; if not, load (or swap) them with:
module load PrgEnv-intel/8.5.0
module load intel-classic/2023.2.0
module load cray-mpich/8.1.30

On Gaea, running module available without the -t option can lead to an error:

/usr/bin/lua5.3: /opt/cray/pe/lmod/lmod/libexec/Spider.lua:568: stack overflow
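Listing the modules in terse mode avoids this:

module -t available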

NOAA RDHPCS Hera

The following is required for building new spack environments with any supported compiler on this platform.

module purge

On Hera, a dedicated node (hecflow01) exists for ecflow server jobs. Users starting ecflow_server on the regular login nodes will see their servers killed every few minutes and may be barred from accessing the system.
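An illustrative session for starting a server in the right place; the port is a placeholder and standard ecFlow commands are assumed:

ssh hecflow01
ecflow_server --port=<port> &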

NOAA RDHPCS Ursa

The following is required for building new spack environments with any supported compiler on this platform.

module purge

On Ursa, a dedicated node for ecflow server jobs is not currently available. It is expected that such a node will be provided when Ursa becomes a production host.

NOAA Parallel Works (AWS, Azure, Gcloud)

The following is required for building new spack environments with any supported compiler on this platform. The default module path needs to be removed; otherwise, spack detects the system as Cray.

module purge

U.S. Navy Parallel Works (AWS)

The following is required for building new spack environments with GNU on this platform.

umask 0022
module purge
