Unofficial instructions for running Isaac Sim and Isaac Lab on a cluster using Singularity/Apptainer.
For running Isaac Sim/Lab with Docker, please refer to j3soon/docker-isaac-sim for more details.
Please note that we use singularity commands here for backward compatibility. You can replace singularity with apptainer without any further changes.
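For example, the Isaac Sim pull command shown below could equally be written as:
apptainer pull docker://nvcr.io/nvidia/isaac-sim:5.1.0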
The commands here are for testing on a local PC. For running on a cloud cluster, see the Cloud Cluster section.
Retrieve the SIF file from the official Docker registry:
singularity pull docker://nvcr.io/nvidia/isaac-sim:5.1.0
Alternatively, if you already have the Docker image locally, you can build the SIF file from the Docker image:
docker pull nvcr.io/nvidia/isaac-sim:5.1.0
singularity build isaac-sim_5.1.0.sif docker-daemon:nvcr.io/nvidia/isaac-sim:5.1.0
Run the container:
# with headless streaming mode
singularity run --env "ACCEPT_EULA=Y" --nv isaac-sim_5.1.0.sif
# or with GUI mode (requires Ubuntu Desktop)
singularity exec --env "ACCEPT_EULA=Y" --nv isaac-sim_5.1.0.sif /isaac-sim/runapp.sh
# or interactive shell in the container
singularity shell --env "ACCEPT_EULA=Y" --nv isaac-sim_5.1.0.sif
# or run headless scripts
singularity exec --env "ACCEPT_EULA=Y" --nv isaac-sim_5.1.0.sif /isaac-sim/python.sh -u /isaac-sim/standalone_examples/api/isaacsim.core.api/time_stepping.py
singularity exec --env "ACCEPT_EULA=Y" --nv isaac-sim_5.1.0.sif /isaac-sim/python.sh -u /isaac-sim/standalone_examples/api/isaacsim.core.api/simulation_callbacks.py
If you need to write to the container, follow the Isaac Lab instructions below to create an overlay directory and run the container with the --overlay option.
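For example, a minimal sketch of the same overlay-directory approach applied to Isaac Sim (the overlay directory name here is an arbitrary choice, not an official convention):
# Create directory for overlay
mkdir -p isaac-sim_5.1.0
# interactive shell with a writable overlay
singularity shell --env "ACCEPT_EULA=Y" --nv --overlay isaac-sim_5.1.0 isaac-sim_5.1.0.sif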
The commands here are for testing on a local PC. For running on a cloud cluster, see the Cloud Cluster section.
Retrieve the SIF file from the official Docker registry:
singularity pull docker://nvcr.io/nvidia/isaac-lab:2.3.1
Alternatively, if you already have the Docker image locally, you can build the SIF file from the Docker image:
docker pull nvcr.io/nvidia/isaac-lab:2.3.1
singularity build isaac-lab_2.3.1.sif docker-daemon:nvcr.io/nvidia/isaac-lab:2.3.1
Run the container:
# Create directory for overlay
mkdir -p isaac-lab_2.3.1
# run H1 pre-trained policy with GUI mode (requires Ubuntu Desktop)
singularity exec --env "ACCEPT_EULA=Y" --nv --overlay isaac-lab_2.3.1 isaac-lab_2.3.1.sif bash -c "cd /workspace/isaaclab && ./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/play.py --task Isaac-Velocity-Rough-H1-v0 --num_envs 32 --use_pretrained_checkpoint"
# or interactive shell in the container
singularity shell --env "ACCEPT_EULA=Y" --nv --overlay isaac-lab_2.3.1 isaac-lab_2.3.1.sif
Or run headless training with a custom codebase:
# clone your Isaac Lab codebase
git clone -b v2.3.1 https://github.com/isaac-sim/IsaacLab
# /isaac-sim only exists inside the container image, so this symlink is dangling on the host and resolves once Isaac Lab is run via singularity
ln -s /isaac-sim IsaacLab/_isaac_sim
# start training
singularity exec --env "ACCEPT_EULA=Y" --nv --overlay isaac-lab_2.3.1 isaac-lab_2.3.1.sif bash -c "cd IsaacLab && time ./isaaclab.sh -p scripts/reinforcement_learning/rl_games/train.py --task Isaac-Cartpole-v0 --headless"
# or interactive shell in the container
singularity shell --env "ACCEPT_EULA=Y" --nv --overlay isaac-lab_2.3.1 isaac-lab_2.3.1.sif
In most cases, you want to run real training workloads on a cloud cluster.
Example: NCHC Nano5 Cluster
Note: This cluster currently only has H100/H200 GPUs, which lack the RT cores found in L40S or RTX PRO 6000 GPUs. Therefore, you can only run Isaac Lab training without camera rendering. (reference)
Upload the SIF file and codebase to the cluster:
On your local PC:
# Get the SIF file and codebase
docker pull nvcr.io/nvidia/isaac-lab:2.3.1
# note: --sandbox builds a writable directory tree (here named with a .sif suffix) instead of a single compressed SIF image; the directory is compressed below for upload
singularity build --sandbox --fakeroot isaac-lab_2.3.1_nano5.sif docker-daemon://nvcr.io/nvidia/isaac-lab:2.3.1
tar -czvf isaac-lab_2.3.1_nano5.tar.gz isaac-lab_2.3.1_nano5.sif
# or use pigz for faster compression, as follows:
# tar -I pigz -cvf isaac-lab_2.3.1_nano5.tar.gz isaac-lab_2.3.1_nano5.sif
git clone -b v2.3.1 https://github.com/isaac-sim/IsaacLab
rm -rf IsaacLab/.git
tar --exclude-vcs -czvf IsaacLab.tar.gz IsaacLab
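# The SLURM batch script uploaded below is not included in this guide; the following
# is only a hedged sketch of what isaac-lab-cartpole_nano5.slurm might contain.
# The partition placeholder and resource requests are assumptions (they mirror the
# interactive salloc example later); the output filename matches the tail command used later.
cat > isaac-lab-cartpole_nano5.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=isaac-lab-cartpole
#SBATCH --partition=<YOUR_PARTITION>
#SBATCH --ntasks=1
#SBATCH --gpus-per-node=1
#SBATCH --output=%j_isaac-lab-cartpole.out
singularity exec --env "ACCEPT_EULA=Y" --nv isaac-lab_2.3.1_nano5.sif bash -c "cd IsaacLab && time ./isaaclab.sh -p scripts/reinforcement_learning/rl_games/train.py --task Isaac-Cartpole-v0 --headless"
EOF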
# Upload the SIF file and codebase to the cluster
NCHC_USERNAME=<YOUR_NCHC_USERNAME>
sftp ${NCHC_USERNAME}@nano5.nchc.org.tw
# in the SFTP session, upload the SIF file, codebase, and SLURM script
sftp> put isaac-lab_2.3.1_nano5.tar.gz
sftp> put IsaacLab.tar.gz
sftp> put isaac-lab-cartpole_nano5.slurm
sftp> exit
# SSH into the cluster
ssh ${NCHC_USERNAME}@nano5.nchc.org.tw
# in the SSH session, extract the SIF file and codebase
tar xzvf isaac-lab_2.3.1_nano5.tar.gz
tar xzvf IsaacLab.tar.gz
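# or, if pigz is available on the login node, decompress the larger archive faster, as follows:
# tar -I pigz -xvf isaac-lab_2.3.1_nano5.tar.gz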
# create Isaac Sim symbolic link for Isaac Lab
ln -s /isaac-sim IsaacLab/_isaac_sim
In the same SSH session, test interactive mode:
export NCHC_ACCOUNT_ID=<YOUR_NCHC_ACCOUNT_ID>
salloc --partition=dev --account=${NCHC_ACCOUNT_ID} --ntasks=1 --gpus-per-node=1
# in the allocated session, start the container
singularity exec --env "ACCEPT_EULA=Y" --nv isaac-lab_2.3.1_nano5.sif bash
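# (optional) in the container, first verify that the allocated GPU is visible
nvidia-smi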
# in the container, run
cd IsaacLab && ./isaaclab.sh -p scripts/reinforcement_learning/rl_games/train.py --task Isaac-Cartpole-v0 --headless
# Make sure to press Ctrl+D twice to exit the container and deallocate the session. This prevents resource wastage and avoids unexpected costs.
In the same SSH session, submit the batch job:
export NCHC_ACCOUNT_ID=<YOUR_NCHC_ACCOUNT_ID>
sbatch --account=${NCHC_ACCOUNT_ID} isaac-lab-cartpole_nano5.slurm
View the job status:
squeue | grep $USER
tail -f <SLURM_JOB_ID>_isaac-lab-cartpole.out
After the job is finished, check the logs and checkpoints:
tree IsaacLab/logs
References:
Alternatively, see the official Isaac Lab instructions and scripts for more details.
This project has been made possible through the support of ElsaLab and NVIDIA AI Technology Center (NVAITC).