This directory contains a self‑contained helper script for spinning up a GPU-enabled kind cluster using NVIDIA's nvkind.
| File | Purpose |
|---|---|
| `bootstrap.sh` | Main automation script (cluster create → device plugin → smoke test). |
| `nvkind-ingress.yaml` | kind cluster configuration template consumed by nvkind. |
Install the dependencies first (Docker, kind, kubectl, Helm, Go, Make, jq, nvkind). The `bootstrap/install-deps.sh` script at the repo root can install these for you.
Your host must expose NVIDIA GPUs and have the driver and CUDA toolkit stack installed. If Docker is not already configured for the NVIDIA runtime, run the script with `CONFIGURE_TOOLKIT=true`.
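To check whether your Docker daemon already has the NVIDIA runtime before deciding on `CONFIGURE_TOOLKIT=true`, a quick sketch (the `daemon.json` path is Docker's default; the `nvidia-ctk` commands are what the NVIDIA Container Toolkit documents for this, and may not be the exact commands the script runs):

```shell
# Look for an nvidia runtime entry in Docker's daemon config.
cfg=/etc/docker/daemon.json
if grep -q '"nvidia"' "$cfg" 2>/dev/null; then
  echo "nvidia runtime already present in $cfg"
else
  echo "not configured; run: sudo nvidia-ctk runtime configure --runtime=docker && sudo systemctl restart docker"
fi
```

If the runtime is already present, you can leave `CONFIGURE_TOOLKIT` at its default and skip the Docker restart.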
```bash
# optional: tweak defaults
export CLUSTER_NAME=kind-gpu
export CONFIGURE_TOOLKIT=true  # only needed the first time on a new host

bootstrap/bootstrap.sh
```

The script will:
- Clone/build `nvkind` (or reuse an existing checkout)
- Recreate the named cluster via `nvkind` using the provided template
- Install the NVIDIA device plugin with `runtimeClassName=nvidia`
- Verify that GPU allocatable resources are visible
- Run a simple `nvidia-smi` smoke test pod (skip via `SKIP_SMOKE_TEST=true`)
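The GPU-visibility step above can be reproduced by hand with jq. A sketch, with an inline sample standing in for `kubectl get nodes -o json` (the real script's exact check may differ):

```shell
# Sum nvidia.com/gpu allocatable across nodes; nodes without GPUs
# contribute 0. In practice feed this from: kubectl get nodes -o json
nodes='{"items":[{"status":{"allocatable":{"nvidia.com/gpu":"2"}}},{"status":{"allocatable":{"cpu":"8"}}}]}'
total=$(echo "$nodes" | jq '[.items[] | (.status.allocatable["nvidia.com/gpu"] // "0" | tonumber)] | add')
echo "GPUs allocatable: $total"   # → GPUs allocatable: 2
```

A total of 0 after bootstrap usually means the device plugin pods are not running or the NVIDIA runtime is not wired into the kind nodes.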
Key environment variables:
| Variable | Default | Description |
|---|---|---|
| `CLUSTER_NAME` | `kind-gpu` | Name of the kind cluster |
| `NVKIND_DIR` | `../nvkind` | Location of the nvkind checkout/binary |
| `TEMPLATE_PATH` | `nvkind-ingress.yaml` | Cluster template passed to nvkind |
| `DELETE_EXISTING` | `true` | Whether to delete an existing cluster with the same name |
| `CONFIGURE_TOOLKIT` | `false` | Configure the NVIDIA container toolkit and restart Docker |
| `SKIP_SMOKE_TEST` | `false` | Skip the CUDA smoke test pod |
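Scripts like this typically apply the defaults above via shell parameter expansion; a sketch (illustrative, not necessarily the exact lines in `bootstrap.sh`):

```shell
# Each variable falls back to the documented default when unset,
# so exporting it before the run overrides the behavior.
CLUSTER_NAME="${CLUSTER_NAME:-kind-gpu}"
DELETE_EXISTING="${DELETE_EXISTING:-true}"
SKIP_SMOKE_TEST="${SKIP_SMOKE_TEST:-false}"
echo "cluster=$CLUSTER_NAME delete_existing=$DELETE_EXISTING skip_smoke=$SKIP_SMOKE_TEST"
```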
Once the script prints `Cluster '<name>' is ready for GPU workloads`, you can start installing your application stack.
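As a first workload, you can run a one-off GPU pod yourself. A sketch, where the CUDA image tag is an assumption and the jq call only sanity-checks the overrides JSON before handing it to kubectl:

```shell
# Pod override: nvidia runtime class plus a request for one GPU.
overrides='{"spec":{"runtimeClassName":"nvidia","containers":[{"name":"gpu-check","image":"nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04","command":["nvidia-smi"],"resources":{"limits":{"nvidia.com/gpu":"1"}}}]}}'
echo "$overrides" | jq -e '.spec.containers[0].resources.limits["nvidia.com/gpu"] == "1"' >/dev/null && echo "overrides OK"
# then:
# kubectl run gpu-check --rm -it --restart=Never \
#   --image=nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04 --overrides="$overrides"
```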