Automated scripts for setting up GPU-enabled LXC containers on Proxmox with persistent device mapping.
```bash
cd /root
git clone https://github.com/jammsen/proxmox-setup-scripts.git
cd proxmox-setup-scripts
./guided-install.sh
```

The guided installer provides:
- ✅ Interactive menu with progress tracking
- ✅ Auto-detection of completed steps (shows green checkmarks)
- ✅ Smart defaults - "all" runs Basic Host Setup with confirmations
- ✅ Flexible execution - Run individual scripts, ranges, or all steps
- ✅ Progress persistence - Resume where you left off
```bash
cd /root
git clone https://github.com/jammsen/proxmox-setup-scripts.git
cd proxmox-setup-scripts
cd host

# For NVIDIA GPUs:
./004-install-nvidia-drivers.sh

# For AMD GPUs:
./003-install-amd-drivers.sh

# Setup udev rules for persistent GPU device paths:
./006-setup-udev-gpu-rules.sh
```
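The udev step exists to keep GPU device permissions and paths stable across reboots. A rule of this kind might look like the following — an illustrative sketch, not the file the script actually installs:

```
# /etc/udev/rules.d/99-gpu-lxc.rules  (illustrative example only)
# Grant group access to DRM nodes so the LXC device mapping can open them
SUBSYSTEM=="drm", KERNEL=="card*", GROUP="video", MODE="0660"
SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="render", MODE="0660"
```

After editing rules, reload them with `udevadm control --reload-rules && udevadm trigger`.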
```bash
cd /root/proxmox-setup-scripts/host
./011-create-gpu-lxc.sh
```

This script will:

- Auto-detect available GPUs
- Create an LXC container with GPU passthrough
- Configure persistent PCI-based device mapping
- Automatically mount the scripts directory at /root/proxmox-setup-scripts inside the container
- Enable SSH access
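For context, PCI-based mapping is persistent because each /dev/dri/by-path name encodes the GPU's fixed PCI address, while the card0/card1 nodes they point at can renumber between reboots. A small sketch that resolves those stable names (`list_gpu_bypath` is a hypothetical helper, not part of the repo):

```shell
#!/usr/bin/env bash
# List stable by-path GPU symlinks and the card node each currently points at.
list_gpu_bypath() {
  local dir="${1:-/dev/dri/by-path}"
  local link
  for link in "$dir"/pci-*-card*; do
    [ -e "$link" ] || continue   # no GPUs present (or directory missing)
    printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
  done
}

list_gpu_bypath   # on a GPU host, prints e.g. pci-0000:01:00.0-card -> /dev/dri/card1
```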
```bash
# Run installation script directly from host into container
pct exec <CONTAINER_ID> -- bash /root/proxmox-setup-scripts/lxc/install-docker-and-container-runtime-in-lxc-guest.sh
```

Or, from inside the container:

```bash
# SSH into container
ssh root@<CONTAINER_IP>

# Navigate to mounted scripts
cd /root/proxmox-setup-scripts/lxc

# Run installation
./install-docker-and-container-runtime-in-lxc-guest.sh
```
```bash
# From host:
pct exec <CONTAINER_ID> -- docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu24.04 nvidia-smi

# From inside container:
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu24.04 nvidia-smi
```

- Persistent GPU Mapping: Uses PCI paths (/dev/dri/by-path/pci-*) instead of card0/card1
- Automatic GPU Detection: Detects AMD and NVIDIA GPUs with vendor filtering
- Scripts Available Inside Container: Repository auto-mounted at /root/proxmox-setup-scripts
- Interactive Setup: User-friendly prompts with sensible defaults
- Full Testing Suite: PyTorch CUDA validation included
```
proxmox-setup-scripts/
├── guided-install.sh            # Interactive guided installer (START HERE!)
├── host/                        # Host-side scripts (run on Proxmox)
│   ├── 000-list-gpus.sh
│   ├── 001-install-tools.sh
│   ├── 002-setup-igpu-vram.sh
│   ├── 003-install-amd-drivers.sh
│   ├── 004-install-nvidia-drivers.sh
│   ├── 005-verify-nvidia-drivers.sh
│   ├── 006-setup-udev-gpu-rules.sh
│   ├── 007-upgrade-proxmox.sh
│   └── 011-create-gpu-lxc.sh    # Main LXC creation script
├── lxc/                         # Guest-side scripts (run in LXC container)
│   ├── install-docker-and-container-runtime-in-lxc-guest.sh
│   └── troubleshoot-nvidia-docker.sh
├── includes/                    # Shared libraries
│   └── colors.sh
└── README.md
```
The guided-install.sh script provides an interactive menu:
```bash
./guided-install.sh
```

- `all` - Run all Basic Host Setup scripts (000-009) with interactive prompts [DEFAULT]
  - ✅ Shows detailed description for each script
  - ✅ Displays completion status (already completed ✓)
  - ✅ Always asks before running each script (never auto-skips)
  - ✅ Default answer is "Y" - just press Enter to continue
  - ✅ Press "n" to skip any script you don't need
  - ✅ Never runs LXC Container Setup (010-019) automatically
- `<number>` - Run specific script by number
  - Example: `004` runs NVIDIA driver installation
- `<start-end>` - Run range of scripts
  - Example: `001-006` runs tools, drivers, and udev setup
- `reset` - Clear progress tracking to start fresh
- `quit` - Exit the installer
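Under the hood, a range input like 001-006 has to be expanded into individual script numbers before dispatch. A minimal sketch of that expansion (`expand_range` is a hypothetical helper; the installer's actual parsing may differ):

```shell
# Expand "001-006" into one zero-padded number per line (uses GNU seq).
# expand_range is a made-up name, not a function from the repo.
expand_range() {
  local start="${1%-*}" end="${1#*-}"
  seq -f '%03g' "$start" "$end"
}

expand_range 001-006   # prints 001 through 006, one per line
```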
The installer automatically detects completed steps by checking:
- Installed packages (htop, nvtop, nvidia-smi)
- Loaded kernel modules (amdgpu, nvidia)
- Configuration files (udev rules, kernel parameters)

Completed steps are shown with green checkmarks (✓), and progress is saved to the .install-progress file.
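The tracking itself can be as simple as a marker file. A sketch of the idea — the on-disk format of .install-progress is an assumption here, not documented behavior:

```shell
# Hypothetical sketch of marker-file progress tracking.
# The actual format used by guided-install.sh may differ.
PROGRESS_FILE=".install-progress"

is_done() {    # succeed if this script number was already recorded
  grep -qx "$1" "$PROGRESS_FILE" 2>/dev/null
}

mark_done() {  # record a script number exactly once
  is_done "$1" || echo "$1" >>"$PROGRESS_FILE"
}
```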
```
========================================
 Proxmox GPU Setup - Guided Installer
========================================
Progress: 3 steps completed

=== Basic Host Setup (000-009) ===
  [000]: List all available GPUs and their PCI paths
✓ [001]: Install essential tools (htop, nvtop, etc.)
  [002]: Setup Intel iGPU VRAM allocation
  [003]: Install AMD GPU drivers
✓ [004]: Install NVIDIA GPU drivers
✓ [005]: Verify NVIDIA driver installation
  [006]: Setup udev rules for GPU device permissions
  [007]: Upgrade Proxmox to latest version

=== LXC Container Setup (010-019) ===
  [011]: Create GPU-enabled LXC container (AMD or NVIDIA)

Options:
  all         - Run all Basic Host Setup scripts (with confirmations) [DEFAULT]
  <number>    - Run specific script by number (e.g., 001, 004)
  <start-end> - Run range of scripts (e.g., 001-006)
  reset       - Clear progress tracking
  quit        - Exit installer

Enter your choice [all]:
```
- No File Copying: Scripts mounted directly from host
- Always Up-to-Date: Pull changes with git pull on host; available immediately in all containers
- Easy Execution: Run scripts from host using pct exec or from inside container
- Version Control: All containers use the same script version from git
- Easy Updates: Update scripts once on host, available to all containers
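Because every container sees the same mount, one host-side loop can run an updated script everywhere. A sketch — the function name is made up, and it assumes the standard `pct list` column layout (VMID, Status, Lock, Name):

```shell
# Hypothetical helper: run the guest install script in every running container.
# Parses `pct list`: skip the header row, keep VMIDs whose Status is "running".
run_in_running_cts() {
  pct list | awk 'NR>1 && $2=="running" {print $1}' | while read -r id; do
    pct exec "$id" -- bash /root/proxmox-setup-scripts/lxc/install-docker-and-container-runtime-in-lxc-guest.sh
  done
}
```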
If the GPU isn't detected in the container:

```bash
# Check devices from host:
pct exec <CONTAINER_ID> -- ls -la /dev/nvidia*
pct exec <CONTAINER_ID> -- ls -la /dev/dri/

# Run troubleshooting script:
pct exec <CONTAINER_ID> -- bash /root/proxmox-setup-scripts/lxc/troubleshoot-nvidia-docker.sh
```