- Architecture overview
- Images and tagging
- Prerequisites
- Quick start (building and using consume images)
- Building custom base images
- File structure
- GPU-specific notes
- Development workflow
A two-stage containerized OpenVSCode setup with Python managed by uv, featuring:
- Base images: Streamlined foundations with OpenVSCode + uv + Python + essential packages
- Consume images: Extended configurations adding your specific Python packages, Open VSX extensions, and VS Code settings
- Dual templates: CPU and GPU variants with CUDA + PyTorch support
- Multi-architecture: AMD64 and ARM64 support
- Fully customizable: Via GitHub Actions for base images and local configuration for consume images
## Architecture overview

This project uses a two-stage approach:

- Base images (`base_image/`): Streamlined foundations containing:
  - Ubuntu + OpenVSCode + uv + Python
  - Basic runtime packages
  - Jupyter kernel setup
  - GPU variant includes CUDA runtime + PyTorch stack
- Consume images (`consume_image/`): Extended configurations that:
  - Extend published base images
  - Add your specific Python packages, Open VSX extensions, and VS Code settings
  - Build quickly for rapid iteration
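To make the relationship concrete, here is a minimal sketch of building a consume image locally on top of a published base tag. The `BASE_IMAGE` build argument and the output tag are illustrative assumptions, not this repository's actual interface; check `consume_image/Dockerfile` for the real ARG name:

```sh
# Hypothetical sketch: build the consume image against a published base tag.
# BASE_IMAGE is an assumed build argument for illustration only.
docker build consume_image/ \
  --build-arg BASE_IMAGE=ghcr.io/<owner>/<repo>:amd64-ubuntu24.04-py3.13 \
  -t consume:amd64-ubuntu24.04-py3.13
```

In normal use the Makefile and `docker-compose.yml` wire this up for you (see Quick start).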
## Images and tagging

Base and consume images are published to GHCR as `ghcr.io/<owner>/<repo>:<tag>`.

Tag patterns for base images:

- CPU: `{arch}-ubuntu{version}-py{python_version}`
- GPU: `{arch}-ubuntu{version}-py{python_version}-cu{cuda_version}`

Examples:

```
amd64-ubuntu24.04-py3.13
arm64-ubuntu24.04-py3.13
amd64-ubuntu24.04-py3.13-cu12.9.1
arm64-ubuntu24.04-py3.13-cu12.9.1
```

Note: GHCR paths use a lowercase owner. The workflows handle this automatically.
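For example, pulling a published CPU base image might look like this (substitute your own lowercase owner and repository name):

```sh
# Pull a published base image by its computed tag
docker pull ghcr.io/<owner>/<repo>:amd64-ubuntu24.04-py3.13
```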
## Prerequisites

- Docker and Docker Compose v2
- GNU Make (for the Makefile targets used to run Compose)
- For GPU usage:
  - NVIDIA drivers on the host
  - NVIDIA Container Toolkit
  - Test with:

    ```
    docker run --rm --gpus all nvidia/cuda:12.9.1-base-ubuntu24.04 nvidia-smi
    ```
## Quick start (building and using consume images)

- Edit `consume_image/requirements.txt` for Python packages.
- Edit `consume_image/extensions.txt` for Open VSX extensions (one extension ID per line).
- Edit `consume_image/settings.json` for VS Code settings.
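As an illustration, a populated `extensions.txt` could look like the following; the extension IDs are examples that exist on Open VSX, not a list shipped with this repository:

```sh
# Illustrative only — use your own extension IDs, one per line
cat > consume_image/extensions.txt <<'EOF'
ms-python.python
ms-toolsai.jupyter
EOF
```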
By default, the repository root is mounted to `/workspace/bind`. To mount a different directory, edit the `volumes` section in the root `docker-compose.yml`, keeping exactly one mapping active:

```yaml
volumes:
  - .:/workspace/bind                     # Default: mount repo root
  # - ${HOME}/Documents:/workspace/bind   # Alternative: mount Documents folder
  # - ./my-project:/workspace/bind        # Alternative: mount a specific project
```
Create `.env` at the repository root with your GHCR owner (lowercase):

```
GHCR_OWNER=your-gh-username-or-org
```

Do not add the repo name to `.env`; it is derived locally from the directory name by the Makefile, and in Actions from the GitHub repository.
Why a Makefile? It auto-exports:

- `GHCR_OWNER` from `.env`
- `GHCR_REPO` from the current directory name

This keeps commands short and avoids manual environment setup.
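If you prefer not to use Make, the same two variables can be exported by hand before invoking Compose. This is a minimal sketch of what the Makefile automates, assuming `.env` contains simple `KEY=value` lines:

```sh
# Manual equivalent of the Makefile's auto-exports (a sketch, not the Makefile itself)
export GHCR_OWNER="$(grep -E '^GHCR_OWNER=' .env | cut -d= -f2)"
export GHCR_REPO="$(basename "$PWD")"
```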
From the repository root:

- CPU, AMD64: `make up-amd`
- CPU, ARM64: `make up-arm`
- GPU, AMD64: `make up-amd-gpu`
- GPU, ARM64: `make up-arm-gpu`

Stop/cleanup (mirror the profile you used):

```
make down-amd   # or down-arm / down-amd-gpu / down-arm-gpu
```
The terminal will print a URL that includes an access token:

```
http://localhost:3000/?tkn=<token>
```

Copy the full URL (including the token) into your browser.
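If the URL has scrolled out of view, it can usually be recovered from the container logs; whether this works depends on how the Compose services log, so treat it as a sketch:

```sh
# Recover the tokenized URL from the Compose logs (assumes it was printed there)
docker compose logs | grep -o 'http://localhost:3000/?[^ ]*'
```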
## Building custom base images

Use the GitHub Actions workflows to build and publish custom base images.

Navigate to Actions → “Build & Publish to GHCR (base image)” and configure:

Required inputs:

- `template`
- `platform`
- `ubuntu_version`
- `uv_version`
- `python_version`
- `cuda_version`
- `openvscode_version`

Optional inputs:

- `torch_version`
- `torchvision_version`
- `torchaudio_version`

Tags will be published under `ghcr.io/<owner>/<repo>:<computed-tag>`, where `<owner>` is your GitHub org/user (lowercased automatically) and `<repo>` is this repository’s name.
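The same workflow can also be dispatched from the command line with the GitHub CLI. The input names come from the list above, but every value here is illustrative; the accepted values for `template` and `platform` are assumptions to verify against the workflow file:

```sh
# Dispatch the base-image workflow via the GitHub CLI (values are examples only)
gh workflow run build-base-image.yaml \
  -f template=gpu \
  -f platform=amd64 \
  -f ubuntu_version=24.04 \
  -f uv_version=0.8.0 \
  -f python_version=3.13 \
  -f cuda_version=12.9.1 \
  -f openvscode_version=1.102.1
```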
## File structure

```
├── .github/workflows/           # GitHub Actions for building images
│   ├── build-base-image.yaml    # CI path for base images
│   └── build-consume-image.yml  # CI path for consume images (alternative)
├── base_image/                  # Base image Dockerfiles
│   ├── Dockerfile               # CPU variant
│   └── Dockerfile.gpu           # GPU variant with CUDA + PyTorch
├── consume_image/               # Consume image setup
│   ├── Dockerfile               # Extends base images
│   ├── requirements.txt         # Your Python packages
│   ├── extensions.txt           # Your Open VSX extensions
│   └── settings.json            # Your VS Code settings
├── docker-compose.yml           # Root-level Compose (references consume_image/)
├── Makefile                     # Helper targets (profiles for up/down)
├── .env                         # GHCR owner configuration (GHCR_OWNER only)
├── LICENSE
└── README.md
```
## GPU-specific notes

The GPU Dockerfile:

- Uses the `nvidia/cuda:{CUDA_VERSION}-runtime-ubuntu{UBUNTU_VERSION}` base image
- Derives the PyTorch wheel index from the CUDA version (e.g., `12.9.1` → `cu129`)
- Performs a pre-flight check against PyTorch wheel availability (see the sketch below)
- Installs the PyTorch stack with CUDA support
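A minimal shell sketch of the derivation and pre-flight check, assuming the standard PyTorch wheel index layout; the actual Dockerfile logic may differ:

```sh
# Sketch: derive the PyTorch wheel index tag from a CUDA version (12.9.1 -> cu129)
CUDA_VERSION="12.9.1"
CU_TAG="cu$(echo "${CUDA_VERSION}" | cut -d. -f1-2 | tr -d '.')"

# Pre-flight: fail early if no wheel index exists for this CUDA version
curl -fsSL "https://download.pytorch.org/whl/${CU_TAG}/" > /dev/null || {
  echo "No PyTorch wheel index for ${CU_TAG}" >&2; exit 1; }

# Install the PyTorch stack from the matching CUDA index
uv pip install --index-url "https://download.pytorch.org/whl/${CU_TAG}" \
  torch torchvision torchaudio
```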
## Development workflow

- Build base image(s) using GitHub Actions (“Build & Publish to GHCR (base image)”) to produce tags like:

  ```
  amd64-ubuntu24.04-py3.13
  arm64-ubuntu24.04-py3.13-cu12.9.1
  ```

- Locally build and run the consume image using the Makefile, which extends the matching base image via `docker-compose.yml`: `make up-amd`, `make up-arm`, `make up-amd-gpu`, or `make up-arm-gpu`.
- Alternative: build the consume image using the “Build & Publish to GHCR (consume image)” Action, which takes `base_image_tag` and publishes `<base_image_tag>-<suffix>`.

This approach ensures fast local iteration for consume images while keeping base images consistent and reproducible via Actions.