Everything here will change - a lot. Don't depend on any of it for anything you're doing. Good for demos, that's it.
- Open this repository in GitHub Codespaces or Remote - Containers using VS Code
- File > Open Workspace and select `workspace.code-workspace`
This repo demonstrates the value of devcontainers/spec#18 (and devcontainers/spec#2) by integrating development container metadata into Cloud Native Buildpacks.
Repo contents:
- `devpacks`: Code to create a set of buildpacks (e.g., see `devpacks/internal/buildpacks`). Build using `make`.
- `images`: A Dockerfile and related content to generate a set of stack images.
- `builders`: Config needed to create two builders that include (1) and (2).
The resulting `ghcr.io/chuxel/devpacks/builder-prod-full` builder behaves like a typical buildpack builder, while `ghcr.io/chuxel/devpacks/builder-devcontainer-full` instead focuses on creating a dev container image that is similar to production. These builders can be used with the `pack` CLI or other CNB v3 compliant tools.
Now that label support for the dev container spec has been implemented, the extractor utility that was in this repository is no longer required. The resulting image contains all needed devcontainer.json metadata thanks to a "finalize" Buildpack that adds devcontainer.json content from each Buildpack to the `devcontainer.metadata` label.
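For illustration, the `devcontainer.metadata` label holds a JSON array where each entry is a devcontainer.json subset contributed by one buildpack. A hypothetical value (the extension ID and capability below are examples, not the repo's actual output) might look like:

```json
[
  { "customizations": { "vscode": { "extensions": ["dbaeumer.vscode-eslint"] } } },
  { "capAdd": ["SYS_PTRACE"], "postCreateCommand": "npm install" }
]
```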
Right now it supports basic Node.js apps with a `start` script in `package.json`, basic Python 3 applications that use `pip` (and thus have a `requirements.txt` file), and building Go apps/services. The Go and Python apps need to include a `Procfile` with a `web` entry to specify the startup command.
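For example, a minimal `Procfile` for a Go service might contain just one line (the binary name here is hypothetical):

```
web: ./my-server
```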
This:

```shell
$ pack build devcontainer_image --trust-builder --builder ghcr.io/chuxel/devpacks/builder-devcontainer-full
$ devcontainer-extractor devcontainer_image
```

...will use the contents of the current folder to create an image called `devcontainer_image` and a related `devcontainer.json.merged` file. Removing `.merged` from the filename would allow you to test it in the VS Code Remote - Containers extension.
And this:

```shell
$ pack build devcontainer_image --trust-builder --builder ghcr.io/chuxel/devpacks/builder-prod-full
```

...will generate a production version of the image with the application inside it instead.
Each buildpack in this repository demos something slightly different.
- `nodejs` - Demos installing Node.js, supporting different layering requirements, and adding devcontainer.json metadata.
- `npminstall` - Demos a dual-mode buildpack that executes `npm install` in prod mode, but adds a `postCreateCommand` instead in devcontainer mode. Also "requires" `nodejs`.
- `npmbuild` - Demos an optional, prod-only buildpack.
- `npmstart` - Demos adding a prod-only launch config.
- `cpython` - Demos installing CPython using GitHub Action's python-versions builds and parsing its `versions-manifest.json` file to find the right download. (This model should extend to other Actions "versions" repositories.) Also adds devcontainer.json metadata.
- `pipinstall` - Another dual-mode buildpack like `npminstall`, but for pip3.
- `pythonutils` - Demonstrates a devcontainer-mode-only step to install tools like `pylint` that you would not want in prod mode.
- `goutils` - Demonstrates a devcontainer-mode-only buildpack that can depend on a completely external Paketo buildpack to acquire Go itself, then install tools needed for development. This buildpack also adds all needed devcontainer.json metadata for Go development, including setting the ptrace capability for debugging. The full Go Paketo buildpack set is then used in the prod builder.
- `procfile` - Demos creating a launch command while in production mode from a `Procfile`.
- `finalize` - Demonstrates accumulating devcontainer.json metadata from multiple Buildpacks, placing it in the `devcontainer.metadata` label, cleaning out the source tree, and adding a launch command that prevents the container from terminating by default.
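As a rough model of what `finalize` does with the collected snippets - a sketch only, not the repository's actual Go implementation, and the snippet contents below are hypothetical:

```python
import json

def merge_metadata(snippets):
    # Parse each buildpack's devcontainer.json snippet and collect them into
    # the JSON array that becomes the devcontainer.metadata label value.
    return json.dumps([json.loads(raw) for raw in snippets])

# Hypothetical snippets contributed by two different buildpacks
node_snippet = '{"customizations": {"vscode": {"extensions": ["dbaeumer.vscode-eslint"]}}}'
go_snippet = '{"capAdd": ["SYS_PTRACE"]}'

label = merge_metadata([node_snippet, go_snippet])
print(label)
```

Each buildpack's snippet stays a separate array entry; tools consuming the label are responsible for merging them in order.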
The Buildpacks are written in Go and take advantage of `libcnb` to simplify interop with the buildpack spec. Here's how the different pieces in this repository work together:
- The base images for the `prod` and `devcontainer` builders are constructed using a multi-stage Dockerfile, with later dev container stages adding more base content - they are therefore a superset of the prod images. The main reason for two sets of images is that the devcontainer image includes a number of utilities like htop, ps, zsh, etc. Installing these OS utilities requires root access, which is not allowed today. (However, an upcoming "image extension" capability could help with this long term so that these become part of a buildpack instead.)
- The devcontainer base images also include updates to rc/profile files to handle the fact that Buildpack-injected environment variables are not available to `docker exec` (or other CLIs). By default, only sub-processes of the entrypoint get the environment variables buildpacks add, and interacting with the dev container is typically done using commands like exec. See launcher-hack.sh for details. This is critical to ensuring things work in the dev container context. Here again, this is in the image since buildpacks cannot modify contents outside of their specific layer folders either.
- A "build mode" allows for dual-purpose buildpacks that can either alter behaviors with shared detection logic or simply not be detected when in one mode or the other. For example, a `pythonutils` buildpack that injects tools like `pylint` only executes in devcontainer mode, while others like `nodejs` or `cpython` execute in both modes. For `npminstall`, a `postCreateCommand` is added in devcontainer mode, while the command is actually fired in prod mode. A file placed in a known location in the Dockerfile from step 1 indicates the mode for the build - though you can also set this mode using the `BP_DCNB_BUILD_MODE` environment variable.
- Base buildpacks like `nodejs` and `cpython` are set up so that downstream buildpacks like `npminstall` and `pipinstall` can add requirements that affect whether they are available in the build image, launch image (resulting output), or both through metadata. Setting `build=true` causes the `nodejs` or `cpython` buildpack to place the contents in the build image, while `launch=true` causes it to be in the launch image. The union of all requirements is considered for the final result. As a result, these two buildpacks are set up to always "pass" detection, and instead only "provide" the capability for others to require in the event of a failed detection. This dynamic behavior matters for this use case because it lets a downstream buildpack say something should be in the launch image, but not the build image, in one specific mode without having to alter the original. (Paketo buildpacks use a similar trick so that runtimes can be used for tools in the build image even if they aren't in the output - but have the same benefits. See the `goutils` buildpack for a reuse example.)
- The buildpacks can optionally place a `devcontainer.json` snippet file in their layers and add the path to it to a common `FINALIZE_JSON_SEARCH_PATH` build-time environment variable for the layer. These devcontainer.json files can include tooling settings, runtime settings like adding capabilities (e.g., ptrace or privileged), or even lifecycle commands. They're only added in devcontainer mode.
- A `finalize` buildpack adds all devcontainer.json snippets from the `FINALIZE_JSON_SEARCH_PATH` to an array and adds this as JSON in a `devcontainer.metadata` label on the image. It also sets `userEnvProbe` to `loginInteractiveShell` to ensure that environment variables from the launcher update mentioned above are factored into any tooling processes.
- The `finalize` buildpack also removes the source code since this is expected to be mounted into the container when the image is used. As a result, `finalize` will fail detection in production mode and is last in the ordering in the devcontainer builder. It also overrides the default launch step with one that sleeps infinitely to prevent the container from shutting down (though this last part is technically optional).
- A specific set of these buildpacks is then added to the devcontainer and prod builders, with `finalize` being the last step for the devcontainer one.
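The "provide vs. require" arrangement described above maps to CNB build plan entries. A downstream buildpack's detect phase might emit something like the following (the name and values are illustrative, not copied from this repository):

```toml
# Build plan fragment written by a downstream buildpack during detection
[[requires]]
name = "nodejs"

[requires.metadata]
build = true   # make the runtime available in the build image
launch = true  # also keep it in the resulting (launch) image
```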
That's the scoop!
- Buildpacks cannot install anything that requires root access or modify contents outside of the specified layer folder (which isn't a Docker layer in and of itself). There's an image extension/Dockerfile capability coming in spec 0.9 that could enable this.
- Furthermore, this image extension capability could allow a single "builder" to be used rather than separate ones for devcontainers and production. The primary reason for separate builders today is base image contents, given that you cannot install common utilities without root access or access to folders outside of the layer folder. This would work by using different sets of `[[order.group]]` entries in the builder. Dev container focused sets would then need to include a "mode" buildpack at the start that only passes if the `BP_DCNB_BUILD_MODE` environment variable is set to "devcontainer". The steps in the `common-debian.sh` and `launcher-hack.sh` scripts referenced in the Dockerfile could be contained inside this mode buildpack.
- Given the way Paketo buildpacks are set up, it would be possible to reuse their `cpython` or `nodejs` buildpacks. To do so for Python, the `pythonutils` buildpack in this repository would need to be modified to add all needed devcontainer.json contents, and then add a requirement specifying `build=true` and `launch=true` in the metadata. However, dev container mode would not be able to reuse their npm install or pip install buildpacks. A secondary buildpack would be needed to add devcontainer.json metadata in those cases. The `goutils` buildpack is a simplified example of this model.
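The single-builder idea could be sketched in `builder.toml` with two order groups, one gated by a hypothetical "mode" buildpack (the IDs and versions below are illustrative, not the repository's actual configuration):

```toml
# Production group
[[order]]
  [[order.group]]
    id = "devpacks/nodejs"
    version = "0.0.1"
  [[order.group]]
    id = "devpacks/npminstall"
    version = "0.0.1"

# Dev container group - the mode buildpack only passes detection
# when BP_DCNB_BUILD_MODE is set to "devcontainer"
[[order]]
  [[order.group]]
    id = "devpacks/mode"
    version = "0.0.1"
  [[order.group]]
    id = "devpacks/nodejs"
    version = "0.0.1"
  [[order.group]]
    id = "devpacks/finalize"
    version = "0.0.1"
```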