Conversation

elasticdotventures

Summary

Adds Docker container support that automatically builds and publishes multi-platform container images to GitHub Container Registry (GHCR) for each release.

Changes

  • Dockerfile: Multi-stage build downloading appropriate just binary from GitHub releases
  • GitHub Action: Automatically triggers on releases to build and publish container images
  • README: Added container usage documentation with multistage build example

Key Features

  • Multi-architecture support: linux/amd64, linux/arm64, linux/arm/v7
  • Minimal ~10MB image using scratch base
  • Semantic versioning: 1.42.4, 1.42, 1, latest tags
  • Automatic release integration
  • Architecture-aware binary selection
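For orientation, a minimal sketch of what a Dockerfile like the one described might look like. This is hypothetical, not the PR's exact contents: the release asset naming (`just-<version>-<triple>.tar.gz`) is assumed from just's published artifacts, and the version is pinned only for illustration.

```dockerfile
# syntax=docker/dockerfile:1
# Sketch: multi-stage build that downloads the appropriate `just` binary
# from GitHub releases, then copies it into a minimal `scratch` image.
FROM alpine AS downloader
ARG TARGETARCH
ARG JUST_VERSION=1.42.4
RUN apk add --no-cache curl \
 && case "${TARGETARCH}" in \
      amd64) TRIPLE=x86_64-unknown-linux-musl ;; \
      arm64) TRIPLE=aarch64-unknown-linux-musl ;; \
      *) echo "unsupported arch: ${TARGETARCH}" && exit 1 ;; \
    esac \
 && curl -fsSL "https://github.com/casey/just/releases/download/${JUST_VERSION}/just-${JUST_VERSION}-${TRIPLE}.tar.gz" \
    | tar -xz -C /usr/local/bin just

FROM scratch
COPY --from=downloader /usr/local/bin/just /usr/local/bin/just
ENTRYPOINT ["just"]
CMD ["--help"]
```

The `TARGETARCH` build argument is populated automatically by BuildKit for each platform in a multi-arch build.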

Test Plan

  • Test manual workflow dispatch
  • Verify multi-arch builds work correctly
  • Confirm container runs and executes just --help
  • Test multistage build copying

Fixes #1497

- Add Dockerfile that downloads just binary from GitHub releases
- Add GitHub Action to build and publish multi-platform container images to GHCR
- Container images are built automatically on releases and tagged with semantic versioning
- Support for linux/amd64, linux/arm64, and linux/arm/v7 architectures
- Minimal ~10MB container using scratch base image
- Add container usage documentation to README

Fixes casey#1497

@polarathene polarathene left a comment


Some feedback worth considering (sorry about the verbosity, short on time).

- Use github.repository_owner for more consistent authentication
- Remove ARMv7 platform support (linux/arm/v7) to focus on modern architectures
- Simplify Dockerfile download/extraction using direct tar pipeline
- Improve tagging logic to only update latest/major tags from default branch
@elasticdotventures
Author

Hi @polarathene,

Thank you for the thorough feedback on the container implementation! I've addressed all the points you raised:

Changes made:

  • ✅ Updated authentication to use github.repository_owner instead of github.actor for better consistency
  • ✅ Removed ARMv7 platform support (linux/arm/v7) to focus on modern architectures (amd64/arm64 only)
  • ✅ Simplified the Dockerfile download/extraction logic using the direct tar pipeline approach you suggested
  • ✅ Improved tagging logic to only update latest, major, and minor version tags when building from the default branch
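For reference, the authentication change would look roughly like this in the workflow (a sketch, not the PR's exact config; step name is illustrative):

```yaml
# Sketch: log in to GHCR using the repository owner rather than github.actor,
# so the image is pushed under the repo's namespace regardless of who triggers.
- name: Log in to GHCR
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.repository_owner }}
    password: ${{ secrets.GITHUB_TOKEN }}
```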

The container now follows the best practices you outlined while keeping the implementation clean and maintainable. The ARMv7 removal should help avoid the compatibility issues you mentioned, and the improved tagging prevents inappropriate updates to latest/major tags from older releases.

Thanks again for taking the time to provide such detailed and helpful feedback!

@polarathene polarathene mentioned this pull request Aug 5, 2025

@polarathene polarathene left a comment


Thanks for applying some of the earlier feedback! 😁

I'm a bit unsure why earlier feedback was marked as resolved when the suggestions were dismissed without any context?

Regarding the tag handling, if you're going to add some precautions like that you should probably include flavor.latest=false as an example I shared demonstrates.

While it's probably a hassle, I think this publish workflow should get some confirmation that it actually triggers and succeeds when a separate workflow (such as release.yaml) publishes a release.
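The `flavor.latest=false` precaution mentioned above would look something like this in a docker/metadata-action step (a sketch under assumed image names, not this PR's exact config):

```yaml
# Sketch: disable the implicit `latest` tag that `type=semver` tags would
# otherwise add via the default `flavor.latest=auto` behaviour.
- name: Extract metadata
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/casey/just
    flavor: |
      latest=false
    tags: |
      type=semver,pattern={{version}}
```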

elasticdotventures and others added 4 commits August 7, 2025 17:45
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
- Add flavor.latest=false to prevent accidental latest tag overwrites
- Implement robust semver-based latest tag determination using git ls-remote
- Add protection against major version 0.x getting latest tag
- Include workaround for docker/metadata-action#461

Co-Authored-By: polarathene <polarathene@users.noreply.github.com>
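The "robust semver-based latest tag determination" in the commit above can be sketched roughly like this (hypothetical: variable names are illustrative, and the hard-coded tag list stands in for the `git ls-remote` output used in CI):

```shell
# Sketch: only apply the `latest` tag when the version being built is the
# newest semver tag known to the remote (workaround for metadata-action#461).
VERSION="1.42.4"
# In CI this list would come from something like:
#   git ls-remote --tags --refs origin | sed 's|.*refs/tags/||'
TAGS="1.40.0
1.41.2
1.42.4"
NEWEST=$(printf '%s\n' "$TAGS" | sort -V | tail -n1)
if [ "$VERSION" = "$NEWEST" ]; then
  echo "is-latest=true"
fi
```

`sort -V` performs the version-aware comparison, so `1.9.0` correctly sorts before `1.10.0`.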
@elasticdotventures

This comment was marked as outdated.

@polarathene

polarathene commented Aug 8, 2025

> omg, these comments are huge .. and I flipped to the branch on my lappy, retyped the changes, it was late, force pushed, missed one.

No worries, apologies for the excessive feedback, I rarely have the time to condense it (I should probably learn how to use LLMs to help with that at some point 🤔).

Anyway, here's another round of review, but I've tried to save you the trouble with a checklist in this comment and snippets in the collapsed overview that you can roughly copy/paste to apply.

The individual review comments provide more justification to support the change requests.


Actionables:

  • WORKDIR removal from Dockerfile (preferred), or documentation added to README.md (if verified to have a valid working use-case).
  • Apply one of the change suggestions for semver tag config, the smaller variant is preferred.
  • Switch from on.release to on.workflow_run.
  • Add QEMU setup action to workflow, or modify Dockerfile to build only with native execution (preferred).
  • Remove _b00t_.toml from .gitignore, or provide context to why it was added.
Change request overview (click to view)

You should either remove the WORKDIR instruction, or document it in the README section as requested, for those who would use the published container image to run just directly (EDIT: won't be functional though?), rather than only as a convenient form of packaging for other images to pull from.

UPDATE: Since a shell is required in the image to use just, it seems better to remove WORKDIR and let others extend the Dockerfile to manage that as they see fit (a static shell binary like NuShell could be added + configured).

Another valid documentation example: instead of the COPY instruction, if you don't need just at runtime but only during the image build, it could be temporarily sourced via the published image (as I demonstrated with a hypothetical eget image):

FROM alpine

# Run `just` from the official image with a local `justfile` from your build context without needing a `COPY`:
RUN \
  --mount=type=bind,from=ghcr.io/casey/just:latest,source=/usr/local/bin/just,target=/usr/local/bin/just \
  --mount=type=bind,source=./justfile,target=./justfile \
    just do-it

From scratch image example (could alternatively extend the just image), with NuShell added:

# NOTE: Instead of `scratch`, alternatively extend the `just` image for a simpler `RUN` statement
FROM scratch
COPY --link --from=ghcr.io/nushell/nushell /usr/bin/nu /usr/local/bin/nu
SHELL ["/usr/local/bin/nu", "-c"]

WORKDIR /example
COPY <<HEREDOC /example/justfile
set shell := ["/usr/local/bin/nu", "-c"]

do-it:
  echo 'hello world'
HEREDOC

RUN --mount=type=bind,from=ghcr.io/casey/just:latest,source=/usr/local/bin/just,target=/usr/local/bin/just \
    just do-it

Build it and it'll execute that justfile via RUN successfully:

docker build --progress plain --tag localhost/example .

# ...
echo 'hello world'
hello world

I think there was a bit of a misunderstanding with this feedback where I demonstrate how to be more thorough with semver concerns I've seen in the wild 😅

As just is a much smaller project and is highly unlikely to maintain multiple release branches for backports AFAIK, the project is probably quite safe from those concerns (unlike projects such as Tyk which had several mishaps with their release pipeline and GH releases latest / tagging concerns that affected their image tags incorrectly). just only ever seems to have sequential releases at a glance.

Likewise, as I was providing an existing workflow snippet I have as reference material, it included another condition for major zero-ver releases which is no longer applicable to just, so that is redundant (assuming no images would be built for these old releases via the manual workflow_dispatch trigger).

To get a maintainer to feel more comfortable with approving a PR like this, it would be better to minimize what is being offloaded to them. I'm partly to blame here for not communicating more clearly in the review feedback 😓

I have provided two suggestions for semver tag handling:

  • More resilient (adjusted to this repo) with fixes for the bugs introduced.
  • Simple (no identifying the latest semver tag), should be suitable for this project 👍 (unless the maintainer prefers the added complexity)

Simple variant:

    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
        # NOTE: The `semver` tag type below with the default `flavor.latest=auto` action setting,
        # will implicitly append a `latest` tag to publish:
        # https://github.com/docker/metadata-action/issues/461#issuecomment-2680849083
        tags: |
          type=semver,pattern={{version}},value=${{ steps.version.outputs.version }}
          type=semver,pattern={{major}}.{{minor}},value=${{ steps.version.outputs.version }}
          type=semver,pattern={{major}},value=${{ steps.version.outputs.version }}

Consider swapping the on.release workflow trigger for on.workflow_run instead:

on:
  # Event triggered upon completion of the `release.yaml` workflow
  workflow_run:
    workflows: ['Release']
    types:
      - completed

  # Manual trigger
  workflow_dispatch:
  # ...

# ...

jobs:
  container:
    if: ${{ github.event.workflow_run.conclusion == 'success' || inputs.version }}

I think that'd be more reliable for triggering correctly after a release is published via the release.yaml workflow (tag push event trigger).


Presumably the workflow hasn't been verified to build/publish the ARM64 image? At least at a glance I think you'll need to either:

Add QEMU to your workflow setup to support emulating ARM64 image execution:

    - name: 'Set up QEMU'
      uses: docker/setup-qemu-action@v3
      with:
        platforms: arm64

or modify the Dockerfile to build ARM64 image from an AMD64 base image (since the final stage is scratch with only the platform specific binary):

# syntax=docker/dockerfile:1

# NOTE: ARGs `BUILDPLATFORM` + `TARGETARCH` are implicitly defined by BuildKit (but must be declared after `FROM`):
# https://docs.docker.com/reference/dockerfile/#automatic-platform-args-in-the-global-scope
FROM --platform=${BUILDPLATFORM} alpine AS downloader

elasticdotventures and others added 7 commits August 8, 2025 17:04
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
- Change container.yaml trigger from release.published to workflow_call
- Simplify version extraction to always use inputs.version
- Remove condition on latest-semver step since it's always called with a version
- Add container job to release.yaml that calls container.yaml after checksum
- This addresses @polarathene's concern about release event reliability

Co-Authored-By: polarathene <polarathene@users.noreply.github.com>
@elasticdotventures
Author

elasticdotventures commented Aug 11, 2025

@polarathene
plz review

@polarathene

Collapsed as outdated response

NOTE: Below was a response I was preparing before I started a review, you later pushed changes and I did not wrap up this comment in time prior so I switched to a review.

You can probably ignore this entire comment, but it might be of some value so I've retained it.


Ok... so I'm not entirely sure how you're approaching this PR on your end, but my attempt at a checklist didn't seem to get read (presumably even less likely for the more verbose feedback), and you just applied the first/all suggestions as-is from my review feedback? 🤔

[screenshot: the review page, with an arrow pointing at the collapsed "Change request overview" section]

The arrow points out a collapsed area that would have all the preferred snippets available for an easy copy/paste in your editor.

For example, on the checklist (and mentioned in individual feedback comments) I suggested either QEMU action for the workflow or a Dockerfile change, since only one should have been necessary. You applied both suggestions (with the Dockerfile one clearly under a "For reference" section header, with a much simpler alternative shown.. but hey both work, so I guess that's fine 🤷‍♂️).


If you weren't wanting to manually make changes locally via your editor, you could also do this fully in the browser via the GH web editor (there's a full workspace editor to modify multiple files, or you can use the other web UI to edit per file):

[screenshot: the GitHub web editor]

I'm not sure if you're new-ish to PR review processes, but since you applied each suggestion individually via the web UI: if you click the "Files changed" tab at the top of this PR page, it takes you to the view in the screenshot above, where you can batch multiple suggestions into a single commit.

However, you should really try to grok what each suggestion is about rather than applying it blindly, which is all I can infer given the lack of interaction and how you went about it 😅 (I'm not berating you btw, I'm just a terrible communicator.. there's lots to learn and do as a dev, so I'm assuming these are areas you're inexperienced in, which is totally okay!)


Since concerns were raised about inconsistent workflow config and potential for bugs but no verification was done, I've done so myself.

Regarding on.release event trigger:

  • I can confirm that, with how this repo is set up, pushing a tag to trigger release.yaml to publish a release will not trigger container.yaml via on.release. The bottom of the release actions docs also calls out this caveat.
  • Additionally, if the tag is instead created by publishing a release through the GH UI, the on.release trigger for container.yaml is still not feasible: it would attempt to run before release.yaml completes and uploads the assets that container.yaml wants to retrieve for the Dockerfile build.


@polarathene polarathene left a comment


Hey, sorry for the late review. I had most of this prepared when you requested it but have been caught up elsewhere. I can't quite recall the state of this feedback, but I think I covered most things (might still need some verification 😅).

I'll hopefully have my own tasks elsewhere wrapped up soon so we can try to get this PR into a good state 💪


NOTE: The iterations of this PR perhaps better clarify the concerns I expressed in the associated issue, despite your "bloody easy" expectation and offer to help ease the maintenance burden... this kind of thing can get tricky when it's not our own personal project 😝 (speaking from my own experience: when a maintenance issue does crop up, I often have to get familiar with all this stuff again, since it's an area you ideally don't touch much once it's set up, but it does happen 😓)

Comment on lines +23 to +30
# Use scratch for minimal final image - no OS, just the binary
# This results in a ~10MB image vs ~50MB+ with a full OS
FROM scratch
COPY --from=downloader /usr/local/bin/just /usr/local/bin/just

# Default to running just with help if no arguments provided
ENTRYPOINT ["just"]
CMD ["--help"]


Assuming you agree with the prior Dockerfile comment on CMD + scratch, this will improve usability of the image directly at the expense of going from 5MB to 13MB (less in network overhead due to compression at the image registry), seems like a reasonable tradeoff:

Suggested change:

-# Use scratch for minimal final image - no OS, just the binary
-# This results in a ~10MB image vs ~50MB+ with a full OS
-FROM scratch
-COPY --from=downloader /usr/local/bin/just /usr/local/bin/just
-
-# Default to running just with help if no arguments provided
-ENTRYPOINT ["just"]
-CMD ["--help"]
+FROM alpine
+COPY --from=downloader /usr/local/bin/just /usr/local/bin/just
+ENTRYPOINT ["just"]

Otherwise update the README to inform users about the image not being intended for direct usage, so they don't burden the maintainer with support requests.

        publish_dir: www

  container:
    uses: ./.github/workflows/container.yaml


No action required, just informative for those unaware.


For reference: this will run the container.yaml workflow associated with the commit of this workflow's trigger (the tagged commit), just as release.yaml itself does.

  • For this project in particular and the way this container job is triggered, this isn't a concern since it's unlikely to have a release with an older Dockerfile checkout being used.
  • In other projects where the trigger isn't the latest commit on the default branch being tagged, but some other release series branch, you'd have to keep in mind the risk of divergence (note: there is a similar caveat with workflow_dispatch at container.yaml when selecting to run from a different ref instead of the default branch offered).
  • This can be worth being mindful of when workflow_run is involved, since it instead operates from the latest commit (head ref) at the default branch, regardless of the commit that the workflow it depends on was using (release.yaml in this case).

However if it was desirable to fix container.yaml or the Dockerfile as a CI fix in a commit that landed after the commit to be tagged for release, before that release was triggered via the maintainer pushing the tag... then it'd be useful to ensure this were adjusted.

That said, in other contexts doing so can be problematic when other branches are used such as a PR update for a workflow that is triggered by PRs, or breaking changes affecting other long-lived branches or forks (if neither patched workflows that weren't restricted to the default branch to trigger).

The alternative with a specific git ref would look like this (the owner + repo prefix is mandatory however and expressions cannot be used for the benefit of forks or copy/paste):

  container:
    uses: casey/just/.github/workflows/container.yaml@master

elasticdotventures and others added 2 commits August 21, 2025 21:20
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
elasticdotventures and others added 6 commits August 21, 2025 21:21
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>
Co-authored-by: Brennan Kinney <5098581+polarathene@users.noreply.github.com>

@polarathene polarathene left a comment


LGTM 👍 @casey should be good to take over review from here.


I'd still suggest avoiding the multi-stage to scratch image and just publish with the Alpine base image, it's not much more weight but makes it actually usable (EDIT: actually this would require modifying FROM scratch to FROM alpine due to the first stage avoiding emulation via FROM --platform=${BUILDPLATFORM}, publishing that for a non-native arch would have the wrong base).

If you're really against doing so, then you really should update the README for reasons covered in prior review feedback.


Alternatively the container.yaml workflow could also just handle the retrieval of the release archive instead (if this Dockerfile would only be intended for CI usage), then the Dockerfile would be simplified to something like:

FROM alpine
ARG TARGETARCH
COPY ./releases/${TARGETARCH}/just /usr/local/bin/just
ENTRYPOINT ["just"]

With container.yaml adding an extra step to fetch the two archives and extract to ./releases/{amd64,arm64}/just.

The current Dockerfile is more convenient though as-is.
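That extra fetch step might look something like this (hypothetical sketch: the release URL pattern is an assumption based on just's published artifacts, and `inputs.version` is assumed to carry the release tag):

```yaml
# Sketch: fetch the release archives for both platforms and extract each
# `just` binary into ./releases/<arch>/ for the simplified Dockerfile's COPY.
- name: Fetch release binaries for each platform
  env:
    VERSION: ${{ inputs.version }}
  run: |
    mkdir -p releases/amd64 releases/arm64
    base="https://github.com/casey/just/releases/download/${VERSION}"
    curl -fsSL "${base}/just-${VERSION}-x86_64-unknown-linux-musl.tar.gz" \
      | tar -xz -C releases/amd64 just
    curl -fsSL "${base}/just-${VERSION}-aarch64-unknown-linux-musl.tar.gz" \
      | tar -xz -C releases/arm64 just
```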

Comment on lines +23 to +30
# Use scratch for minimal final image - no OS, just the binary
# This results in a ~10MB image vs ~50MB+ with a full OS
FROM scratch
COPY --from=downloader /usr/local/bin/just /usr/local/bin/just

# Default to running just with help if no arguments provided
ENTRYPOINT ["just"]
CMD ["--help"]

@polarathene polarathene Aug 29, 2025


As previously detailed in feedback.


5MB => 13MB increase, but the image can be used directly.

Suggested change:

-# Use scratch for minimal final image - no OS, just the binary
-# This results in a ~10MB image vs ~50MB+ with a full OS
-FROM scratch
-COPY --from=downloader /usr/local/bin/just /usr/local/bin/just
-
-# Default to running just with help if no arguments provided
-ENTRYPOINT ["just"]
-CMD ["--help"]
+FROM alpine
+COPY --from=downloader /usr/local/bin/just /usr/local/bin/just
+ENTRYPOINT ["just"]

No need to update README to mention the lack of a shell if adopting this change.

Successfully merging this pull request may close these issues: Container image.