
Conversation

@silverham

No description provided.

@tobybellwood

Technically here you don't need to rename the base image in the Dockerfile, as you're already defining it as a target

- FROM ${CLI_IMAGE} as cli
+ FROM cli_base_image as cli

You should then be able to reference the target as the additional_context in the docker-compose.yml

      additional_contexts:
        cli: "service:cli"

We do this upstream at lagoon-examples/drupal-base#175
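For reference, a minimal sketch of how this can look in the compose file (service names and Dockerfile paths here are illustrative, not the actual scaffold layout):

    services:
      cli:
        build:
          context: .
          dockerfile: .docker/Dockerfile.cli
      nginx:
        build:
          context: .
          dockerfile: .docker/Dockerfile.nginx
          additional_contexts:
            cli: "service:cli"
      php:
        build:
          context: .
          dockerfile: .docker/Dockerfile.php
          additional_contexts:
            cli: "service:cli"

With this in place, Compose/BuildKit treats the nginx and php builds as depending on the cli service's build, so cli is built first.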

…the name is not technically used directly as a reference, as the solution is simpler.
@tobybellwood

FWIW I did manage to get the additional_contexts lines working locally in a docker-compose.override.yml (on non govcms projects) - so no scaffold update needed - but also had reports that it didn't work via ahoy
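For anyone trying this, a sketch of what that override could contain (assuming the base compose file already defines the build sections for these services):

    services:
      nginx:
        build:
          additional_contexts:
            cli: "service:cli"
      # and the same additional_contexts entry under the php service

Compose merges this with docker-compose.yml, so only the extra keys need to live in the override file.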

@silverham (Author)

Good point. I thought the reference key had to be used, but actually, Docker doesn't check whether it is used in the Dockerfile. So we can hijack this to ensure cli is built first, so that its image name ${CLI_IMAGE} is available to be reused as a side effect of it already being built, instead of using the context's build image name directly.
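To illustrate (a sketch; the variable name is taken from the diff above): the Dockerfile can keep pulling from ${CLI_IMAGE}, and the named context never has to be referenced in it.

    # nginx/php Dockerfile (sketch)
    # The named context "cli" declared under additional_contexts is not
    # referenced here; declaring it only makes Compose build the cli
    # service before this image, so ${CLI_IMAGE} already exists locally.
    ARG CLI_IMAGE
    FROM ${CLI_IMAGE} as cli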

I have updated the PR.

@silverham (Author)

so no scaffold update needed

Not sure what you mean, isn't docker-compose.override.yml part of the scaffolding? Yes, we can have overridden values that aren't pushed upstream, but all projects need this value: otherwise it breaks on a new install, and a future update results in different versions between containers, because nginx/php are built from the old cli image while the new cli image is still being rebuilt. So an update to the scaffold (this git repo) is needed?

but also had reports that it didn't work via ahoy

I used this change on my local computer while testing on Ubuntu 24.04, ahoy v2.5.0, Docker 28.3.3, and it works, so it should be okay? Not sure if ahoy on macOS behaves differently?

To replicate:

  1. Have a computer with no GovCMS images at all (or no Docker images).
  2. Run ahoy up on a GovCMS project.
  3. See that it fails to build the php or nginx container.

You can revert to a clean state with ahoy down and docker image prune -a (assuming no other project is running), then start again.
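In other words (assuming no other local projects depend on these images):

    ahoy down
    docker image prune -a   # removes all unused images, not just the GovCMS ones
    ahoy up                 # on a clean machine this fails to build the php or nginx container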

Docker version details (output of docker version):

Client: Docker Engine - Community
 Version:           28.3.3
 API version:       1.51
 Go version:        go1.24.5
 Git commit:        980b856
 Built:             Fri Jul 25 11:34:09 2025
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          28.3.3
  API version:      1.51 (minimum version 1.24)
  Go version:       go1.24.5
  Git commit:       bea959c
  Built:            Fri Jul 25 11:34:09 2025
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.27
  GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
 runc:
  Version:          1.2.5
  GitCommit:        v1.2.5-0-g59923ef
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

I note that [additional_contexts](https://docs.docker.com/reference/compose-file/build/#additional_contexts) requires Docker Compose 2.17.0 or later, so perhaps users are running old versions of Docker?

If this is the case, maybe we should add a warning to ahoy when it is used with an old Docker version.
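Something like this (a hypothetical sketch of such a check, not an existing ahoy command) could run before ahoy up / ahoy build:

    # Warn if Docker Compose is too old for additional_contexts (needs >= 2.17.0).
    required="2.17.0"
    current="$(docker compose version --short 2>/dev/null)"
    current="${current#v}"
    if [ -z "$current" ] || [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" != "$required" ]; then
      echo "WARNING: Docker Compose >= $required is required for additional_contexts (found: ${current:-none})"
    fi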

@jackwrfuller

FWIW I did manage to get the additional_contexts lines working locally in a docker-compose.override.yml (on non govcms projects) - so no scaffold update needed - but also had reports that it didn't work via ahoy

@tobybellwood are you able to give any more detail on these reports? Are there any sites you are aware of that I could try replicate the reported behaviour?

@tobybellwood

FWIW I did manage to get the additional_contexts lines working locally in a docker-compose.override.yml (on non govcms projects) - so no scaffold update needed - but also had reports that it didn't work via ahoy

@tobybellwood are you able to give any more detail on these reports? Are there any sites you are aware of that I could try replicate the reported behaviour?

I never experienced it on my demo projects via ahoy, but others did - I tagged you in an internal chat that may have more answers?

@jackwrfuller

Which chat is this, sorry?

@silverham (Author) commented Oct 16, 2025

I would like to mention / reiterate that the crashing error only happens when the CLI / nginx / php images are not already built.
Usually the actual issue is hidden because the containers are built in parallel, so the new nginx / php containers are built from the old CLI image while the new cli image is still being built. Hence, if you upgrade software in the container, you get different build versions between them (an actual ongoing problem).

I made a script to reproduce this (it cleans all nginx/php/cli images and dependent images, so you can just run ahoy build to see the issue). Note: it uses bash and jq.

#!/bin/bash

SHARED_IMAGES=()
SHARED_IMAGES+=("govcms/govcms:10.x-latest")
# Remove these too as by the time they are downloaded, the CLI is already built.
SHARED_IMAGES+=("govcms/nginx-drupal:10.x-latest")
SHARED_IMAGES+=("govcms/php:10.x-latest")

for SHARED_IMAGE in "${SHARED_IMAGES[@]}"; do
  printf "\n\n##### Starting $SHARED_IMAGE #####\n\n"

  # Pull the original image so we can get its top layer.
  docker pull "$SHARED_IMAGE"

  SHARED_TOP_LAYER=$(docker inspect "$SHARED_IMAGE" | jq -r '.[0].RootFS.Layers[-1]?')

  # Find all images that include this top layer.
  DEPENDENTS=()
  for ALL_IMAGES_ID in $(docker images -q | sort | uniq); do
    MATCH=$(docker inspect "$ALL_IMAGES_ID" | jq -r '.[0].RootFS.Layers[]?' 2>/dev/null | grep "$SHARED_TOP_LAYER")
    if [ -n "$MATCH" ]; then
      TAG=$(docker inspect "$ALL_IMAGES_ID" --format '{{index .RepoTags 0}}' 2>/dev/null)
      if [ -n "$TAG" ]; then
        DEPENDENTS+=("$TAG")
      fi
    fi
  done

  # Show original and dependents
  printf "\n####### Show dependents ########"
  for DEPENDENT_IMAGE_NAME in "${DEPENDENTS[@]}"; do
    printf "\n$DEPENDENT_IMAGE_NAME\n"
  done

  # WARNING: Actually removes them
  printf "\n####### Removing original and dependents ########"
  for DEPENDENT_IMAGE_NAME in "${DEPENDENTS[@]}"; do
    docker image rm -f "$DEPENDENT_IMAGE_NAME"
  done

done

# Remove any associated build cache so all steps are built fresh.
docker builder prune -a
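To use it (the filename is just an example):

    chmod +x reset-govcms-images.sh
    ./reset-govcms-images.sh
    ahoy build   # should now reproduce the issue described above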
