HYPERFLEET-488 | test: Update the dockerfile to meet Prow job usage #14
Conversation
Walkthrough

The PR modifies the Docker build configuration and two e2e test files. The Dockerfile switches the base image to Red Hat UBI 9, restructures the builder stage with updated build dependencies and working directories, adds a kubectl binary download, updates the runtime stage configuration with additional package installations, and reorganizes binary and test data paths. Additionally, e2e/cluster/creation.go removes an import and disables cluster verification logic via a commented TODO block, while e2e/nodepool/creation.go changes the test suite label from Tier0 to Tier1.

Sequence Diagram(s)

Sequence diagrams are not applicable to these changes, as they consist primarily of infrastructure configuration updates and test labeling modifications without multi-component sequential flows.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Line 44: The Dockerfile currently copies the built binary to
/usr/local/bin/hyperfleet-e2e but the ENTRYPOINT still points to
/app/hyperfleet-e2e; update the ENTRYPOINT to reference
/usr/local/bin/hyperfleet-e2e (or alternatively change the COPY destination to
/app/hyperfleet-e2e) so the runtime path matches the copied binary and the
container can execute hyperfleet-e2e. A sketch of the first option follows this list.
- Around line 34-39: The Dockerfile temporarily elevates to root for package
installation via the "USER root" directive but never switches back. After the
RUN dnf -y install ... && dnf clean all step, add a "USER 1001" (or the image's
non-root UID) to restore the non-root runtime; if any files were added or copied
while root, set their ownership/permissions (chown/chmod) before switching back
so the non-root user can access them. A sketch of this pattern follows this list.
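A minimal sketch of the first ENTRYPOINT fix option. Only the /usr/local/bin/hyperfleet-e2e path comes from the comment above; the stage name, base images, and build-output path are illustrative assumptions:

```dockerfile
# Builder stage (details elided in this sketch; image and path are assumptions).
FROM registry.access.redhat.com/ubi9/go-toolset:latest AS builder
# ... build steps producing /build/hyperfleet-e2e ...

# Runtime stage: the COPY destination and the ENTRYPOINT must agree.
FROM registry.access.redhat.com/ubi9/ubi-minimal:latest
COPY --from=builder /build/hyperfleet-e2e /usr/local/bin/hyperfleet-e2e

# Point the ENTRYPOINT at the same path the binary was copied to.
ENTRYPOINT ["/usr/local/bin/hyperfleet-e2e"]
```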
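And a hedged sketch of the root/non-root switch. USER 1001 follows the comment's suggestion; the package list and the /app path are placeholders, not the PR's actual values:

```dockerfile
# Elevate only for package installation.
USER root
RUN dnf -y install jq tar && dnf clean all

# Anything written as root must be readable by the runtime UID
# (the path is illustrative; chmod g=u is a common OpenShift-friendly idiom).
RUN chown -R 1001:0 /app && chmod -R g=u /app

# Drop back to the non-root runtime user.
USER 1001
```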
🧹 Nitpick comments (1)
Dockerfile (1)
26-29: Consider pinning kubectl version for reproducible builds.

Dynamically fetching the "stable" kubectl version means different builds may produce different images. For CI/Prow jobs, reproducibility is valuable. Additionally, no checksum verification is performed on the downloaded binary.
♻️ Suggested improvement with version pinning and checksum verification
```diff
-# Download kubectl (stable version)
-RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && \
+# Download kubectl (pinned version for reproducibility)
+ARG KUBECTL_VERSION=v1.31.0
+RUN curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" && \
+    curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl.sha256" && \
+    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check && \
     chmod +x kubectl && \
     mv kubectl /usr/local/bin/kubectl
```
Force-pushed from 52ca4e1 to 247d0a4
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Around line 17-20: The Dockerfile downloads the latest kubectl binary without
pinning a version or verifying integrity; change the build to accept or set a
pinned KUBECTL_VERSION (e.g., via ARG KUBECTL_VERSION) and a corresponding
expected checksum (ARG KUBECTL_SHA256 or fetch the release’s sha256 file),
download the kubectl binary for that pinned version and the checksum file, then
verify the binary with sha256sum -c (or compare the computed sha256) and fail
the build on mismatch; update the kubectl download step (the RUN that creates
/build/bin/kubectl) to perform these actions and mark the binary executable only
after verification.
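A sketch of the pinned-and-verified download in the builder stage. The /build/bin/kubectl target comes from the comment above; the default version value is an assumption to be replaced with whatever the project pins:

```dockerfile
# Pin kubectl; override with --build-arg KUBECTL_VERSION=... as needed.
ARG KUBECTL_VERSION=v1.31.0

# Fetch the binary and its published sha256, and verify before installing.
# sha256sum --check exits non-zero on mismatch and fails the build, so
# chmod/mv only run after successful verification.
RUN curl -fsSLo kubectl "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" && \
    curl -fsSLo kubectl.sha256 "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl.sha256" && \
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check && \
    chmod +x kubectl && \
    mkdir -p /build/bin && \
    mv kubectl /build/bin/kubectl
```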
In `@e2e/cluster/creation.go`:
- Around line 38-73: The Tier0 test currently has the readiness and adapter
health assertions commented out (the block containing h.WaitForClusterPhase,
h.Client.GetClusterStatuses, h.HasCondition and h.Client.GetCluster), which
makes the test pass even if cluster creation never completes; either restore
these checks (re-enable the WaitForClusterPhase call, the GetClusterStatuses
assertions using HasCondition, and final GetCluster status/Phase assertions) or
explicitly mark the test as skipped until adapter verification is implemented
(add a ginkgo.Skip with a short note about adapter scope being pending at the
top of this test) so the intent is clear; update the test to reference the same
helpers (h.WaitForClusterPhase, h.Client.GetClusterStatuses, h.HasCondition,
h.Client.GetCluster) when re-enabling.
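For the explicit-skip option, a minimal Go sketch: ginkgo.Skip is real Ginkgo v2 API, but the Describe/It wording is illustrative, and the commented-out helper calls simply echo the helper names from the comment above:

```go
package cluster

import "github.com/onsi/ginkgo/v2"

var _ = ginkgo.Describe("Cluster creation", func() {
	ginkgo.It("creates a cluster and reaches Ready [Tier0]", func() {
		// Make the pending adapter work visible instead of silently passing.
		ginkgo.Skip("adapter-scope verification pending: re-enable " +
			"h.WaitForClusterPhase, the h.Client.GetClusterStatuses/HasCondition " +
			"assertions, and the final h.Client.GetCluster phase check")

		// ... cluster creation request ...
		// h.WaitForClusterPhase(...)       // readiness gate (currently disabled)
		// h.Client.GetClusterStatuses(...) // adapter condition checks (disabled)
		// h.Client.GetCluster(...)         // final phase assertion (disabled)
	})
})
```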
Force-pushed from 247d0a4 to 229a3ab
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: yasun1

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Merged commit 855d197 into openshift-hyperfleet:main
This is the test result from the Prow Hyperfleet-e2e CI job.
Summary by CodeRabbit

Chores
- Updated the Docker build configuration: Red Hat UBI 9 base image, restructured builder stage, kubectl download, and reorganized binary and test data paths.

Tests
- Disabled cluster verification in the cluster creation e2e test via a commented TODO block; relabeled the node pool creation suite from Tier0 to Tier1.