
Add minikube build in prow #22051

Merged

Conversation

@azhao155 (Contributor) commented on May 2, 2021:

Tested with

Build job

zyanshu@zyanshu-ubuntu:~/test-infra$ bazel run //prow/cmd/mkpj -- --config-path=/home/zyanshu/test-infra/config/prow/config.yaml --job-config-path=/home/zyanshu/test-infra/config/jobs/kubernetes/minikube/minikube.yaml --job=pull-minikube-build > /tmp/foo
Starting local Bazel server and connecting to it...
Loading:
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
    currently loading: prow/cmd/mkpj
Analyzing: target //prow/cmd/mkpj:mkpj (1 packages loaded, 0 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (9 packages loaded, 6 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (13 packages loaded, 6 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (25 packages loaded, 5425 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (36 packages loaded, 7036 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (38 packages loaded, 7142 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (111 packages loaded, 7613 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (196 packages loaded, 7770 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (305 packages loaded, 8605 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (502 packages loaded, 10505 targets configured)
Analyzing: target //prow/cmd/mkpj:mkpj (614 packages loaded, 11319 targets configured)
INFO: Analyzed target //prow/cmd/mkpj:mkpj (755 packages loaded, 12075 targets configured).
INFO: Found 1 target...
[2 / 4] [Prepa] BazelWorkspaceStatusAction stable-status.txt
[793 / 1,156] checking cached actions
Target //prow/cmd/mkpj:mkpj up-to-date:
  bazel-bin/prow/cmd/mkpj/mkpj_/mkpj
INFO: Elapsed time: 60.595s, Critical Path: 6.23s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/prow/cmd/mkpj/mkpj_/mkpj '--config-path=/home/zyanshu/test-infra/config/prow/config.yaml' '--job-config-path=/home/zyanshu/test-infra/config/jobs/kubernetes/minikube/minikube.yaml' '--job=pull-minikube-build'
INFO: Build completed successfully, 1 total action
WARN[0000] empty -github-token-path, will use anonymous github client
PR Number: 11176
INFO[0007] GetPullRequest(kubernetes, minikube, 11176)   client=github
DEBU[0008] GetPullRequest(kubernetes, minikube, 11176) finished  client=github duration=507.615796ms
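
For reference, the ProwJob that mkpj writes to /tmp/foo is a small YAML resource roughly of the following shape. This is only a sketch reconstructed from the fields visible in the log above and in the phaino output below; anything not visible there (metadata, agent, decoration settings, base ref, and so on) is an assumption and is omitted or marked as such.

apiVersion: prow.k8s.io/v1
kind: ProwJob
spec:
  type: presubmit
  job: pull-minikube-build
  refs:
    org: kubernetes
    repo: minikube
    pulls:
    - number: 11176                 # the PR number entered at the prompt above
  pod_spec:
    containers:
    - image: docker.io/azhao155/prow-test:1.7
      command:
      - wrapper.sh                  # becomes the --entrypoint in the phaino docker run below
      args:
      - bash
      - -c
      - "make && ./out/minikube start --force && kubectl get pods -A"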

Run

zyanshu@zyanshu-ubuntu:~/test-infra$ bazel run //prow/cmd/phaino -- /tmp/foo --privileged
INFO: Analyzed target //prow/cmd/phaino:phaino (1 packages loaded, 5 targets configured).
INFO: Found 1 target...
Target //prow/cmd/phaino:phaino up-to-date:
  bazel-bin/prow/cmd/phaino/phaino_/phaino
INFO: Elapsed time: 1.553s, Critical Path: 0.70s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
INFO[0000] Reading...                                    path=/tmp/foo
INFO[0000] Converting job into docker run command...     job=pull-minikube-build
WARN[0000] WARNING: running privileged job "pull-minikube-build" can allow nearly all access to the host, please be careful with it
fallback to GOPATH: /home/zyanshu/go
: "docker" "run" "--rm=true" \
 "--name=phaino-593522-1" \
 "--entrypoint=wrapper.sh" \
 "--privileged" \
 "-w" \
 "/go/src/k8s.io/minikube" \
 "-v" \
 "/home/zyanshu/minikube:/go/src/k8s.io/minikube" \
 "-v" \
 ":/docker-graph" \
 "-v" \
 ":/var/lib/docker" \
 "-e" \
 "GOPROXY:https://proxy.golang.org" \
 "-e" \
 "DOCKER_IN_DOCKER_ENABLED:true" \
 "--label=prow.k8s.io/type=presubmit" \
 "--label=created-by-prow=true" \
 "--label=preset-dind-enabled=true" \
 "--label=prow.k8s.io/job=pull-minikube-build" \
 "--label=prow.k8s.io/refs.org=kubernetes" \
 "--label=prow.k8s.io/refs.pull=11176" \
 "--label=prow.k8s.io/refs.repo=minikube" \
 "--label=phaino=true" \
 "docker.io/azhao155/prow-test:1.7" \
 "bash" \
 "-c" \
 "make && ./out/minikube start --force && kubectl get pods -A"
INFO[0000] Starting job...                               job=pull-minikube-build
INFO[0000] Waiting for job to finish...                  container=phaino-593522-1 job=pull-minikube-build
wrapper.sh] [INFO] Wrapping Test Command: `bash -c make && ./out/minikube start --force && kubectl get pods -A`
================================================================================
wrapper.sh] [SETUP] Performing pre-test setup ...
wrapper.sh] [SETUP] Docker in Docker enabled, initializing ...
Starting Docker: docker.
wrapper.sh] [SETUP] Waiting for Docker to be ready, sleeping for 1 seconds ...
wrapper.sh] [SETUP] Done setting up Docker in Docker.
================================================================================
wrapper.sh] [TEST] Running Test Command: `bash -c make && ./out/minikube start --force && kubectl get pods -A` ...
...
* Creating docker container (CPUs=2, Memory=2200MB) ...
* Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   etcd-minikube                      0/1     Pending   0          2s
kube-system   kube-apiserver-minikube            0/1     Running   0          2s
kube-system   kube-controller-manager-minikube   0/1     Running   0          2s
kube-system   kube-scheduler-minikube            0/1     Running   0          2s
kube-system   storage-provisioner                0/1     Pending   0          1s
wrapper.sh] [TEST] Test Command exit code: 0
wrapper.sh] [CLEANUP] Cleaning up after Docker in Docker ...
66ceba696a64
Stopping Docker: docker.
wrapper.sh] [CLEANUP] Done cleaning up after Docker in Docker.
================================================================================
wrapper.sh] Exiting 0
INFO[0450] PASS: deccc97c-ab70-11eb-83eb-42010a800002    duration=7m30.636192668s job=pull-minikube-build
INFO[0450] SUCCESS
INFO[0450] Press Ctrl + c to exit.
^CINFO[0533] Received signal.                              signal=interrupt
INFO[0533] Interrupt received.
INFO[0533] All workers gracefully terminated, exiting.
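
For context, the presubmit this PR adds to config/jobs/kubernetes/minikube/minikube.yaml has roughly the following shape, reconstructed from the docker run command phaino prints above. This is a hedged sketch, not the exact merged file: the trigger and decoration fields are assumptions, and the image shown here is the one that gets replaced later in the review.

presubmits:
  kubernetes/minikube:
  - name: pull-minikube-build
    labels:
      preset-dind-enabled: "true"   # preset that injects the DOCKER_IN_DOCKER_ENABLED env var seen above
    decorate: true                  # assumption
    always_run: true                # assumption; see the question near the end of the thread
    spec:
      containers:
      - image: docker.io/azhao155/prow-test:1.7
        command:
        - wrapper.sh
        args:
        - bash
        - -c
        - "make && ./out/minikube start --force && kubectl get pods -A"
        securityContext:
          privileged: true          # required for Docker in Docker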

@k8s-ci-robot added labels on May 2, 2021: cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA), area/config (Issues or PRs related to code in /config), area/jobs, sig/testing (Categorizes an issue or PR as relevant to SIG Testing), size/S (Denotes a PR that changes 10-29 lines, ignoring generated files).
@medyagh (Member) left a comment:

Even though I am not an approver here, as a minikube maintainer I support this PR! Thank you @azhao155 for this effort.

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on May 4, 2021.
@spiffxp (Member) left a comment:

@medyagh @azhao155 can you please also add config/jobs/kubernetes/minikube/OWNERS to this PR so that minikube maintainers can approve future job changes that are relevant to their repo?

I don't have a strong preference about who's in it, but a good start might be https://github.com/kubernetes/minikube/blob/c367472f43110f2bef4a1220092e7ccc9dc247e0/OWNERS#L13-L18 and/or sig-cluster-lifecycle-leads, since minikube is a sig-cluster-lifecycle subproject (cc @neolit123).
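
For illustration, such an OWNERS file is a small YAML file along the following lines; the handles below are placeholders drawn from the participants in this thread, not necessarily the list that was actually merged.

# config/jobs/kubernetes/minikube/OWNERS (illustrative sketch only)
approvers:
- medyagh        # placeholder entry
- azhao155       # placeholder entry
reviewers:
- medyagh        # placeholder entry
- azhao155       # placeholder entry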

  preset-dind-enabled: "true"
spec:
  containers:
  - image: docker.io/azhao155/prow-test:1.7
@spiffxp (Member) commented:

Please don't use a dockerhub-hosted image, or an image from a personal project/repo. Whatever was used to produce this image needs to be open-source and reproducible by others.

What needs to be in this image, and does one of the existing images in gcr.io/k8s-testimages satisfy your needs? If not, what is missing?

@medyagh (Member) replied:

That's a good idea! Thanks @spiffxp, I will give @azhao155 access to push the image to the k8s-minikube project.

@azhao155 (Contributor, Author) replied:

Updated the image and added the OWNERS file.

@k8s-ci-robot removed the lgtm label on May 4, 2021.
@k8s-ci-robot added the size/M label (Denotes a PR that changes 30-99 lines, ignoring generated files) and removed size/S on May 4, 2021.
@spiffxp (Member) left a comment:

OWNERS LGTM, but I still have some open questions.

  preset-dind-enabled: "true"
spec:
  containers:
  - image: gcr.io/k8s-minikube/prow-test:v0.0.1
@spiffxp (Member) commented:

Where is the source for this image? How can we independently reproduce this image? Let alone ensure there's not a Bitcoin miner tucked inside (not suggesting you would do this, just giving a colorful example). I consider this a blocker.

And again, what does this image have that is missing from the existing set of images available at gcr.io/k8s-testimages? Could we use one of those instead? I don't consider this a blocker.

@azhao155 (Contributor, Author) replied:

  1. This is the image: https://github.com/kubernetes/minikube/tree/master/deploy/prow. If you want to build it, just download the minikube git repo and run make push-prow-test-image.
  2. I haven't looked at the images in gcr.io/k8s-testimages; there might be one I could use. The reason we keep our own image for minikube is that we want everything managed on our own instead of depending on other images.

@spiffxp (Member) replied:

SGTM, thanks for the answers.

it might be worth considering an image-pushing job and moving the image to community-hosted infrastructure at some point (ref: https://github.com/kubernetes/test-infra/tree/master/config/jobs/image-pushing#image-pushing-jobs)
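
For reference, the image-pushing jobs linked above are postsubmits that run Google Cloud Build against a cloudbuild.yaml in the repository whenever the image source changes. A rough sketch modeled on those existing jobs follows; the job name, staging project, scratch bucket, builder image tag, and run_if_changed path are all assumptions, not values from this PR.

postsubmits:
  kubernetes/minikube:
  - name: post-minikube-push-prow-test-image        # hypothetical job name
    cluster: k8s-infra-prow-build-trusted           # assumption
    decorate: true
    run_if_changed: '^deploy/prow/'                 # assumption: rebuild when the image source changes
    branches:
    - ^master$
    spec:
      serviceAccountName: gcb-builder
      containers:
      - image: gcr.io/k8s-staging-test-infra/image-builder:latest   # placeholder tag
        command:
        - /run.sh
        args:
        - --project=k8s-staging-minikube            # hypothetical staging project
        - --scratch-bucket=gs://k8s-staging-minikube-gcb
        - --env-passthrough=PULL_BASE_REF
        - deploy/prow                               # directory containing the cloudbuild.yaml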

@azhao155 (Contributor, Author) replied:

Sure, I will work on that in the next PR. Thanks for the info!

@azhao155 (Contributor, Author) asked:

Now that it's merged, will the Prow job run on every minikube PR, or do I need to configure something to make it run on every minikube PR? Thanks!
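
For context, whether a presubmit runs automatically on every PR is controlled by fields on the job definition itself (assuming Prow's trigger plugin is already enabled for the repo). A minimal sketch of the relevant fields, with illustrative values rather than what was actually merged:

presubmits:
  kubernetes/minikube:
  - name: pull-minikube-build
    always_run: true            # run on every PR without requiring a /test comment
    # Alternatives (illustrative):
    # run_if_changed: '...'     # only run when matching files change
    # optional: true            # report status but do not block merge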

@spiffxp (Member) left a comment:

/approve
/lgtm

@k8s-ci-robot added the lgtm label on May 6, 2021.
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: azhao155, medyagh, spiffxp

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on May 6, 2021.
@k8s-ci-robot merged commit 118e2fe into kubernetes:master on May 6, 2021.
@k8s-ci-robot added this to the v1.22 milestone on May 6, 2021.
@k8s-ci-robot (Contributor) commented:

@azhao155: Updated the job-config configmap in namespace default at cluster test-infra-trusted using the following files:

  • key minikube.yaml using file config/jobs/kubernetes/minikube/minikube.yaml

In response to this: the PR description quoted at the top of this thread.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • approved (Indicates a PR has been approved by an approver from all required OWNERS files)
  • area/config (Issues or PRs related to code in /config)
  • area/jobs
  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA)
  • lgtm ("Looks good to me", indicates that a PR is ready to be merged)
  • sig/testing (Categorizes an issue or PR as relevant to SIG Testing)
  • size/M (Denotes a PR that changes 30-99 lines, ignoring generated files)