
Schedule ingress-dns pod on the minikube primary node in multi-node cluster #17649

Open
wants to merge 2 commits into base: master

Conversation

@fbyrne (Contributor) commented Nov 19, 2023

fixes #17648

  • Add a nodeSelector for the primary minikube node to the ingress-dns pod template.
  • Add a toleration for the Kubernetes master role (see the sketch below).
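
For reference, a minimal sketch of what the pod template ends up with, reconstructed from the Node-Selectors and Tolerations shown in the "After" describe output later in this thread (the field layout is illustrative, not the literal diff of this PR):

    spec:
      nodeSelector:
        # minikube labels its primary node; this keeps the pod off worker nodes
        minikube.k8s.io/primary: "true"
      tolerations:
        # allow scheduling on the control-plane (master) node
        - key: node-role.kubernetes.io/master
          effect: NoSchedule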

linux-foundation-easycla bot commented Nov 19, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Nov 19, 2023
@k8s-ci-robot (Contributor)

Welcome @fbyrne!

It looks like this is your first PR to kubernetes/minikube 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/minikube has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Nov 19, 2023
@k8s-ci-robot (Contributor)

Hi @fbyrne. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Nov 19, 2023
@minikube-bot (Collaborator)

Can one of the admins verify this patch?

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Nov 19, 2023
@kundan2707 (Contributor) left a comment

This fix will work and schedule the ingress-dns pod on the primary node.

@kundan2707 (Contributor)

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 28, 2023
@medyagh (Member) commented Nov 29, 2023

/ok-to-test

@medyagh (Member) commented Nov 29, 2023

@fbyrne do you mind pasting the output of minikube using this addon, before and after this PR?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 28, 2024
@hansingt commented Mar 5, 2024

Any ideas why the tests are failing? This issue is currently preventing me from setting up a multi-node cluster, but I'm not familiar enough with the internals of minikube to fix the tests on my own.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 4, 2024
@kundan2707 (Contributor)

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 12, 2024
@fbyrne (Contributor, Author) commented Apr 16, 2024

@kundan2707 is there a way to rerun the tests, or to run them locally?

@fbyrne (Contributor, Author) commented Apr 16, 2024

nvm, found the guide. https://minikube.sigs.k8s.io/docs/contrib/testing/

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment | Failed Tests | Flake Rate (%)
Docker_Linux_crio_arm64 | TestAddons/parallel/Headlamp (gopogh) | 0.00 (chart)
Hyper-V_Windows | TestPause/serial/VerifyStatus (gopogh) | 0.00 (chart)
Docker_Linux_docker_arm64 | TestFunctional/parallel/MountCmd/specific-port (gopogh) | 0.58 (chart)
none_Linux | TestBinaryMirror (gopogh) | 3.73 (chart)
none_Linux | TestDownloadOnly/v1.29.3/binaries (gopogh) | 3.73 (chart)
none_Linux | TestDownloadOnly/v1.29.3/json-events (gopogh) | 3.73 (chart)
Hyper-V_Windows | TestForceSystemdEnv (gopogh) | 13.33 (chart)
Hyperkit_macOS | TestCertOptions (gopogh) | 40.70 (chart)
Hyper-V_Windows | TestMultiControlPlane/serial/StopSecondaryNode (gopogh) | 45.24 (chart)
Hyperkit_macOS | TestStartStop/group/embed-certs/serial/Pause (gopogh) | 47.62 (chart)
Hyperkit_macOS | TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (gopogh) | 47.62 (chart)
Hyperkit_macOS | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (gopogh) | 48.24 (chart)
Hyperkit_macOS | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (gopogh) | 48.24 (chart)
Hyperkit_macOS | TestStartStop/group/embed-certs/serial/SecondStart (gopogh) | 48.84 (chart)

To see the flake rates of all tests by environment, click here.

@fbyrne (Contributor, Author) commented Apr 17, 2024

@kundan2707 @spowelljr Looks like the test failures are not related. Is there a way to rerun the failed tests?

@fbyrne (Contributor, Author) commented Apr 17, 2024

@kundan2707 @spowelljr Testing output, as requested.
Before:

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 delete --purge --all
🔥  Successfully deleted all profiles
💀  Successfully purged minikube directory located at - [/home/fergus/.minikube]
📌  Kicbase images have not been deleted. To delete images run:
    ▪ docker rmi gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p before-bugfix-17648 -n 3 start
😄  [before-bugfix-17648] minikube v1.33.0-beta.0 on Ubuntu 22.04
✨  Automatically selected the docker driver. Other choices: kvm2, qemu2, ssh
📌  Using Docker driver with root privileges
👍  Starting "before-bugfix-17648" primary control-plane node in "before-bugfix-17648" cluster
🚜  Pulling base image v0.0.43-1713236840-18649 ...
💾  Downloading Kubernetes v1.29.3 preload ...
    > preloaded-images-k8s-v18-v1...:  350.95 MiB / 350.95 MiB  100.00% 7.51 Mi
🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
🐳  Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting "before-bugfix-17648-m02" worker node in "before-bugfix-17648" cluster
🚜  Pulling base image v0.0.43-1713236840-18649 ...
🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.58.2
🐳  Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
    ▪ env NO_PROXY=192.168.58.2
🔎  Verifying Kubernetes components...

👍  Starting "before-bugfix-17648-m03" worker node in "before-bugfix-17648" cluster
🚜  Pulling base image v0.0.43-1713236840-18649 ...
🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.58.2,192.168.58.3
🐳  Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
    ▪ env NO_PROXY=192.168.58.2
    ▪ env NO_PROXY=192.168.58.2,192.168.58.3
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "before-bugfix-17648" cluster and "default" namespace by default

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p before-bugfix-17648 addons enable ingress
💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p before-bugfix-17648 addons enable ingress-dns
💡  ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
🌟  The 'ingress-dns' addon is enabled

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p before-bugfix-17648 kubectl -- describe pod kube-ingress-dns-minikube --namespace=kube-system
    > kubectl.sha256:  64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    > kubectl:  47.49 MiB / 47.49 MiB [-------------] 100.00% 8.28 MiB p/s 5.9s
Name:             kube-ingress-dns-minikube
Namespace:        kube-system
Priority:         0
Service Account:  minikube-ingress-dns
Node:             before-bugfix-17648-m02/192.168.58.3
Start Time:       Wed, 17 Apr 2024 22:44:17 +0100
Labels:           app=minikube-ingress-dns
                  app.kubernetes.io/part-of=kube-system
Annotations:      <none>
Status:           Running
IP:               192.168.58.3
IPs:
  IP:  192.168.58.3
Containers:
  minikube-ingress-dns:
    Container ID:   docker://25ace2f2565f759c49244bc0fb609d0b6df94a5d8a4facff26df4c0087bd3f96
    Image:          gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f
    Image ID:       docker-pullable://gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f
    Port:           53/UDP
    Host Port:      53/UDP
    State:          Running
      Started:      Wed, 17 Apr 2024 22:44:29 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      DNS_PORT:  53
      POD_IP:     (v1:status.podIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bs9b9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-bs9b9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  73s   default-scheduler  Successfully assigned kube-system/kube-ingress-dns-minikube to before-bugfix-17648-m02
  Normal  Pulling    73s   kubelet            Pulling image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f"
  Normal  Pulled     62s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f" in 11.299s (11.3s including waiting)
  Normal  Created    61s   kubelet            Created container minikube-ingress-dns
  Normal  Started    61s   kubelet            Started container minikube-ingress-dns

~/src/github/fbyrne/minikube$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p before-bugfix-17648 kubectl -- get po
NAME                               READY   STATUS    RESTARTS   AGE
hello-world-app-5d77478584-r8fcn   1/1     Running   0          21s

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p before-bugfix-17648 kubectl -- get ing --all-namespaces
NAMESPACE     NAME              CLASS   HOSTS                             ADDRESS        PORTS   AGE
kube-system   example-ingress   nginx   hello-john.test,hello-jane.test   192.168.58.2   80      5m21s

~/src/github/fbyrne/minikube$ nslookup hello-john.test $(out/minikube-linux-amd64 -p before-bugfix-17648 ip)
;; communications error to 192.168.58.2#53: connection refused
;; communications error to 192.168.58.2#53: connection refused
;; communications error to 192.168.58.2#53: connection refused
;; no servers could be reached


~/src/github/fbyrne/minikube$ nslookup hello-jane.test $(out/minikube-linux-amd64 -p before-bugfix-17648 ip)
;; communications error to 192.168.58.2#53: connection refused
;; communications error to 192.168.58.2#53: connection refused
;; communications error to 192.168.58.2#53: connection refused
;; no servers could be reached

After:

~/src/github/fbyrne/minikube$ git log --oneline -n 1
10199b51c (HEAD -> bugfix-17648, origin/bugfix-17648) 17648 Remove linux selector annotation
~/src/github/fbyrne/minikube$ make clean cross
rm -rf /home/fergus/src/github/fbyrne/minikube/out
rm -f pkg/minikube/assets/assets.go
rm -f pkg/minikube/translate/translations.go
rm -rf ./vendor
rm -rf /tmp/tmp.*.minikube_*
GOOS="linux" GOARCH="amd64"  \
go build -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.33.0-beta.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.33.0-1713236417-18649 -X k8s.io/minikube/pkg/version.gitCommitID="10199b51c2eedf1f5950c0370420b2864d527849" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -a -o out/minikube-linux-amd64 k8s.io/minikube/cmd/minikube
GOOS="darwin" GOARCH="amd64"  \
go build -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.33.0-beta.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.33.0-1713236417-18649 -X k8s.io/minikube/pkg/version.gitCommitID="10199b51c2eedf1f5950c0370420b2864d527849" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -a -o out/minikube-darwin-amd64 k8s.io/minikube/cmd/minikube
GOOS="windows" GOARCH="amd64"  \
go build -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.33.0-beta.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.33.0-1713236417-18649 -X k8s.io/minikube/pkg/version.gitCommitID="10199b51c2eedf1f5950c0370420b2864d527849" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -a -o out/minikube-windows-amd64 k8s.io/minikube/cmd/minikube
cp out/minikube-windows-amd64 out/minikube-windows-amd64.exe

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p bugfix-17648 -n 3 start
😄  [bugfix-17648] minikube v1.33.0-beta.0 on Ubuntu 22.04
✨  Automatically selected the docker driver. Other choices: kvm2, qemu2, ssh
📌  Using Docker driver with root privileges
👍  Starting "bugfix-17648" primary control-plane node in "bugfix-17648" cluster
🚜  Pulling base image v0.0.43-1713236840-18649 ...
💾  Downloading Kubernetes v1.29.3 preload ...
    > preloaded-images-k8s-v18-v1...:  350.95 MiB / 350.95 MiB  100.00% 8.82 Mi
🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
🐳  Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting "bugfix-17648-m02" worker node in "bugfix-17648" cluster
🚜  Pulling base image v0.0.43-1713236840-18649 ...
🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.58.2
🐳  Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
    ▪ env NO_PROXY=192.168.58.2
🔎  Verifying Kubernetes components...

👍  Starting "bugfix-17648-m03" worker node in "bugfix-17648" cluster
🚜  Pulling base image v0.0.43-1713236840-18649 ...
🔥  Creating docker container (CPUs=2, Memory=5266MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.58.2,192.168.58.3
🐳  Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
    ▪ env NO_PROXY=192.168.58.2
    ▪ env NO_PROXY=192.168.58.2,192.168.58.3
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "bugfix-17648" cluster and "default" namespace by default

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p bugfix-17648 addons enable ingress
💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p bugfix-17648 addons enable ingress-dns
💡  ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
🌟  The 'ingress-dns' addon is enabled

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p bugfix-17648 kubectl -- describe pod kube-ingress-dns-minikube --namespace=kube-system
Name:             kube-ingress-dns-minikube
Namespace:        kube-system
Priority:         0
Service Account:  minikube-ingress-dns
Node:             bugfix-17648/192.168.58.2
Start Time:       Wed, 17 Apr 2024 23:04:30 +0100
Labels:           app=minikube-ingress-dns
                  app.kubernetes.io/part-of=kube-system
Annotations:      <none>
Status:           Running
IP:               192.168.58.2
IPs:
  IP:  192.168.58.2
Containers:
  minikube-ingress-dns:
    Container ID:   docker://e37f13e134fb814acfe36a24f0d0b6396ee044e8a6e794d96459153999faaee0
    Image:          gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f
    Image ID:       docker-pullable://gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f
    Port:           53/UDP
    Host Port:      53/UDP
    State:          Running
      Started:      Wed, 17 Apr 2024 23:04:41 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      DNS_PORT:  53
      POD_IP:     (v1:status.podIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bgqqn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-bgqqn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              minikube.k8s.io/primary=true
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  36s   default-scheduler  Successfully assigned kube-system/kube-ingress-dns-minikube to bugfix-17648
  Normal  Pulling    36s   kubelet            Pulling image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f"
  Normal  Pulled     26s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f" in 10.015s (10.015s including waiting)
  Normal  Created    26s   kubelet            Created container minikube-ingress-dns
  Normal  Started    25s   kubelet            Started container minikube-ingress-dns

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p bugfix-17648 kubectl -- apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created

~/src/github/fbyrne/minikube$ out/minikube-linux-amd64 -p bugfix-17648 kubectl -- get ing --all-namespaces
NAMESPACE     NAME              CLASS   HOSTS                             ADDRESS   PORTS   AGE
kube-system   example-ingress   nginx   hello-john.test,hello-jane.test             80      16s

~/src/github/fbyrne/minikube$ nslookup hello-john.test $(out/minikube-linux-amd64 -p bugfix-17648 ip)
Server:		192.168.58.2
Address:	192.168.58.2#53

Non-authoritative answer:
Name:	hello-john.test
Address: 192.168.58.2
Name:	hello-john.test
Address: 192.168.58.2

~/src/github/fbyrne/minikube$ nslookup hello-jane.test $(out/minikube-linux-amd64 -p bugfix-17648 ip)
Server:		192.168.58.2
Address:	192.168.58.2#53

Non-authoritative answer:
Name:	hello-jane.test
Address: 192.168.58.2
Name:	hello-jane.test
Address: 192.168.58.2
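
(For anyone reproducing this, a shorter check than the full describe, sketched with the same profile name used above; kubectl's standard -o wide output includes a NODE column, which should show the primary node after this fix:)

    out/minikube-linux-amd64 -p bugfix-17648 kubectl -- get pod kube-ingress-dns-minikube -n kube-system -o wide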

@fbyrne (Contributor, Author) commented Apr 22, 2024

@kundan2707 @spowelljr @sharifelgamal can I get a review on this please?

Looks like the test failures are not related. Is there a way to rerun the failed tests?

@fbyrne (Contributor, Author) commented May 10, 2024

@kundan2707 @spowelljr @sharifelgamal bump on this review.

@fbyrne (Contributor, Author) commented May 13, 2024

@kundan2707 @spowelljr @sharifelgamal bump on this review.

@fbyrne (Contributor, Author) commented May 27, 2024

@medyagh @kundan2707 @spowelljr @sharifelgamal bump on this review.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 24, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: fbyrne
Once this PR has been reviewed and has the lgtm label, please assign prezha for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@spowelljr spowelljr removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 25, 2024
@spowelljr (Member)

/retest-this-please

@fbyrne (Contributor, Author) commented Oct 4, 2024

Rebased to latest upstream/master.

/retest-this-please

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 17649) |
+----------------+----------+---------------------+
| minikube start | 51.1s    | 49.8s               |
| enable ingress | 16.8s    | 16.5s               |
+----------------+----------+---------------------+

Times for minikube start: 50.3s 50.8s 53.9s 50.4s 50.1s
Times for minikube (PR 17649) start: 48.9s 48.8s 51.1s 51.5s 48.8s

Times for minikube ingress: 15.5s 15.5s 14.9s 19.0s 19.0s
Times for minikube (PR 17649) ingress: 15.0s 18.5s 19.0s 15.0s 15.0s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 17649) |
+----------------+----------+---------------------+
| minikube start | 22.1s    | 23.0s               |
| enable ingress | 12.7s    | 12.6s               |
+----------------+----------+---------------------+

Times for minikube start: 20.8s 21.7s 23.8s 20.9s 23.3s
Times for minikube (PR 17649) start: 23.3s 24.7s 21.7s 24.1s 21.1s

Times for minikube ingress: 12.8s 12.8s 12.8s 12.3s 12.8s
Times for minikube (PR 17649) ingress: 12.3s 12.3s 12.8s 12.3s 13.3s

docker driver with containerd runtime

+-------------------+----------+---------------------+
|      COMMAND      | MINIKUBE | MINIKUBE (PR 17649) |
+-------------------+----------+---------------------+
| minikube start    | 21.9s    | 22.0s               |
| ⚠️  enable ingress | 22.8s    | 33.1s ⚠️             |
+-------------------+----------+---------------------+

Times for minikube (PR 17649) ingress: 39.3s 24.3s 40.4s 38.8s 22.8s
Times for minikube ingress: 22.8s 22.8s 22.8s 22.8s 22.8s

Times for minikube start: 19.8s 20.9s 23.4s 22.5s 23.0s
Times for minikube (PR 17649) start: 22.5s 24.1s 22.7s 21.1s 19.8s

@minikube-pr-bot

Here are the top 10 failed tests with the lowest flake rate in each environment.

Environment | Test Name | Flake Rate
Docker_Windows (3 failed) | TestForceSystemdFlag (gopogh) | 0.00% (chart)
KVM_Linux (1 failed) | TestFunctional/parallel/MountCmd/specific-port (gopogh) | 0.00% (chart)

Besides these, the following environments also have failed tests:

To see the flake rates of all tests by environment, click here.

@fbyrne (Contributor, Author) commented Nov 6, 2024

@spowelljr ok to merge?

Labels
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
  • size/XS - Denotes a PR that changes 0-9 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Ingress-dns addon not working as expected for multinode clusters #17648
10 participants