Ingress-dns addon not working as expected for multinode clusters #17648

Open · May be fixed by #17649

fbyrne opened this issue Nov 19, 2023 · 8 comments
Labels: addon/ingress, co/multinode

Comments

@fbyrne
Contributor

fbyrne commented Nov 19, 2023

What Happened?

When running in multi-node cluster mode, the ingress-dns pod is not necessarily scheduled on the primary node, which leads to the errors below when following the setup in the Get Started guide.

Steps to reproduce:

  1. minikube start -n 3 --addons=ingress,ingress-dns
  2. Follow the steps in the ingress-dns guide
  3. Do a DNS lookup on the running service: nslookup hello-jane.test $(minikube ip)

Expected: the nslookup succeeds
Actual: the DNS lookup fails with "communication error: connection refused".
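
For reference, here is a quick way to confirm where the addon pod landed. This is a sketch assuming a stock kubectl/minikube setup; the exact pod name may differ between minikube versions, so it matches on "ingress-dns".

# reproduce on a fresh 3-node cluster
minikube start -n 3 --addons=ingress,ingress-dns

# see which node the ingress-dns pod was scheduled on; "minikube ip"
# reports the primary node, and the pod uses hostNetwork, so the lookup
# only works when the pod runs on that node
kubectl -n kube-system get pods -o wide | grep ingress-dns
minikube ip

# DNS lookup from the ingress-dns guide
nslookup hello-jane.test "$(minikube ip)"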

Attach the log file

N/A

Operating System

Ubuntu

Driver

Docker

@eugenwintersberger

I can confirm that behavior. A quick fix was to modify the ingress-dns template (see the last two lines of the following snippet):

spec:
  serviceAccountName: minikube-ingress-dns
  hostNetwork: true
  containers:
    - name: minikube-ingress-dns
      image: {{.CustomRegistries.IngressDNS | default .ImageRepository | default .Registries.IngressDNS }}{{.Images.IngressDNS}}
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 53
          protocol: UDP
      env:
        - name: DNS_PORT
          value: "53"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
  # pin the pod to the node minikube labels as primary, so its hostNetwork
  # DNS listener is reachable at the address reported by "minikube ip"
  nodeSelector:
    minikube.k8s.io/primary: "true"

With this modification the ingress-dns addon works on a multi-node cluster too. I am not certain whether this is a good solution that should become a PR. Any thoughts on this?
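
If it helps, here is a quick way to check that the label the nodeSelector relies on actually exists on the primary node, and that the pod ends up there after applying the modified template. This is a sketch assuming stock kubectl; output columns may vary by version.

# list nodes together with the value of the label used by the nodeSelector
kubectl get nodes -L minikube.k8s.io/primary

# after applying the modified template, the pod should land on the primary node
kubectl -n kube-system get pods -o wide | grep ingress-dns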

@fbyrne
Contributor Author

fbyrne commented Nov 27, 2023

I've submitted #17649 as a fix

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Apr 25, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the triage bot's comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@spowelljr reopened this on Sep 25, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Oct 26, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the triage bot's comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@spowelljr reopened this on Oct 26, 2024
@spowelljr removed the lifecycle/rotten label on Oct 26, 2024
@spowelljr added the co/multinode and addon/ingress labels on Oct 26, 2024