
Add addon for external DNS #8980

Open
segevfiner opened this issue Aug 12, 2020 · 22 comments
Labels
area/addons, help wanted, kind/feature, kind/support, lifecycle/frozen, long-term-support

Comments

@segevfiner

segevfiner commented Aug 12, 2020

Add an addon for installing external DNS. See https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md which uses CoreDNS as the DNS server.

Unlike ingress-dns, which is minikube-specific:

  1. external-dns is sometimes used in production, so it will give behavior in development similar to production.
  2. It will support LoadBalancer services.

Some notes about configuring the host, which also apply to ingress-dns:

  1. For Linux distros using systemd-resolved, you can use systemd.network units to configure per-domain DNS servers, by setting the Domains and DNS keys in a new network unit matching the required interface (a sketch follows this list). I used it before, but don't remember the details.
  2. For Windows, there is NRPT (the Name Resolution Policy Table), which should allow setting this.
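
A minimal sketch of such a unit, assuming the relevant interface is named eth0 and the minikube node answers DNS on 192.168.49.2 (both values are assumptions, adjust to your setup):

  # /etc/systemd/network/50-minikube-dns.network (hypothetical path)
  [Match]
  Name=eth0

  [Network]
  # The ~ prefix makes .test a routing-only domain: only queries under it go to this server
  DNS=192.168.49.2
  Domains=~test

On Windows, the NRPT equivalent should be a rule added from an elevated PowerShell, e.g. Add-DnsClientNrptRule -Namespace ".test" -NameServers "192.168.49.2".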
@priyawadhwa added the kind/feature label Aug 12, 2020
@priyawadhwa

Hey @segevfiner, by default minikube has CoreDNS running:

$ kubectl get po -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-6wtb9           1/1     Running   0          34s

is this what you're looking for?

@priyawadhwa added the triage/needs-information and kind/support labels and removed the kind/feature label Aug 12, 2020
@segevfiner
Author

segevfiner commented Aug 12, 2020

That one is the cluster-internal DNS, handling DNS for services and pods inside the cluster.

The external DNS setup referenced is for setting up another CoreDNS server for out-of-cluster DNS, emulating what you get with external-dns in production Kubernetes deployments, where it configures DNS providers like AWS Route 53. It is similar to what the ingress-dns addon achieves, but with the benefits mentioned in the description.

I hope this clarifies what this is about.

@priyawadhwa

@segevfiner thanks for clarifying! Would you be interested in contributing this addon?

If so, documentation can be found here: https://minikube.sigs.k8s.io/docs/contrib/addons/

@priyawadhwa added the area/addons, help wanted, and kind/feature labels Aug 13, 2020
@segevfiner
Author

segevfiner commented Aug 13, 2020

Looking into this further, there are some things to iron out first:

First, the setup in that tutorial is not the cleanest, so I tried to make a simpler one here using just Helm to get started:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade -i etcd-operator stable/etcd-operator --set customResources.createEtcdClusterCRD=true
helm upgrade -i coredns stable/coredns -f coredns-values.yaml
helm upgrade -i external-dns bitnami/external-dns --set provider=coredns --set coredns.etcdEndpoints=http://etcd-cluster-client:2379
coredns-values.yaml:
isClusterService: false

serviceType: NodePort

servers:
- zones:
  - zone: .
  port: 53
  plugins:
  - name: errors
  # Serves a /health endpoint on :8080, required for livenessProbe
  - name: health
    configBlock: |-
      lameduck 5s
  # Serves a /ready endpoint on :8181, required for readinessProbe
  - name: ready
  # Required to query kubernetes API for data
  - name: kubernetes
    parameters: cluster.local in-addr.arpa ip6.arpa
    configBlock: |-
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
      ttl 30
  # Serves a /metrics endpoint on :9153, required for serviceMonitor
  - name: prometheus
    parameters: 0.0.0.0:9153
  - name: forward
    parameters: . /etc/resolv.conf
  - name: cache
    parameters: 30
  - name: loop
  - name: reload
  - name: loadbalance
  - name: etcd
    parameters: test
    configBlock: |-
      stubzones
      path /skydns
      endpoint http://etcd-cluster-client:2379

(Yes, I had to copy that entire large block from the chart's values.yaml because it's an array 🤷‍♂️)

And after it is all up and running, I can create a LoadBalancer service and annotate it:

kubectl create deployment nginx --image nginx
kubectl expose deployment nginx --port 80 --type LoadBalancer
kubectl annotate service nginx "external-dns.alpha.kubernetes.io/hostname=nginx.test"

Start minikube tunnel so the LoadBalancer service gets an IP and is accessible.
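
In a separate terminal, since the command keeps running in the foreground:

  minikube tunnel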

Get the CoreDNS IP & port:

  export COREDNS_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services coredns-coredns)
  export COREDNS_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
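
With minikube specifically, the node IP can also be obtained directly:

  export COREDNS_IP=$(minikube ip)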

And query it:

dig @$COREDNS_IP -p $COREDNS_PORT nginx.test
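
If the query returns nothing, it is worth checking whether external-dns wrote the record to etcd at all. A sketch using a throwaway pod (the bitnami/etcd image is just an assumption; any image shipping etcdctl works):

  # Dump everything stored under the /skydns prefix from the Corefile above
  kubectl run etcdctl --rm -it --restart=Never --image=bitnami/etcd --command -- \
    etcdctl --endpoints http://etcd-cluster-client:2379 get --prefix /skydns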

Problems:

  1. This is a complex Helm deployment that needs to somehow be untangled and simplified to fit in a minikube addon.
  2. It uses the archived/unmaintained etcd-operator to set up the etcd cluster. Not only is it archived/unmaintained, it might also be too big a thing to install in an addon just for this. So it should probably use a standalone single-node etcd, like minikube itself gets from kubeadm.
  3. CoreDNS is exposed using a NodePort, which doesn't use the standard DNS port, meaning it can't be configured on most hosts. We can't use a LoadBalancer either, because CoreDNS needs both UDP and TCP. Maybe there is a workaround to get LoadBalancer to work, or maybe we should make it available on the right port by different means, e.g. hostNetwork like ingress-dns uses, hostPort, or some other proxy shenanigans.
  4. We might want to make the domain configurable; that shouldn't really be a problem.

Host configuration afterwards will be similar to ingress-dns, where the instructions can probably be expanded with information for more operating systems, or possibly automated at some point (being careful not to destroy the host...).
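
For example, on macOS a per-domain resolver file can even point at a non-standard port, which would sidestep the NodePort problem above. A sketch, assuming the .test domain (the address and port are placeholders for $COREDNS_IP and $COREDNS_PORT):

  # /etc/resolver/test (create as root; macOS only)
  nameserver 192.168.49.2
  port 30053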

@darkn3rd

I am using minikube w/ KVM. I noticed that dnsmasq was set up for minikube, but saw no entries. It would be nice to have external-dns use dnsmasq.

@segevfiner
Author

That dnsmasq is likely listening on a localhost address as a caching, forwarding resolver for the VM itself, rather than a DNS server intended to be queried externally (it serves the same purpose as systemd-resolved on newer distros). On most distros that ship dnsmasq like this, installing dnsmasq yourself would often install a second copy that listens on the interfaces directly and is meant to be configured for external queries, separate from the one listening on localhost.

Also note that such a caching resolver on Linux often doesn't listen on 127.0.0.1 but rather on some other localhost address, such as 127.0.1.1 on Ubuntu, so as to allow another DNS server to bind to 127.0.0.1.

Besides that, the upstream external-dns project has no support for using dnsmasq as the DNS server.
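
To check which resolver is bound to which address on a given Linux host, something like this works (iproute2's ss):

  # List processes listening on UDP port 53
  sudo ss -ulpn 'sport = :53'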

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 8, 2020
@segevfiner
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Dec 8, 2020
@medyagh
Member

medyagh commented Feb 10, 2021

@segevfiner are you still interested in making this happen? I would be happy to accept any PR that adds this feature.

@lorenzleutgeb

I came here after setting up ingress-dns and wanting to change the domain (from *.test. to *.mycorp.example.com). I realized that minikube-ingress-dns, which is used by the ingress-dns addon, is a very small and rather inactive Node.js project. CoreDNS appears to be a much more stable and better-backed candidate. Please make this happen!

@spowelljr added the long-term-support label and removed the triage/long-term-support label May 19, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 16, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close


@denniseffing

denniseffing commented Jan 12, 2022

I will try to come up with a solution. I'm working on a PR for an external dns addon, not sure if I want to use CoreDNS though. Maybe bind in combination with RFC2136 is a better, more basic alternative that doesn't rely on another etcd.

@medyagh Would you mind reopening this issue?
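
For reference, the BIND side of such an RFC 2136 setup would look roughly like this; a sketch, with the zone name, key name, and file path all assumptions:

  // named.conf fragment: a zone accepting dynamic updates via TSIG
  key "externaldns" {
      algorithm hmac-sha256;
      secret "<base64 TSIG secret>";
  };
  zone "test" {
      type master;
      file "/var/lib/bind/db.test";
      allow-update { key "externaldns"; };
  };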

@k8s-ci-robot
Contributor

@denniseffing: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I will try to come up with a solution. I'm working on a PR for an external dns addon, not sure if I want to use CoreDNS though. Maybe bind in combination with RFC2136 is a better, more basic alternative that doesn't rely on another etcd.

@medyagh Would you mind reopening this issue?

/reopen


@denniseffing

/assign

@segevfiner
Author

I will try to come up with a solution. I'm working on a PR for an external dns addon, not sure if I want to use CoreDNS though. Maybe bind in combination with RFC2136 is a better, more basic alternative that doesn't rely on another etcd.

@medyagh Would you mind reopening this issue?

CoreDNS via etcd is just what was/is currently available in the official external-dns for such a setup. Yeah, it's a bit clumsy. My PR, besides the various documentation fixes I listed, and it likely having become stale by now, is mostly complete, except that I couldn't figure out how to get the host IP: for DNS I have to bind explicitly to the external interface, since many hosts have dnsmasq bound to localhost:53.

If there is a lighter-weight alternative to CoreDNS via etcd available now, then that is likely a better option. Technically, with a proper plugin, CoreDNS could have implemented external-dns by itself (without actually needing external-dns), there just isn't a plugin that implements external-dns currently (what exists implements different behaviours than external-dns, unless things have changed since I last checked).
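
On the host IP question, one option is to ask the kernel which source address it would pick for outbound traffic; a sketch (the 1.1.1.1 probe address is arbitrary, and GNU grep is assumed):

  # Print the source IP the host would use to reach the internet
  ip route get 1.1.1.1 | grep -oP 'src \K\S+'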

@segevfiner
Author

/reopen

@k8s-ci-robot
Contributor

@segevfiner: Reopened this issue.

In response to this:

/reopen


@k8s-ci-robot k8s-ci-robot reopened this Jan 12, 2022
@denniseffing

external-dns supports RFC 2136 as well, and a quick proof of concept on my machine worked flawlessly using hostNetwork: true. Seems promising!
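
For anyone wanting to reproduce this, the external-dns side would be driven by its rfc2136 provider flags, roughly like so (the host, zone, and key values are assumptions that must match the DNS server's config, e.g. the hypothetical BIND fragment above):

  # external-dns container args for the rfc2136 provider
  --provider=rfc2136
  --rfc2136-host=192.168.49.1
  --rfc2136-port=53
  --rfc2136-zone=test
  --rfc2136-tsig-keyname=externaldns
  --rfc2136-tsig-secret=<base64 TSIG secret>
  --rfc2136-tsig-secret-alg=hmac-sha256
  --source=service
  --domain-filter=test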

@spowelljr added the lifecycle/frozen label and removed the lifecycle/rotten label Jan 26, 2022
@gbaso

gbaso commented Sep 10, 2024

Any progress on this issue?
