
[v1] jettech/kube-webhook-certgen is not compatible with 1.22+ #7418

Closed
maybe-sybr opened this issue Aug 3, 2021 · 29 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now.


maybe-sybr commented Aug 3, 2021

NGINX Ingress controller version: v1.0.0-beta.1

Kubernetes version (use kubectl version): 1.22+ server (usernetes v20210708.0)

Environment: Bare metal usernetes

  • Cloud provider or hardware configuration: bare metal
  • OS (e.g. from /etc/os-release): Fedora 34
  • Kernel (e.g. uname -a): 5.13.6-200.fc34.x86_64
  • Install tools: usernetes, helm
  • Others:

What happened:

The 1.0.0-beta.1 chart and baremetal/deploy.yaml use jettech/kube-webhook-certgen:v1.5.1 as an admission hook to patch in certs. This image attempts to use the admissionregistration.k8s.io/v1beta1 API, which was removed in Kubernetes 1.22. The upstream repository has an outstanding issue (jet/kube-webhook-certgen#30) to move to v1 of this API, but it hasn't been worked on AFAICT.
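For reference, the fix in the certgen tool amounts to reading and patching the webhook configuration through the v1 API. A minimal sketch of the equivalent v1 object (the values here are illustrative, not the chart's exact manifest); note that v1 makes sideEffects and admissionReviewVersions mandatory where v1beta1 defaulted them:

```yaml
# Illustrative only -- the real object is rendered by the chart.
apiVersion: admissionregistration.k8s.io/v1   # was admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    # Required in v1; v1beta1 supplied defaults for these:
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail
    clientConfig:
      service:
        name: ingress-nginx-controller-admission
        namespace: ingress-nginx
        path: /networking/v1/ingresses
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
```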

This manifests in the following error when attempting to set up an ingress-nginx on a 1.22+ server with the default chart values or example manifest YAMLs:

$ kubectl logs -n ingress-nginx   pod/ingress-nginx-admission-patch--1-xpr9f
W0803 02:54:40.519953       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"level":"info","msg":"patching webhook configurations 'ingress-nginx-admission' mutating=false, validating=true, failurePolicy=Fail","source":"k8s/k8s.go:39","time":"2021-08-03T02:54:40Z"}
{"err":"the server could not find the requested resource","level":"fatal","msg":"failed getting validating webhook","source":"k8s/k8s.go:48","time":"2021-08-03T02:54:40Z"}
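The removal can be confirmed directly against the API server (assuming kubectl access to the cluster):

```shell
# List the admissionregistration API versions the server still serves.
# On a 1.22+ cluster only admissionregistration.k8s.io/v1 is offered;
# v1beta1 is gone, which is why the certgen job's GET fails with
# "the server could not find the requested resource".
kubectl api-versions | grep admissionregistration
```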

What you expected to happen:

How to reproduce it:

sh-5.1$ cat ingress-nginx.values.yaml
controller:
  image:
    tag: "v1.0.0-beta.1"
    digest: "sha256:f058f3fdc940095957695829745956c6acddcaef839907360965e27fd3348e2e"
sh-5.1$ helm install test-ingress ./charts/ingress-nginx/ --values ingress-nginx.values.yaml
^C  # Waited for a while here, it gets stuck
sh-5.1$ kubectl get all -A
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS      AGE
default       pod/test-ingress-ingress-nginx-admission-patch--1-nfp8h     0/1     Error     2 (22s ago)   23s
default       pod/test-ingress-ingress-nginx-controller-98f5696c9-m8k84   1/1     Running   0             23s
kube-system   pod/coredns-6cff99dc8c-bpv9g                                1/1     Running   0             28m
kube-system   pod/coredns-6cff99dc8c-nv7gj                                1/1     Running   0             28m

NAMESPACE     NAME                                                      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes                                        ClusterIP      10.0.0.1     <none>        443/TCP                      84s
default       service/test-ingress-ingress-nginx-controller             LoadBalancer   10.0.0.133   <pending>     80:32264/TCP,443:32554/TCP   23s
default       service/test-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.0.210   <none>        443/TCP                      23s
kube-system   service/kube-dns                                          ClusterIP      10.0.0.53    <none>        53/UDP,53/TCP,9153/TCP       6d2h

NAMESPACE     NAME                                                    READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/test-ingress-ingress-nginx-controller   1/1     1            1           23s
kube-system   deployment.apps/coredns                                 2/2     2            2           6d2h

NAMESPACE     NAME                                                              DESIRED   CURRENT   READY   AGE
default       replicaset.apps/test-ingress-ingress-nginx-controller-98f5696c9   1         1         1       23s
kube-system   replicaset.apps/coredns-6cff99dc8c                                2         2         2       6d2h

NAMESPACE   NAME                                                   COMPLETIONS   DURATION   AGE
default     job.batch/test-ingress-ingress-nginx-admission-patch   0/1           23s        23s
sh-5.1$ kubectl logs pod/test-ingress-ingress-nginx-admission-patch--1-nfp8h
W0803 03:16:06.529542       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"level":"info","msg":"patching webhook configurations 'test-ingress-ingress-nginx-admission' mutating=false, validating=true, failurePolicy=Fail","source":"k8s/k8s.go:39","time":"2021-08-03T03:16:06Z"}
{"err":"the server could not find the requested resource","level":"fatal","msg":"failed getting validating webhook","source":"k8s/k8s.go:48","time":"2021-08-03T03:16:06Z"}
  • Attempt reinstall with patch hook disabled
    It works if you do this.
$ helm uninstall test-ingress
$ kubectl delete -n default job --all
sh-5.1$ cat ingress-nginx.values.yaml
controller:
  image:
    tag: "v1.0.0-beta.1"
    digest: "sha256:f058f3fdc940095957695829745956c6acddcaef839907360965e27fd3348e2e"
  admissionWebhooks:
    patch:
      enabled: false
sh-5.1$ helm install test-ingress ./charts/ingress-nginx/ --values ingress-nginx.values.yaml
NAME: test-ingress
LAST DEPLOYED: Tue Aug  3 13:17:48 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w test-ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class:
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
sh-5.1$ kubectl get all -A
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS   AGE
default       pod/test-ingress-ingress-nginx-controller-98f5696c9-z8xnb   0/1     Running   0          5s
kube-system   pod/coredns-6cff99dc8c-bpv9g                                1/1     Running   0          29m
kube-system   pod/coredns-6cff99dc8c-nv7gj                                1/1     Running   0          29m

NAMESPACE     NAME                                                      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes                                        ClusterIP      10.0.0.1     <none>        443/TCP                      3m5s
default       service/test-ingress-ingress-nginx-controller             LoadBalancer   10.0.0.103   <pending>     80:31922/TCP,443:31545/TCP   5s
default       service/test-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.0.10    <none>        443/TCP                      5s
kube-system   service/kube-dns                                          ClusterIP      10.0.0.53    <none>        53/UDP,53/TCP,9153/TCP       6d2h

NAMESPACE     NAME                                                    READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/test-ingress-ingress-nginx-controller   0/1     1            0           5s
kube-system   deployment.apps/coredns                                 2/2     2            2           6d2h

NAMESPACE     NAME                                                              DESIRED   CURRENT   READY   AGE
default       replicaset.apps/test-ingress-ingress-nginx-controller-98f5696c9   1         1         0       5s
kube-system   replicaset.apps/coredns-6cff99dc8c                                2         2         2       6d2h

Anything else we need to know:

/kind bug

@maybe-sybr maybe-sybr added the kind/bug Categorizes issue or PR as related to a bug. label Aug 3, 2021
@k8s-ci-robot (Contributor)

@maybe-sybr: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Aug 3, 2021
kingdonb pushed a commit to kingdonb/old-fleet-infra that referenced this issue Aug 3, 2021

kingdonb commented Aug 3, 2021

I encountered this issue through the 4.0.0 chart (beta) that was just released, I think I was able to resolve it by disabling the admissionWebhooks.patch.enabled chart field value as you suggested 👍

The Kubernetes cluster is 1.22.0-rc.0

@maybe-sybr (Author)

I actually hit some further issues late yesterday with the admissionWebhooks.patch.enabled: false value applied. I have to sanitise this output slightly since it's an internal deployment, the main difference being that it is an ingress backed by an actual service and using a rewrite rule. Hopefully I haven't missed any other subtleties.

Applying a templatised YAML for one of my charts:

$ kubectl apply -f my-app.yaml
service/my-app created
deployment.apps/my-app created
pod/my-app-test-connection created
Error from server (InternalError): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"kubernetes.io/ingress.class\":\"nginx\",\"nginx.ingress.kubernetes.io/rewrite-target\":\"/$2\",\"nginx.ingress.kubernetes.io/x-forwarded-prefix\":\"/my-app\"},\"labels\":{\"app.kubernetes.io/instance\":\"my-app\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"my-app\",\"app.kubernetes.io/version\":\"1.6.0\",\"helm.sh/chart\":\"my-app-1.2.0\"},\"name\":\"my-app\",\"namespace\":\"default\"},\"spec\":{\"rules\":[{\"host\":\"frontend.app.lan\",\"http\":{\"paths\":[{\"backend\":{\"service\":{\"name\":\"my-app\",\"port\":{\"number\":80}}},\"path\":\"/my-app(/|$)(.*)\",\"pathType\":\"Prefix\"}]}}]}}\n"}},"spec":{"rules":[{"host":"frontend.app.lan","http":{"paths":[{"backend":{"service":{"name":"my-app","port":{"number":80}}},"path":"/my-app(/|$)(.*)","pathType":"Prefix"}]}}]}}
to:
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind:
"networking.k8s.io/v1, Kind=Ingress"
Name: "my-app", Namespace: "default"
for: "my-app.yaml": Internal error occurred: failed calling webhook
"validate.nginx.ingress.kubernetes.io": Post
"https://ingress-nginx-controller-admission.kube-system.svc:443/networking/v1/ingresses?timeout=30s":
x509: certificate signed by unknown authority

That kind of makes sense I guess, since it's likely that without the patching admission webhook, there might be some stubbed TLS certificate rather than the one which should have been minted. I may have to run through this stuff again to see if I've made some mistake.
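One way to test that theory (illustrative commands; the object and secret names here follow the default chart release name and namespace from the error above, and the secret's key names may differ in your install) is to compare the CA bundle registered in the webhook configuration against the CA held in the admission secret:

```shell
# CA the API server will trust when calling the validating webhook
kubectl get validatingwebhookconfiguration ingress-nginx-admission \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' \
  | base64 -d | openssl x509 -noout -subject -enddate

# CA stored in the admission secret that the certgen patch job maintains
kubectl get secret ingress-nginx-admission -n kube-system \
  -o jsonpath='{.data.ca}' | base64 -d | openssl x509 -noout -subject -enddate
```

If the two certificates differ, the API server rejects the webhook's serving certificate with exactly this "x509: certificate signed by unknown authority" error.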


kingdonb commented Aug 4, 2021

That's what I'm seeing as well. At first glance it looked like disabling the patch hook solved the problem, but it only stopped the ingress controller from crashing. I got the same x509 errors you saw after that, you're not doing anything wrong (or we both are!)


fracarvic commented Aug 5, 2021

In my case the ValidatingWebhookConfiguration stayed installed from a previous helm installation. I deleted mine with

 kubectl delete ValidatingWebhookConfiguration ingress-nginx-admission

and installed the 4.0.0 helm chart with controller.admissionWebhooks.enabled: false; now I can create new ingress resources without errors.


kingdonb commented Aug 6, 2021

I can confirm, the 1.0.0-beta.0 image works for me with the 4.0.0 chart, as long as I disable admission webhooks globally.

Disabling the patch hook by itself was not enough. I think the other hook must depend on the certificates that are applied through the patch hook. I'm sure this will have to be resolved somehow before the 1.0.0 final can be released.

But I am currently using Kubernetes 1.22.0, Cert-manager 1.5.0-beta.0, and ingress-nginx together (and it is glorious 🎉) thanks for the tip @fracarvic


rikatz commented Aug 8, 2021

/priority critical-urgent
/assign
Cc @tao12345666333 @strongjz: this should also be fixed before the v1 release.

@k8s-ci-robot k8s-ci-robot added priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. and removed needs-priority labels Aug 8, 2021
@tao12345666333 (Member)

/cc

before the v1 release this also should be fixed

yep. we need to discuss a solution.

  • Submit a PR to the project, or fork it

  • Use another solution


rikatz commented Aug 9, 2021

I've reached out to @vsliouniaev on Slack before thinking about a fork or other solutions.

I'm ready to submit a PR, just checking if it works as expected here :)


rikatz commented Aug 9, 2021

Folks, can you please test the webhook with image:

rpkatz/kube-webhook-certgen:v1.5.2

And check if the problem persists? If this is solved, I'm going to submit a PR to the original project


rikatz commented Aug 9, 2021

fyi patch works:

helm install --set controller.admissionWebhooks.patch.image.image=rpkatz/kube-webhook-certgen --set controller.admissionWebhooks.patch.image.tag=v1.5.2 --devel ingress-nginx ingress-nginx/ingress-nginx

Gonna open the PR here

@maybe-sybr (Author)

That helm install command appears to work fine for me, @rikatz. Output from the controller pod serving from a simple static landing page service for your reference:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"archive", BuildDate:"2021-03-30T00:00:00Z", GoVersion:"go1.16", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.0-beta.0", GitCommit:"a3f24e8459465495738af1b9cc6c3db80696e3c1", GitTreeState:"clean", BuildDate:"2021-06-22T21:00:26Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl logs -fn default           pod/ingress-nginx-controller-5977fdd7bd-hz6mn
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.0.0-beta.1
  Build:         da790570bd8d07d4980b175719f16c194301950d
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.20.1

-------------------------------------------------------------------------------

W0809 23:31:11.901950       2 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0809 23:31:11.904762       2 main.go:221] "Creating API client" host="https://10.0.0.1:443"
I0809 23:31:11.918946       2 main.go:265] "Running in Kubernetes cluster" major="1" minor="22+" git="v1.22.0-beta.0" state="clean" commit="a3f24e8459465495738af1b9cc6c3db80696e3c1" platform="linux/amd64"
I0809 23:31:12.163030       2 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0809 23:31:12.188577       2 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0809 23:31:12.209661       2 nginx.go:254] "Starting NGINX Ingress controller"
I0809 23:31:12.215795       2 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"ingress-nginx-controller", UID:"f10c0b6c-edd8-43ea-b421-29ccbb6fd27e", APIVersion:"v1", ResourceVersion:"90049", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/ingress-nginx-controller
I0809 23:31:13.312735       2 store.go:365] "Found valid IngressClass" ingress="myapp-dev/landing" ingressclass="nginx"
I0809 23:31:13.312842       2 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"myapp-dev", Name:"landing", UID:"99772a49-e575-4f7c-856a-aeb83b246533", APIVersion:"networking.k8s.io/v1", ResourceVersion:"89912", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0809 23:31:13.410102       2 leaderelection.go:243] attempting to acquire leader lease default/ingress-controller-leader...
I0809 23:31:13.410101       2 nginx.go:296] "Starting NGINX process"
I0809 23:31:13.411195       2 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0809 23:31:13.412183       2 controller.go:150] "Configuration changes detected, backend reload required"
I0809 23:31:13.414002       2 leaderelection.go:253] successfully acquired lease default/ingress-controller-leader
I0809 23:31:13.414033       2 status.go:84] "New leader elected" identity="ingress-nginx-controller-5977fdd7bd-hz6mn"
I0809 23:31:13.417211       2 status.go:284] "updating Ingress status" namespace="myapp-dev" ingress="landing" currentValue=[] newValue=[{IP:10.4.2.0 Hostname: Ports:[]}]
I0809 23:31:13.420441       2 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"myapp-dev", Name:"landing", UID:"99772a49-e575-4f7c-856a-aeb83b246533", APIVersion:"networking.k8s.io/v1", ResourceVersion:"90137", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0809 23:31:13.512673       2 controller.go:167] "Backend successfully reloaded"
I0809 23:31:13.512762       2 controller.go:178] "Initial sync, sleeping for 1 second"
I0809 23:31:13.512809       2 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"ingress-nginx-controller-5977fdd7bd-hz6mn", UID:"9f8ac8f1-3859-4360-8946-b984a804bfa5", APIVersion:"v1", ResourceVersion:"90086", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
10.88.0.1 - - [09/Aug/2021:23:34:26 +0000] "GET / HTTP/1.1" 200 509 "-" "curl/7.76.1" 89 0.001 [myapp-dev-landing-80] [] 10.88.1.88:80 509 0.000 200 7e352456ab148d039e774f4c059ec2dc

Thanks for chasing this up!


rikatz commented Aug 10, 2021

Great news. Thank you all for sticking with us :)

I'm expecting some answer on this by Tuesday this week (aka tomorrow in my timezone) and will discuss the best approach with the other maintainers so this won't be a showstopper.

Will keep this issue open right now, as there’s no official solution yet

@fracarvic

Tested helm chart 4.0.0-beta.1 with webhooks enabled

  admissionWebhooks:
    patch:
      image:
        image: rpkatz/kube-webhook-certgen
        tag: v1.5.2

and it works perfectly; I can create new ingress resources without problems.

Thanks.

@kingdonb

Just tested helm chart 4.0.0-beta.2 with all of my customizations removed, with the included image and with re-enabled admission webhooks. It's working for me. 👍

@longwuyuan (Contributor)

hello @maybe-sybr @kingdonb @fracarvic ,

We have released a new beta

% date ; helm search repo ingress --devel
Fri Aug 13 01:22:55 IST 2021
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                                                                                                                
ingress-nginx/ingress-nginx     4.0.0-beta.2    1.0.0-beta.2    Ingress controller for Kubernetes using NGINX a...

This contains a new image for the certgen k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068

We are hoping you can test this beta.2 and provide feedback.
Thanks,
Long

@fracarvic

Tested helm chart 4.0.0-beta.2 with default admission configuration from chart and all is working well.

@longwuyuan (Contributor)

Thanks for updating @kingdonb @fracarvic


maybe-sybr commented Aug 13, 2021

@longwuyuan - the new chart also works for me with all of my changes reverted and replaced with a simple --version 4.0.0-beta.2 option. Thanks!

Edit: Turns out I tested on a 1.21+ cluster since I had to walk it back for other reasons. In any case, at least the fact that the new chart and images worked suggests that they're using the ingress/v1 API happily now, and that the new certgen image was pulled as expected.

@longwuyuan (Contributor)

Thank you for updating @maybe-sybr

@longwuyuan (Contributor)

/close

@k8s-ci-robot (Contributor)

@longwuyuan: Closing this issue.

In response to this:

/close



briantopping commented Aug 13, 2021

Happy to report success on k8s v1.21.0 as well. Thanks to the team for your hard work getting this quickly resolved!

@longwuyuan (Contributor)

@briantopping thank you

@freshteapot

Out of curiosity...

What is the command you are using to install the "specific version"?

helm search repo ingress-nginx                                       
NAME                       	CHART VERSION	APP VERSION	DESCRIPTION                                       
ingress-nginx/ingress-nginx	3.35.0       	0.48.1     	Ingress controller for Kubernetes using NGINX a...
helm search repo ingress-nginx --devel 
NAME                       	CHART VERSION	APP VERSION 	DESCRIPTION                                       
ingress-nginx/ingress-nginx	4.0.0-beta.3 	1.0.0-beta.3	Ingress controller for Kubernetes using NGINX a...

I am trying

helm fetch --untar ingress-nginx/ingress-nginx --version 4.0.0-beta.3

But the Changelog is still 3.34.

I do note continuing issues around "--version" in helm/helm#8739,
but I can't ignore people in this thread saying it worked.

Even when I do fetch without untar, the tarball references the old Changelog and, more importantly, the tag is 1.5.1, not 1.5.2.

I suspect I am missing something very obvious :).


longwuyuan commented Aug 14, 2021 via email

@maybe-sybr (Author)

@freshteapot

But the Changelog is still 3.34.

The chart's changelog doesn't appear to have been updated yet. Check the Chart.yaml instead. I use helm pull ingress-nginx/ingress-nginx --version <exact_version> and it pulls the one I asked for.

@longwuyuan (Contributor)

Can you check the release we made 2 days ago and update ;
% helm search repo ingress --devel
NAME CHART VERSION APP VERSION DESCRIPTION
ingress-nginx/ingress-nginx 4.0.0-beta.3 1.0.0-beta.3 Ingress controller for Kubernetes using NGINX a...
[~]
%

@freshteapot

@maybe-sybr thank you. This worked; a nice alternative to "fetch --untar".
Now I have it downloaded and have verified the Chart.yaml (another fine tip!).
I will try again to understand what my next user error is :).

The error

5s Warning FailedMount pod/frontdoor-ingress-nginx-controller-5b59c9bb8c-mqp6r MountVolume.SetUp failed for volume "webhook-cert" : secret "frontdoor-ingress-nginx-admission" not found

How I got here

rm -rf output/ingress-nginx/
helm template frontdoor ingress-nginx/ingress-nginx  -f custom/ingress-nginx.yaml --output-dir ./output
kubectl apply -f output/ingress-nginx/templates

More output

112s        Normal    Started                   pod/svclb-frontdoor-ingress-nginx-controller-xxfnp         Started container lb-port-443
112s        Normal    Started                   pod/svclb-frontdoor-ingress-nginx-controller-xtr9x         Started container lb-port-443
101s        Warning   FailedMount               pod/frontdoor-ingress-nginx-controller-5b59c9bb8c-pc4mr    Unable to attach or mount volumes: unmounted volumes=[webhook-cert kube-api-access-bm55g], unattached volumes=[webhook-cert kube-api-access-bm55g]: timed out waiting for the condition
49s         Warning   FailedMount               pod/frontdoor-ingress-nginx-controller-5b59c9bb8c-stwzq    MountVolume.SetUp failed for volume "webhook-cert" : secret "frontdoor-ingress-nginx-admission" not found

values file

controller:
  config:
    log-format-upstream:
      '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
      "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user":
      "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":$status,
      "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query":
      "$args", "request_length": $request_length, "duration": $request_time,"method":
      "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent"
      }'

The issue still seems to be linked to:

  • MountVolume.SetUp failed for volume "webhook-cert"
  • secret "frontdoor-ingress-nginx-admission" not found

It works when I disable the admissionWebhooks.

  admissionWebhooks:
    enabled: false

It does not work if I use:

  admissionWebhooks:
    patch:
      image:
        image: rpkatz/kube-webhook-certgen
        tag: v1.5.2

I need to understand why I would want admissionWebhooks enabled. But I am happy to get it working with it globally off.

Not sure if it helps, but if you wondered, I am running k3s cluster.

kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:58:09Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3+k3s1", GitCommit:"1d1f220fbee9cdeb5416b76b707dde8c231121f2", GitTreeState:"clean", BuildDate:"2021-07-22T20:52:14Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
