
KIC+Knative Serving. KService is getting stuck in Unknown state with IngressNotConfigured reason #2543

Open
Gaspero opened this issue Jun 6, 2022 · 9 comments
Labels
area/knative bug Something isn't working help wanted Extra attention is needed priority/low

Comments

@Gaspero

Gaspero commented Jun 6, 2022

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I'm following the steps from the Kong guide: https://docs.konghq.com/kubernetes-ingress-controller/2.3.x/guides/using-kong-with-knative/

The KService's first revision seems OK, but after updating the KService, its N-th revision gets stuck in an Unknown state with an IngressNotConfigured reason.

The changes themselves are applied to the Service, though: e.g. the hello world app's TARGET env variable does change.
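For anyone reproducing this, a possible first diagnostic step (not part of my original commands; resource and deployment names below are assumed from the default Knative and Kong manifests used later in this report) is to inspect the Knative `Ingress` resource that Serving creates for the Route, and the ingress controller's logs:

```shell
# Hypothetical diagnostics, assuming the default namespaces/names from the
# manifests in this report. The Knative Ingress conditions should state why
# the Route reports IngressNotConfigured.
kubectl get ingresses.networking.internal.knative.dev helloworld-go -n default
kubectl describe ingresses.networking.internal.knative.dev helloworld-go -n default

# Check whether KIC observed and translated the updated Ingress.
kubectl logs -n kong deployment/ingress-kong -c ingress-controller | grep -i knative
```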

Expected Behavior

When the KService is updated and a new Revision is created, the KService should reach the Ready state once the changes have been applied successfully.

Steps To Reproduce

Environment:
- MacOS Monterey 12.1 (21C52) Intel
- K8S v1.23.1 in minikube local cluster
- Knative Serving v1.5/1.4/1.1
- Kong v2.8 + Kong Ingress Controller v2.3

===

Detailed command i/o:

minikube start --driver=hyperkit --cpus=4 --memory=4g

# 😄  minikube v1.25.1 on Darwin 12.1
# ✨  Using the hyperkit driver based on user configuration
# 👍  Starting control plane node minikube in cluster minikube
# 🔥  Creating hyperkit VM (CPUs=4, Memory=4096MB, Disk=20000MB) ...
# 🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
#     ▪ kubelet.housekeeping-interval=5m
#     ▪ Generating certificates and keys ...
#     ▪ Booting up control plane ...
#     ▪ Configuring RBAC rules ...
# 🔎  Verifying Kubernetes components...
#     ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
# 🌟  Enabled addons: storage-provisioner, default-storageclass
# 🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


kubectl apply --filename https://github.com/knative/serving/releases/download/knative-v1.4.0/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/knative-v1.4.0/serving-core.yaml

# customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
# customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
# namespace/knative-serving created
# clusterrole.rbac.authorization.k8s.io/knative-serving-aggregated-addressable-resolver created
# clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created
# clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created
# clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created
# clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created
# clusterrole.rbac.authorization.k8s.io/knative-serving-core created
# clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created
# serviceaccount/controller created
# clusterrole.rbac.authorization.k8s.io/knative-serving-admin created
# clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created
# clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-addressable-resolver created
# customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev unchanged
# customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev unchanged
# image.caching.internal.knative.dev/queue-proxy created
# configmap/config-autoscaler created
# configmap/config-defaults created
# configmap/config-deployment created
# configmap/config-domain created
# configmap/config-features created
# configmap/config-gc created
# configmap/config-leader-election created
# configmap/config-logging created
# configmap/config-network created
# configmap/config-observability created
# configmap/config-tracing created
# Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
# horizontalpodautoscaler.autoscaling/activator created
# poddisruptionbudget.policy/activator-pdb created
# deployment.apps/activator created
# service/activator-service created
# deployment.apps/autoscaler created
# service/autoscaler created
# deployment.apps/controller created
# service/controller created
# deployment.apps/domain-mapping created
# deployment.apps/domainmapping-webhook created
# service/domainmapping-webhook created
# horizontalpodautoscaler.autoscaling/webhook created
# poddisruptionbudget.policy/webhook-pdb created
# deployment.apps/webhook created
# service/webhook created
# validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created
# mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created
# mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.domainmapping.serving.knative.dev created
# secret/domainmapping-webhook-certs created
# validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.domainmapping.serving.knative.dev created
# validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created
# secret/webhook-certs created


kubectl apply -f https://bit.ly/k4k8s

# namespace/kong created
# customresourcedefinition.apiextensions.k8s.io/kongclusterplugins.configuration.konghq.com created
# customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
# customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
# customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
# customresourcedefinition.apiextensions.k8s.io/tcpingresses.configuration.konghq.com created
# customresourcedefinition.apiextensions.k8s.io/udpingresses.configuration.konghq.com created
# serviceaccount/kong-serviceaccount created
# role.rbac.authorization.k8s.io/kong-leader-election created
# clusterrole.rbac.authorization.k8s.io/kong-ingress created
# rolebinding.rbac.authorization.k8s.io/kong-leader-election created
# clusterrolebinding.rbac.authorization.k8s.io/kong-ingress created
# service/kong-proxy created
# service/kong-validation-webhook created
# deployment.apps/ingress-kong created
# ingressclass.networking.k8s.io/kong created


kubectl patch configmap/config-network \
  --namespace knative-serving \
    --type merge \
      --patch '{"data":{"ingress-class":"kong"}}'

# configmap/config-network patched


kubectl get service kong-proxy -n kong

# NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
# kong-proxy   LoadBalancer   10.110.165.198   10.110.165.198   80:31248/TCP,443:31344/TCP   20h


kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"10.110.165.198.sslip.io":""}}'
  
# configmap/config-domain patched


curl -i http://helloworld-go.default.10.110.165.198.sslip.io/

# HTTP/1.1 404 Not Found
# Date: Sat, 04 Jun 2022 15:00:42 GMT
# Content-Type: application/json; charset=utf-8
# Connection: keep-alive
# Content-Length: 48
# X-Kong-Response-Latency: 0
# Server: kong/2.8.1

# {"message":"no Route matched with those values"}


echo "
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: Go Sample v1
" | kubectl apply -f -

# service.serving.knative.dev/helloworld-go created


kubectl get kservice

# NAME            URL                                                    LATESTCREATED         LATESTREADY           READY     REASON
# helloworld-go   http://helloworld-go.default.10.110.165.198.sslip.io   helloworld-go-00001   helloworld-go-00001   True


curl -i http://helloworld-go.default.10.110.165.198.sslip.io/

# HTTP/1.1 200 OK
# Content-Type: text/plain; charset=utf-8
# Content-Length: 20
# Connection: keep-alive
# Date: Sat, 04 Jun 2022 15:12:20 GMT
# X-Kong-Upstream-Latency: 14
# X-Kong-Proxy-Latency: 1
# Via: kong/2.8.1

# Hello Go Sample v1!


echo "
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: Go Sample v2
" | kubectl apply -f -

# service.serving.knative.dev/helloworld-go configured


kubectl get kservice

# NAME            URL                                                    LATESTCREATED         LATESTREADY           READY     REASON
# helloworld-go   http://helloworld-go.default.10.110.165.198.sslip.io   helloworld-go-00002   helloworld-go-00002   Unknown   IngressNotConfigured


curl -i http://helloworld-go.default.10.110.165.198.sslip.io/

# HTTP/1.1 200 OK
# Content-Type: text/plain; charset=utf-8
# Content-Length: 20
# Connection: keep-alive
# Date: Sat, 04 Jun 2022 15:12:37 GMT
# X-Kong-Upstream-Latency: 10
# X-Kong-Proxy-Latency: 1
# Via: kong/2.8.1

# Hello Go Sample v2!


echo "
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-python
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-python
          env:
            - name: TARGET
              value: python Sample v1
" | kubectl apply -f -

# service.serving.knative.dev/helloworld-python created


kubectl get kservice

# NAME                URL                                                        LATESTCREATED             LATESTREADY               READY     REASON
# helloworld-go       http://helloworld-go.default.10.110.165.198.sslip.io       helloworld-go-00002       helloworld-go-00002       Unknown   IngressNotConfigured
# helloworld-python   http://helloworld-python.default.10.110.165.198.sslip.io   helloworld-python-00001   helloworld-python-00001   True

Kong Ingress Controller version

v2.3

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Anything else?

No response

@Gaspero Gaspero added the bug Something isn't working label Jun 6, 2022
@shaneutt shaneutt added help wanted Extra attention is needed priority/low labels Jul 26, 2022
@shaneutt
Contributor

Hi @Gaspero thank you for your detailed, in-depth report.

I want to start with providing some context I feel is highly relevant to the issue: Knative support in the ingress controller is currently behind a feature gate and we consider the integration at an alpha level of maturity. Our integration with Knative has lagged behind upstream because we (the maintainers) have had little to no feedback or usage of the integration by end-users and so consequently we have had no impetus to further its development. We have in fact been considering removing the feature entirely (as opposed to promoting it to a full and supported feature as per the feature gates documentation) due to lack of interest.

I provide that context to set expectations: we consider this a low-priority issue for the reasons stated above, meaning there's a lot of other work that needs to happen first. In the meantime, however, we're open to community users coming forward to help champion this integration, so let us know if that's something you'd be interested in. In general I'd like to know whether the above makes sense to you, and I'd like to hear your thoughts and concerns on the matter. In particular, if you could help us better understand your use case and needs for this integration, that would be helpful, as we've had almost no such engagement for over a year.
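For readers who haven't enabled the integration yet: the gate is toggled on the controller itself. A sketch, assuming the KIC 2.x all-in-one manifest (the deployment, container, and variable names below come from those defaults and may differ in your deployment):

```shell
# Enable the alpha Knative feature gate on the ingress-controller container.
# CONTROLLER_FEATURE_GATES maps to the controller's --feature-gates flag.
kubectl set env deployment/ingress-kong -n kong \
  -c ingress-controller CONTROLLER_FEATURE_GATES="Knative=true"
```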

@Gaspero
Author

Gaspero commented Aug 1, 2022

Hello @shaneutt

Thank you for providing the context and some details regarding the current status of the KIC+Knative integration.

I will be happy to share my thoughts and give you feedback on how KIC+Knative suited my needs.

To start with, I think it is important to mention that I am not a professional software engineer; I'm more of a hobby developer.
The project I'm currently working on is a multi-instance SaaS based on an existing OSS project (which originally used docker-compose and Kong with a declarative config). Using KIC with a declarative config therefore seemed like the optimal choice to minimize transition overhead. The main reason Knative is used in my setup is to lower infrastructure costs during trial periods.
The general idea was to create a Helm chart with all of an instance's microservices, where each microservice lives in a dedicated instance namespace, and kong-proxy plus plugins in the kong namespace are responsible for serving the APIs outside the cluster. Please see the diagram; I hope it is clearer than a text explanation.

One of the main reasons to choose KIC was Kong's rich plugin ecosystem. Also, when I was researching alternative networking layers for Knative, they seemed a little less user-friendly and harder to configure, and they lacked documentation compared to Kong. As of today, I guess I overestimated the need for the ACL and JWT Auth plugins in my particular use case, as their functionality could be integrated into the microservices with a little customization.

I would also like to point out that Viktor Gamov's YouTube tutorials helped a lot in understanding the basics and strengths of the KIC+Knative integration.

@shaneutt
Contributor

shaneutt commented Aug 2, 2022

Thanks for that update @Gaspero; it's good to know about your active use case, as this helps us make more informed decisions. Previously I asked whether this is something you'd want to take on personally; I just wanted to check in on that again before we put it on our board.

@stale

stale bot commented Aug 10, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@yairyairyair

Can someone take a look at this? I really want to use Kong with Knative because of its plugin system, but I can't.

@yairyairyair

Apparently it worked with older versions of Knative and the Kong Ingress Controller.

@dark-m0de

I can see that the feature gate is still in alpha and there has been no activity here for a year (although the feature has been supported since 0.8.0). Is it then fair to assume that Knative is not supported by Kong?

That would be a pity, but I don't see the point in claiming to support something while not fixing issues at all.

@pmalek
Member

pmalek commented Sep 11, 2023

Hi @dark-m0de 👋

Thanks for your comment.

The reason this is still marked as alpha is that we are seeing relatively low usage of this feature, and hence we are allocating our resources accordingly (in comparison to other KIC functionalities and integrations). In our eyes this reflects our level of support more accurately than giving it a GA stamp and claiming full support. It also reflects the maturity of the feature in KIC. It doesn't matter that it was introduced in 0.8.0, which was released a long time ago, if we explicitly state that it's still alpha by our standards.

The issue is open for all users to see, and if there's a brave soul out there, they're welcome to contribute to the project by fixing this before we ever get to it.


Given the above and @shaneutt's comment, we're still thinking about the support of Knative in KIC, but our current reasoning has brought us to the point where we want to reconsider supporting Knative at all (related: #2813).

@dark-m0de

OK, thank you. According to this comment in #2813, it is clear that Knative support will be deprecated.

It's a pity. However, at least we have certainty and can find another way to use Knative.
