
Running CronJobSource Problem "53: no such host" #1973

Closed
richard2006 opened this issue Sep 29, 2019 · 34 comments
Labels
kind/bug, kind/doc, priority/critical-urgent

Comments

@richard2006
Contributor

richard2006 commented Sep 29, 2019

Describe the bug
With Knative Eventing v0.9.0, I created a CronJobSource but got this error:

{"level":"error","ts":1569760380.0051033,"logger":"fallback","caller":"cronjobevents/adapter.go:113","msg":"failed to send cloudevent{error 25 0  Post http://event-display.default.svc.cluster.local: dial tcp: lookup event-display.default.svc.cluster.local on 172.21.0.10:53: no such host}","stacktrace":"knative.dev/eventing/pkg/adapter/cronjobevents.(*Adapter).cronTick\n\t/home/prow/go/src/knative.dev/eventing/pkg/adapter/cronjobevents/adapter.go:113\nknative.dev/eventing/vendor/github.com/robfig/cron.FuncJob.Run\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:92\nknative.dev/eventing/vendor/github.com/robfig/cron.(*Cron).runWithRecovery\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:165"}

Expected behavior
The message should be sent to the event-display service.

To Reproduce
Create the CronJobSource:

apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "sync"}'
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display

And the event-display:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: event-display
  namespace: default
spec:
  template:
    spec:
      containers:
      - # This corresponds to
        # https://github.com/knative/eventing-contrib/blob/release-0.5/cmd/event_display/main.go
        image: gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/event_display@sha256:bf45b3eb1e7fc4cb63d6a5a6416cf696295484a7662e0cf9ccdf5c080542c21d

Error Output:

$ kubectl logs cronjobsource-test-cronjob-9314365a-e2b0-11e9-a78e-2ef5695pjjbx

{"level":"error","ts":1569760620.0037365,"logger":"fallback","caller":"cronjobevents/adapter.go:113","msg":"failed to send cloudevent{error 25 0  Post http://event-display.default.svc.cluster.local: dial tcp: lookup event-display.default.svc.cluster.local on 172.21.0.10:53: no such host}","stacktrace":"knative.dev/eventing/pkg/adapter/cronjobevents.(*Adapter).cronTick\n\t/home/prow/go/src/knative.dev/eventing/pkg/adapter/cronjobevents/adapter.go:113\nknative.dev/eventing/vendor/github.com/robfig/cron.FuncJob.Run\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:92\nknative.dev/eventing/vendor/github.com/robfig/cron.(*Cron).runWithRecovery\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:165"}
{"level":"error","ts":1569760680.0039418,"logger":"fallback","caller":"cronjobevents/adapter.go:113","msg":"failed to send cloudevent{error 25 0  Post http://event-display.default.svc.cluster.local: dial tcp: lookup event-display.default.svc.cluster.local on 172.21.0.10:53: no such host}","stacktrace":"knative.dev/eventing/pkg/adapter/cronjobevents.(*Adapter).cronTick\n\t/home/prow/go/src/knative.dev/eventing/pkg/adapter/cronjobevents/adapter.go:113\nknative.dev/eventing/vendor/github.com/robfig/cron.FuncJob.Run\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:92\nknative.dev/eventing/vendor/github.com/robfig/cron.(*Cron).runWithRecovery\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:165"}
{"level":"error","ts":1569760740.0035992,"logger":"fallback","caller":"cronjobevents/adapter.go:113","msg":"failed to send cloudevent{error 25 0  Post http://event-display.default.svc.cluster.local: dial tcp: lookup event-display.default.svc.cluster.local on 172.21.0.10:53: no such host}","stacktrace":"knative.dev/eventing/pkg/adapter/cronjobevents.(*Adapter).cronTick\n\t/home/prow/go/src/knative.dev/eventing/pkg/adapter/cronjobevents/adapter.go:113\nknative.dev/eventing/vendor/github.com/robfig/cron.FuncJob.Run\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:92\nknative.dev/eventing/vendor/github.com/robfig/cron.(*Cron).runWithRecovery\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:165"}
{"level":"error","ts":1569760800.0040033,"logger":"fallback","caller":"cronjobevents/adapter.go:113","msg":"failed to send cloudevent{error 25 0  Post http://event-display.default.svc.cluster.local: dial tcp: lookup event-display.default.svc.cluster.local on 172.21.0.10:53: no such host}","stacktrace":"knative.dev/eventing/pkg/adapter/cronjobevents.(*Adapter).cronTick\n\t/home/prow/go/src/knative.dev/eventing/pkg/adapter/cronjobevents/adapter.go:113\nknative.dev/eventing/vendor/github.com/robfig/cron.FuncJob.Run\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:92\nknative.dev/eventing/vendor/github.com/robfig/cron.(*Cron).runWithRecovery\n\t/home/prow/go/src/knative.dev/eventing/vendor/github.com/robfig/cron/cron.go:165"}

Knative release version
v0.9.0

Additional context
event-display service is running normally:

[root@iZ8vb67td1x5m1dnct3ki3Z ~]# kubectl get ksvc
NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON
event-display   http://event-display.default.example.com   event-display-qjxbr   event-display-qjxbr   True
richard2006 added the kind/bug label Sep 29, 2019
@lionelvillard
Member

Have you tried invoking event-display directly, e.g. by using curl? See https://knative.dev/development/serving/samples/hello-world/helloworld-go/index.html for instructions.
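For example, something along these lines, where the ingress IP is a placeholder you would look up per that doc:

curl -H "Host: event-display.default.example.com" http://<INGRESS_IP>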

@jamesward

Similar error for me:

kubectl logs cronjobsource-cronjob-spri-403c53cd-e460-11e9-81ad-42010a8nzfnc
{"level":"info","ts":"2019-10-01T15:29:30.295Z","caller":"logging/config.go:102","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n  \"level\": \"info\",\n  \"development\": false,\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"ts\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"iso8601\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}\n"}
{"level":"info","ts":"2019-10-01T15:29:30.295Z","caller":"logging/config.go:103","msg":"Logging level set to info"}
{"level":"info","ts":"2019-10-01T15:29:30.296Z","caller":"logging/config.go:71","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","ts":"2019-10-01T15:29:30.296Z","logger":"cronjobsource","caller":"metrics/config.go:235","msg":"Flushing the existing exporter before setting up the new exporter."}
{"level":"info","ts":"2019-10-01T15:29:30.296Z","logger":"cronjobsource","caller":"metrics/prometheus_exporter.go:37","msg":"Created Opencensus Prometheus exporter with config: &{knative.dev/sources cronjobsource prometheus 5000000000 9090  false false  }. Start the server for Prometheus exporter."}
{"level":"info","ts":"2019-10-01T15:29:30.296Z","logger":"cronjobsource","caller":"metrics/config.go:244","msg":"Successfully updated the metrics exporter; old config: <nil>; new config &{knative.dev/sources cronjobsource prometheus 5000000000 9090  false false  }"}
{"level":"error","ts":"2019-10-01T15:29:30.412Z","logger":"cronjobsource","caller":"tracing/opencensus.go:146","msg":"error building zipkin endpoint","error":"lookup cronjobsource on 10.0.0.10:53: no such host","stacktrace":"knative.dev/eventing/vendor/knative.dev/pkg/tracing.WithExporter.func1\n\t/home/prow/go/src/knative.dev/eventing/vendor/knative.dev/pkg/tracing/opencensus.go:146\nknative.dev/eventing/vendor/knative.dev/pkg/tracing.(*OpenCensusTracer).ApplyConfig\n\t/home/prow/go/src/knative.dev/eventing/vendor/knative.dev/pkg/tracing/opencensus.go:58\nknative.dev/eventing/pkg/tracing.SetupStaticPublishing\n\t/home/prow/go/src/knative.dev/eventing/pkg/tracing/setup.go:62\nmain.main\n\t/home/prow/go/src/knative.dev/eventing/cmd/cronjob_receive_adapter/main.go:107\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
{"level":"error","ts":"2019-10-01T15:29:30.412Z","logger":"cronjobsource","caller":"cronjob_receive_adapter/main.go:110","msg":"Error setting up trace publishing","error":"unable to set OpenCensusTracing config: lookup cronjobsource on 10.0.0.10:53: no such host","stacktrace":"main.main\n\t/home/prow/go/src/knative.dev/eventing/cmd/cronjob_receive_adapter/main.go:110\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
{"level":"info","ts":"2019-10-01T15:29:30.412Z","logger":"cronjobsource","caller":"cronjob_receive_adapter/main.go:122","msg":"Starting Receive Adapter","adapter":{"Schedule":"* * * * *","Data":"{\"message\": \"hello, world\"}","SinkURI":"http://default-broker.default.svc.cluster.local","Name":"cronjob-spring-upper","Namespace":"default","Reporter":{}}}

@jamesward

I'm getting similar errors with any event consumer that is a Knative Service. Switching to a Deployment + k8s Service gets things working, but it's not ideal.
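A minimal sketch of that workaround, assuming the same event-display image and that the container listens on port 8080 (names and ports here are illustrative, not from this thread):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-display
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-display
  template:
    metadata:
      labels:
        app: event-display
    spec:
      containers:
      - name: event-display
        # Same image as the Knative Service variant elsewhere in this thread
        image: gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/event_display@sha256:bf45b3eb1e7fc4cb63d6a5a6416cf696295484a7662e0cf9ccdf5c080542c21d
        ports:
        - containerPort: 8080
---
# A plain k8s Service, so the sink URI resolves via cluster DNS without
# going through the Knative/Istio gateways at all.
apiVersion: v1
kind: Service
metadata:
  name: event-display
  namespace: default
spec:
  selector:
    app: event-display
  ports:
  - port: 80
    targetPort: 8080

The CronJobSource sink would then reference apiVersion: v1, kind: Service rather than the serving.knative.dev one.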

vaikas added the priority/critical-urgent label Oct 1, 2019
vaikas self-assigned this Oct 1, 2019
@meteatamel

I also ran into the same issue with my HelloWorld step in knative-tutorial. I was told that I need to update Istio to use the cluster local gateway, as explained here. I haven't tried this yet.

@vaikas
Contributor

vaikas commented Oct 1, 2019

@tcnghia Are there any known issues? Is this something that should work with the 0.9 comprehensive install, or do you need to do something else to get this working? I'd expect that the comprehensive version of the install would support this out of the box?
Is there some step missing from our documentation? I'm just curious if this is indeed an eventing problem (looks like it might not be?), but it seems to be an issue with one of:

  1. installation
  2. istio misconfigured
  3. me?

@lionelvillard
Member

or maybe this: https://knative.dev/development/install/installing-istio/ section Updating your install to use cluster local gateway

@richard2006
Contributor Author

Have you tried invoking event-display directly, e.g. by using curl? See https://knative.dev/development/serving/samples/hello-world/helloworld-go/index.html for instructions.

Yes, I have tried invoking event-display directly, and it seems to work fine.

[root@iZ2zeae8wzyq0ypgjowzq2Z ~]# curl -H "host:event-display.default.example.com" http://39.106.232.122
{"error":"unknown encoding for message &{map[Accept:[*/*] Accept-Encoding:[gzip] Forwarded:[for=172.20.0.1;proto=http, for=127.0.0.1] K-Proxy-Request:[activator] User-Agent:[curl/7.29.0] X-Forwarded-For:[172.20.0.1, 127.0.0.1, 172.20.2.55] X-Forwarded-Proto:[http] X-Istio-Attributes:[Ck8KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEjQSMmlzdGlvOi8vZGVmYXVsdC9zZXJ2aWNlcy9ldmVudC1kaXNwbGF5LWsyY21wLWY5ZndoClEKGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBI1EjNldmVudC1kaXNwbGF5LWsyY21wLWY5ZndoLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwKNwoYZGVzdGluYXRpb24uc2VydmljZS5uYW1lEhsSGWV2ZW50LWRpc3BsYXktazJjbXAtZjlmd2gKKgodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USCRIHZGVmYXVsdApHCgpzb3VyY2UudWlkEjkSN2t1YmVybmV0ZXM6Ly9hY3RpdmF0b3ItODZjY2M4NmM5Ny1odGxjcC5rbmF0aXZlLXNlcnZpbmc=] X-Request-Id:[e33e0cfa-f18b-48a0-8590-31254bbeaec0]] []}"}[root@iZ2zeae8wzyq0ypgjowzq2Z ~]# kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
cronjobsource-test-cronjob-4ef4fd29-e299-11e9-aad4-569953f7gz46   1/1     Running   0          3d20h
event-display-k2cmp-deployment-7d654d4545-k7kk5                   2/2     Running   0          18s
kafka-default-0                                                   1/1     Running   0          4d4h
kafka-source-bxk22-85c7478db5-kh5lp                               1/1     Running   0          3d20h
nginx-577596968c-g5kxc                                            1/1     Running   0          15d
nginx-577596968c-x45hk                                            1/1     Running   0          15d
zookeeper-default-0                                               1/1     Running   0          4d4h

@richard2006
Contributor Author

richard2006 commented Oct 3, 2019

@tcnghia Are there any known issues? Is this something that should work with the 0.9 comprehensive install, or do you need to do something else to get this working? I'd expect that the comprehensive version of the install would support this out of the box?
Is there some step missing from our documentation? I'm just curious if this is indeed an eventing problem (looks like it might not be?), but it seems to be an issue with one of:

  1. installation
  2. istio misconfigured
  3. me?

Hi @vaikas-google, I think you're right, it's not an eventing problem. I found that the VirtualService changed between v0.8.0 and v0.9.0.
With 0.9.0, the event-display VirtualService is:

  http:
  - match:
    - authority:
        regex: ^event-display\.default\.example\.com(?::\d{1,5})?$
      gateways:
      - knative-serving/knative-ingress-gateway
    - authority:
        regex: ^event-display\.default(\.svc(\.cluster\.local)?)?(?::\d{1,5})?$
      gateways:
      - knative-serving/cluster-local-gateway

With 0.8.0, it was:

  http:
  - match:
    - authority:
        regex: ^event-display\.default\.example\.com(?::\d{1,5})?$
      gateways:
      - knative-serving/knative-ingress-gateway
    - authority:
        regex: ^event-display\.default(\.svc(\.cluster\.local)?)?(?::\d{1,5})?$
      gateways:
      - knative-serving/knative-ingress-gateway
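For reference, one way to pull up these rendered routes (a sketch; the generated VirtualService name varies, so list first):

kubectl get virtualservice -n default
kubectl get virtualservice <generated-name> -n default -o yaml

and look at the gateways: entries under spec.http.match.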

@lionelvillard
Member

Have you upgraded to serving 0.9.0? In that case you also need to install the istio cluster-local-gateway.
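For example, applying the extras manifest matching your Istio version (the version segment is a placeholder; concrete URLs appear later in this thread):

kubectl apply -f https://raw.githubusercontent.com/knative/serving/master/third_party/istio-<VERSION>/istio-knative-extras.yaml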

@vaikas
Contributor

vaikas commented Oct 3, 2019

Thanks all for the pointers, yes, the current instructions that I had been following for the repro omitted the installation of the cluster-local-gateway.

@vaikas
Contributor

vaikas commented Oct 4, 2019

And a little more: I had been using the GKE Istio add-on, and trying to install the cluster-local-gateway appeared to be problematic, so mayhaps we need to update the installation instructions for that.
@lionelvillard did you install Istio manually yourself? Or did Istio come with your cluster? Just trying to figure out what all might need to get updated in our docs.
@richard2006 which platform are you running on, btw?

@lionelvillard
Member

I installed istio manually and added the cluster-local-gateway manually, following the knative.dev doc.

@richard2006
Contributor Author

Have you upgraded to serving 0.9.0? In that case you also need to install the istio cluster-local-gateway.

Hi @lionelvillard, yes, I have upgraded to serving 0.9.0. And I have seen the knative.dev doc about the cluster-local-gateway, but it seems to require disabling the ingressgateway, and I still need the ingressgateway.

@richard2006
Contributor Author

And a little more: I had been using the GKE Istio add-on, and trying to install the cluster-local-gateway appeared to be problematic, so mayhaps we need to update the installation instructions for that.
@lionelvillard did you install Istio manually yourself? Or did Istio come with your cluster? Just trying to figure out what all might need to get updated in our docs.
@richard2006 which platform are you running on, btw?

Hi @vaikas-google, I use Knative on Alibaba Cloud. It has the ingressgateway now, but it seems that I still need to install the cluster-local-gateway manually in Istio after upgrading to v0.9.0.

[root@iZ2zeae8wzyq0ypgjowzq2Z ~]# kubectl get pod -n istio-system -l istio=ingressgateway -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
istio-ingressgateway-dd766479f-lwbkg   1/1     Running   0          24d   172.20.0.6   cn-beijing.192.168.0.77   <none>           <none>
[root@iZ2zeae8wzyq0ypgjowzq2Z ~]# kubectl get pod -n istio-system -l istio=cluster-local-gateway -o wide
No resources found.

@meteatamel

@vaikas-google I also used Istio add-on on GKE. Is it straightforward to add cluster-local-gateway onto that?

@vaikas
Contributor

vaikas commented Oct 8, 2019

Sorry for the tardy reply... work got in the way of things ;)

Yes, so the workaround is to install the correct version from the serving stack. So, if you create a GKE cluster using the Istio add-on, you then need to install this (mine is 1.1.13).

https://github.com/knative/serving/blob/master/third_party/istio-1.1.15/istio-knative-extras.yaml

You can figure out which version of Istio you have by looking at the image tag, something like:
vaikas@penguin:~/crondebug$ kubectl -n istio-system get pods istio-ingressgateway-f659695c4-dmnxs -oyaml | grep image
image: gke.gcr.io/istio/proxyv2:1.1.13-gke.0
imagePullPolicy: IfNotPresent
image: gke.gcr.io/istio/proxyv2:1.1.13-gke.0
imageID: docker-pullable://gke.gcr.io/istio/proxyv2@sha256:829a78105b088e931e3644753f6460b9eab262af665127d2016d639f04dd1834

So mine is 1.1.13. We don't have that specific version, but the istio-1.1.15 extras work fine.

We're going to fix this in the install instructions themselves, but I have verified that if you kubectl create -f the file above, it then works.

Just a quick update for now to hopefully unblock folks.
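In raw-URL form, that's (assuming the istio-1.1.15 directory still exists under third_party; a later comment notes it was removed at some point):

kubectl create -f https://raw.githubusercontent.com/knative/serving/master/third_party/istio-1.1.15/istio-knative-extras.yaml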

@vaikas
Contributor

vaikas commented Oct 8, 2019

knative/docs#1842

@richard2006
Contributor Author

richard2006 commented Oct 9, 2019

@vaikas-google Thanks. I referred to https://github.com/knative/serving/blob/master/third_party/istio-1.2.6/istio-knative-extras.yaml to install the cluster-local-gateway, and it works now.

@rtmvc

rtmvc commented Oct 9, 2019

Still no luck on my side even with a default namespace broker, either with Istio add-on or a manually installed Istio (1.3.1) with extras applied (that is, cluster-local-gateway created).
Here is what I do to create and setup my cluster: https://pastebin.com/jLWvg4s8
I can't manage to find out why it's not working...

@vaikas
Contributor

vaikas commented Oct 9, 2019

@ratamovic
I can repro what you see in the logs, however in my case it's non-fatal. My belief is that this is because istio hasn't come up fully yet, so our container can't reach the API server. My logs show something like this:
E1009 18:34:40.411161 1 reflector.go:125] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ConfigMap: Get https://10.63.240.1:443/api/v1/namespaces/knative-eventing/configmaps?limit=500&resourceVersion=0: dial tcp 10.63.240.1:443: connect: connection refused
E1009 18:34:41.413245 1 reflector.go:125] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ConfigMap: Get https://10.63.240.1:443/api/v1/namespaces/knative-eventing/configmaps?limit=500&resourceVersion=0: dial tcp 10.63.240.1:443: connect: connection refused
{"level":"info","ts":"2019-10-09T18:34:42.423Z","logger":"provisioner","caller":"configmap/store.go:169","msg":"tracing-config config "config-tracing" config was added or updated: &config.Config{Backend:"none", ZipkinEndpoint:"", StackdriverProjectID:"", Debug:false, SampleRate:0.1}","commit":"a1a0981"}
{"level":"info","ts":"2019-10-09T18:34:42.424Z","logger":"provisioner","caller":"metrics/config.go:235","msg":"Flushing the existing exporter before setting up the new exporter.","commit":"a1a0981"}
{"level":"info","ts":"2019-10-09T18:34:42.424Z","logger":"provisioner","caller":"metrics/prometheus_exporter.go:37","msg":"Created Opencensus Prometheus exporter with config: &{knative.dev/eventing broker_ingress prometheus 5000000000 9090 false false }. Start the server for Prometheus exporter.","commit":"a1a0981"}
{"level":"info","ts":"2019-10-09T18:34:42.424Z","logger":"provisioner","caller":"metrics/config.go:244","msg":"Successfully updated the metrics exporter; old config: ; new config &{knative.dev/eventing broker_ingress prometheus 5000000000 9090 false false }","commit":"a1a0981"}
{"level":"info","ts":"2019-10-09T18:34:42.499Z","logger":"provisioner","caller":"ingress/main.go:140","msg":"Starting informers.","commit":"a1a0981"}

Is there anything in your ingress after the log entries you have added to the pastebin?

I created the namespace exactly as you did and then created the following objects and it's working fine for me.

apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob
  namespace: my-namespace
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "sync"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Broker
    name: default
---
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: event-display
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - # This corresponds to
        # https://github.com/knative/eventing-contrib/blob/release-0.5/cmd/event_display/main.go
        image: gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/event_display@sha256:bf45b3eb1e7fc4cb63d6a5a6416cf696295484a7662e0cf9ccdf5c080542c21d
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: testevents-trigger
  namespace: my-namespace
spec:
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: event-display

And in the event-display logs, I see the following:
vaikas@penguin:~/projects/go/src/knative.dev/eventing$ kubectl -n my-namespace logs event-display-fsk4z-deployment-69cb44dc7c-k4gw2 user-container

☁️ CloudEvent: valid ✅
Context Attributes,
  SpecVersion: 0.3
  Type: dev.knative.cronjob.event
  Source: /apis/v1/namespaces/my-namespace/cronjobsources/test-cronjob
  ID: d46bd6e9-7431-4bc6-9844-01424f6aa824
  Time: 2019-10-09T21:24:00.000559648Z
  DataContentType: application/json
  Extensions:
    knativehistory: default-kn2-trigger-kn-channel.my-namespace.svc.cluster.local
    traceparent: 00-a08689bb7c2f68a56fea99e6a94fe02f-9d74f34b74fc2631-00
    knativearrivaltime: 2019-10-09T21:24:00Z
Transport Context,
  URI: /
  Host: event-display.my-namespace.svc.cluster.local
  Method: POST
Data,
  {
    "message": "sync"
  }

What is the error message you're observing from the Source that's trying to publish to the Broker? Or if that succeeds, are there any other errors in the pods logs besides the two lines you added to pastebin?
Thanks!

@richard2006
Contributor Author

richard2006 commented Oct 10, 2019

Still no luck on my side even with a default namespace broker, either with Istio add-on or a manually installed Istio (1.3.1) with extras applied (that is, cluster-local-gateway created).
Here is what I do to create and setup my cluster: https://pastebin.com/jLWvg4s8
I can't manage to find out why it's not working...

Hi @ratamovic, you can refer 'https://github.com/knative/docs/tree/master/docs/eventing/debugging' to find the problem.

@rtmvc

rtmvc commented Oct 10, 2019

Hi. Thanks @vaikas-google @richard2006. Indeed, the "extras" yaml works.
I was misled by the non-fatal error message. My container source now correctly sends messages through the broker (they are received by a sequence) and doesn't receive a 404 anymore.
So all is good now 👍

@vaikas
Contributor

vaikas commented Oct 10, 2019

@ratamovic Fantastic to hear, sorry for the troubles 😢

@vaikas
Contributor

vaikas commented Oct 10, 2019

@meteatamel were you able to get unblocked with the extras installed?

@meteatamel

I haven't tried yet, need to keep my cluster for talks I'm giving this week but I will report back soon.

@vaikas
Contributor

vaikas commented Oct 10, 2019 via email

@meteatamel

meteatamel commented Oct 10, 2019

@vaikas-google Looks like istio-1.1.15 was removed from https://github.com/knative/serving/blob/master/third_party/. The closest one that exists in that folder is istio-1.2.7, but how can I ensure that it will work with the 1.1.13 I have with the GKE add-on?

As a bigger point, is this going to be the developer experience for installing Knative eventing with Knative Service sinks on GKE? The current approach is totally broken IMO. Do we have a bug capturing this somewhere? => Nevermind, I saw issue 1842

@vaikas
Contributor

vaikas commented Oct 10, 2019

@meteatamel You're 💯 % correct that this experience is totally b0rk3d :) We're working on updating the docs as well as finding a way to cherrypick so the docs make sense. Oh snap on the version of the Istio, it was there just the other day.
There seems to be some signs that 1.2.7 worked:
@richard2006 @ratamovic which version did you use?
I will carve time to try this so we can make sure docs are verified, but if you have done the work already, saves me time :)

@ovkhasch

I had this issue with istio 1.3.2 and running this command fixed the eventing (and cronjob):
kubectl apply -f https://raw.githubusercontent.com/knative/serving/master/third_party/istio-1.3.2/istio-knative-extras.yaml

@richard2006
Contributor Author

@meteatamel You're 💯 % correct that this experience is totally b0rk3d :) We're working on updating the docs as well as finding a way to cherrypick so the docs make sense. Oh snap on the version of the Istio, it was there just the other day.
There seems to be some signs that 1.2.7 worked:
@richard2006 @ratamovic which version did you use?
I will carve time to try this so we can make sure docs are verified, but if you have done the work already, saves me time :)

Hi @vaikas-google, I used 1.2.6, and it works fine.

---
# Source: istio/charts/gateways/templates/serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-local-gateway-service-account
  namespace: istio-system
  labels:
    app: cluster-local-gateway
    chart: gateways
    heritage: Tiller
    release: release-name
---


---
# Source: istio/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-multi
  namespace: istio-system

---
# Source: istio/templates/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: istio-reader
rules:
  - apiGroups: ['']
    resources: ['nodes', 'pods', 'services', 'endpoints', "replicationcontrollers"]
    verbs: ['get', 'watch', 'list']
  - apiGroups: ["extensions", "apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]

---
# Source: istio/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-multi
  labels:
    chart: istio-1.2.6
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-reader
subjects:
- kind: ServiceAccount
  name: istio-multi
  namespace: istio-system

---
# Source: istio/charts/gateways/templates/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: cluster-local-gateway
  namespace: istio-system
  annotations:
  labels:
    chart: gateways
    heritage: Tiller
    release: release-name
    app: cluster-local-gateway
    istio: cluster-local-gateway
spec:
  type: ClusterIP
  selector:
    release: release-name
    app: cluster-local-gateway
    istio: cluster-local-gateway
  ports:
    -
      name: status-port
      port: 15020
    -
      name: http2
      port: 80
    -
      name: https
      port: 443
---

---
# Source: istio/charts/gateways/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-local-gateway
  namespace: istio-system
  labels:
    chart: gateways
    heritage: Tiller
    release: release-name
    app: cluster-local-gateway
    istio: cluster-local-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-local-gateway
      istio: cluster-local-gateway
  template:
    metadata:
      labels:
        chart: gateways
        heritage: Tiller
        release: release-name
        app: cluster-local-gateway
        istio: cluster-local-gateway
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      serviceAccountName: cluster-local-gateway-service-account
      containers:
        - name: istio-proxy
          image: "docker.io/istio/proxyv2:1.2.6"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 15020
            - containerPort: 80
            - containerPort: 443
            - containerPort: 15090
              protocol: TCP
              name: http-envoy-prom
          args:
          - proxy
          - router
          - --domain
          - $(POD_NAMESPACE).svc.cluster.local
          - --log_output_level=default:info
          - --drainDuration
          - '45s' #drainDuration
          - --parentShutdownDuration
          - '1m0s' #parentShutdownDuration
          - --connectTimeout
          - '10s' #connectTimeout
          - --serviceCluster
          - cluster-local-gateway
          - --zipkinAddress
          - zipkin:9411
          - --proxyAdminPort
          - "15000"
          - --statusPort
          - "15020"
          - --controlPlaneAuthPolicy
          - NONE
          - --discoveryAddress
          - istio-pilot:15010
          readinessProbe:
            failureThreshold: 30
            httpGet:
              path: /healthz/ready
              port: 15020
              scheme: HTTP
            initialDelaySeconds: 1
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 10m

          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: INSTANCE_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: HOST_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.hostIP
          - name: ISTIO_META_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: ISTIO_META_CONFIG_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          volumeMounts:
          - name: istio-certs
            mountPath: /etc/certs
            readOnly: true
          - name: cluster-local-gateway-certs
            mountPath: "/etc/istio/cluster-local-gateway-certs"
            readOnly: true
          - name: cluster-local-gateway-ca-certs
            mountPath: "/etc/istio/cluster-local-gateway-ca-certs"
            readOnly: true
      volumes:
      - name: istio-certs
        secret:
          secretName: istio.cluster-local-gateway-service-account
          optional: true
      - name: cluster-local-gateway-certs
        secret:
          secretName: "istio-cluster-local-gateway-certs"
          optional: true
      - name: cluster-local-gateway-ca-certs
        secret:
          secretName: "istio-cluster-local-gateway-ca-certs"
          optional: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
                - ppc64le
                - s390x
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 2
            preference:
              matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
          - weight: 2
            preference:
              matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - ppc64le
          - weight: 2
            preference:
              matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - s390x
---

---
# Source: istio/charts/gateways/templates/autoscale.yaml


---
# Source: istio/charts/gateways/templates/poddisruptionbudget.yaml


---
# Source: istio/charts/gateways/templates/preconfigured.yaml


---
# Source: istio/charts/gateways/templates/role.yaml


---
# Source: istio/charts/gateways/templates/rolebindings.yaml


---
# Source: istio/charts/mixer/templates/autoscale.yaml


---
# Source: istio/charts/mixer/templates/clusterrole.yaml


---
# Source: istio/charts/mixer/templates/clusterrolebinding.yaml


---
# Source: istio/charts/mixer/templates/config.yaml


---
# Source: istio/charts/mixer/templates/deployment.yaml


---
# Source: istio/charts/mixer/templates/poddisruptionbudget.yaml


---
# Source: istio/charts/mixer/templates/service.yaml



---
# Source: istio/charts/mixer/templates/serviceaccount.yaml


---
# Source: istio/templates/configmap.yaml


---
# Source: istio/templates/endpoints.yaml


---
# Source: istio/templates/install-custom-resources.sh.tpl


---
# Source: istio/templates/service.yaml


---
# Source: istio/templates/sidecar-injector-configmap.yaml



@meteatamel

I just verified that Istio GKE add-on version 1.1.13 works fine against https://raw.githubusercontent.com/knative/serving/master/third_party/istio-1.2.7/istio-knative-extras.yaml and I was able to get Knative Services as eventing sinks in my samples.

My Hello World Eventing sample shows all the steps needed to get Knative Eventing set up for consuming GCP Pub/Sub messages now.

@vaikas
Contributor

vaikas commented Oct 14, 2019 via email

@vaikas
Contributor

vaikas commented Oct 17, 2019

I'm closing this, as the documentation has been updated with GKE-specific instructions for installing the cluster-local-gateway, and it's been cherry-picked into the 0.9 docs.

vaikas closed this as completed Oct 17, 2019
@richard2006
Contributor Author

I'm closing this, as the documentation has been updated with GKE-specific instructions for installing the cluster-local-gateway, and it's been cherry-picked into the 0.9 docs.

@vaikas-google Good.
