Therefore, all future runs try to connect with the empty BACKEND_ID and fail. I think we need some validation that the generated configmap is sane before saving it.
Example error log:
kubectl logs proxy-agent-7fdfbddd88-64vhf
+++ dirname /opt/proxy/attempt-register-vm-on-proxy.sh
++ cd /opt/proxy
++ pwd
+ DIR=/opt/proxy
+ kubectl get configmap inverse-proxy-config
NAME DATA AGE
inverse-proxy-config 3 26m
++ jq -r .data.ProxyUrl
++ kubectl get configmap inverse-proxy-config -o json
+ PROXY_URL=https://datalab-us-east1.cloud.google.com/tun/m/4592f092208ecc84946b8f8f8016274df1b36a14
++ kubectl get configmap inverse-proxy-config -o json
++ jq -r .data.BackendId
+ BACKEND_ID=
+ run-proxy-agent
+ /opt/bin/proxy-forwarding-agent --debug=false --proxy=https://datalab-us-east1.cloud.google.com/tun/m/4592f092208ecc84946b8f8f8016274df1b36a14 --proxy-timeout=60s --backend= --host=10.39.243.16:80 --shim-websockets=true --shim-path=websocket-shim --health-check-path=/ --health-check-interval-seconds=0 --health-check-unhealthy-threshold=2
2019/11/22 07:48:59 You must specify a backend ID
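The log above shows the agent reading an empty `BackendId` out of the configmap and then failing at startup. A read-side guard before launching the agent could look like the sketch below; the function name is hypothetical and not part of the actual attempt-register-vm-on-proxy.sh script. Note that `jq -r` prints the literal string `null` when the key is missing, so that case is rejected too.

```shell
#!/bin/bash
# Hypothetical read-side guard: refuse to start the proxy agent when the
# BackendId read from the configmap is empty or missing.
validate_backend_id() {
  local backend_id="$1"
  # jq -r emits "null" for an absent key, so treat that like an empty value.
  if [[ -z "$backend_id" || "$backend_id" == "null" ]]; then
    echo "BackendId is empty; refusing to start proxy agent" >&2
    return 1
  fi
  return 0
}

# In the pod, usage would be roughly:
#   BACKEND_ID=$(kubectl get configmap inverse-proxy-config -o json | jq -r .data.BackendId)
#   validate_backend_id "$BACKEND_ID" || exit 1
```

Failing fast here would at least surface a clear error instead of the agent crash-looping on `--backend=`.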
Checked some recently merged PRs, like #2696 and #2743; proxy-agent is no longer in a CrashLoopBackOff state when the test finishes.
/close
What happened:
In presubmit e2e tests, the inverse proxy occasionally gets stuck in CrashLoopBackOff.
For example: https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/kubeflow_pipelines/2643/kubeflow-pipeline-e2e-test/1198909726310535169#1:build-log.txt%3A1177
What did you expect to happen:
It should not be flaky.
I briefly did some investigation, and it seems that the first time the inverse proxy runs, it gets an empty BACKEND_ID (https://github.com/kubeflow/pipelines/blob/master/proxy/attempt-register-vm-on-proxy.sh#L70) and saves it to the configmap.
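The write-side validation suggested earlier could be sketched as follows. This is illustrative only: the function name is made up, and the real registration logic lives in proxy/attempt-register-vm-on-proxy.sh. The idea is simply to refuse to persist a configmap whose BackendId is empty, so that one bad first registration cannot poison every later run.

```shell
#!/bin/bash
# Hypothetical write-side guard: only save the configmap if the registration
# actually produced a non-empty backend ID.
save_configmap_if_sane() {
  local proxy_url="$1" backend_id="$2"
  # An empty (or jq-style "null") BackendId means registration failed; do not
  # persist it, so the next run can retry registration from scratch.
  if [[ -z "$backend_id" || "$backend_id" == "null" ]]; then
    echo "refusing to save configmap: BackendId is empty" >&2
    return 1
  fi
  kubectl create configmap inverse-proxy-config \
    --from-literal=ProxyUrl="$proxy_url" \
    --from-literal=BackendId="$backend_id"
}
```

With a guard like this, a transient registration failure would show up as a retryable error in the logs rather than a permanently broken configmap.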