**NGINX Ingress controller version**: 0.22.0 and 0.17.1

**Kubernetes version** (use `kubectl version`): 1.9.6

**Environment**:

- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): Don't know
- Kernel (e.g. `uname -a`): 4.4.121-k8s
- Install tools: kops
- Others:
**What happened**:

When connecting to a backend via ingress-nginx and a frontend ELB, the `Connection: upgrade` and `Upgrade: websocket` headers are being dropped from the request. This causes my backend to reject the request with a `426 Upgrade Required` response, though that response is specific to the app server (Cowboy, in a Phoenix/Elixir backend).
**What you expected to happen**:

Those headers should be passed through to the backend. Looking at the generated nginx.conf, the server block for the Ingress contains:

```nginx
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
```

That should pass them along. I have also tried overriding them via the Ingress annotation:
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;
```

No luck there. I've also tried forcing their values, but that doesn't work either:
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header Upgrade "websocket";
  proxy_set_header Connection "upgrade";
```

**How to reproduce it** (as minimally and precisely as possible):
I'm able to reproduce this issue with the echoserver container:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: websocket-test
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/send-timeout: "3600"
spec:
  rules:
  - host: websocket-test.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: websocket-test
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: websocket-test
spec:
  ports:
  - name: websocket-test
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: websocket-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: websocket-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: websocket-test
  template:
    metadata:
      labels:
        app: websocket-test
    spec:
      containers:
      - name: websocket-test
        image: k8s.gcr.io/echoserver:1.4
        ports:
        - containerPort: 8080
```

Making a dummy WebSocket request to the endpoint produces these results:
```
curl -v 'http://websocket-test.domain.com/' -H 'Upgrade: websocket' -H 'Connection: Upgrade'
* Trying 1.2.3.4...
* TCP_NODELAY set
* Connected to websocket-test.domain.com (1.2.3.4) port 80 (#0)
> GET / HTTP/1.1
> Host: websocket-test.domain.com
> User-Agent: curl/7.54.0
> Accept: */*
> Upgrade: websocket
> Connection: Upgrade
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Sat, 09 Feb 2019 20:58:11 GMT
< Server: nginx/1.15.8
< Vary: Accept-Encoding
< transfer-encoding: chunked
< Connection: keep-alive
<
CLIENT VALUES:
client_address=100.100.0.16
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://websocket-test.domain.com:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=websocket-test.domain.com
user-agent=curl/7.54.0
x-forwarded-for=1.2.3.4
x-forwarded-host=websocket-test.domain.com
x-forwarded-port=80
x-forwarded-proto=http
x-original-forwarded-for=1.2.3.4
x-original-uri=/
x-real-ip=1.2.3.4
x-request-id=a788919db6caf0294be42fdfea14ca27
x-scheme=http
```
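To spot the drop without scanning the whole body, you can grep the echoserver output for the echoed header lines. A minimal sketch; the heredoc stands in for the external capture above (in practice you would pipe the `curl -s` output in instead):

```shell
# Check whether the backend saw the upgrade headers. echoserver echoes
# request headers back as "name=value" lines, so we just grep for them.
# The heredoc is an abbreviated stand-in for the capture above.
response=$(cat <<'EOF'
HEADERS RECEIVED:
accept=*/*
host=websocket-test.domain.com
user-agent=curl/7.54.0
x-forwarded-proto=http
EOF
)

for header in upgrade connection; do
  if printf '%s\n' "$response" | grep -q "^${header}="; then
    echo "${header}: forwarded"
  else
    echo "${header}: MISSING"
  fi
done
```

Against the external capture this prints `upgrade: MISSING` and `connection: MISSING`; against the in-cluster capture below, both show as forwarded.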
Now, for the fun part: It works just fine from within the cluster!
Here's a request from within another pod:
```
curl 'ingress-nginx.ingress-nginx.svc.cluster.local' -H 'Upgrade: websocket' -H 'Connection: Upgrade' -H 'Host: websocket-test.domain.com' -v
* Rebuilt URL to: ingress-nginx.ingress-nginx.svc.cluster.local/
* Trying 100.70.191.39...
* TCP_NODELAY set
* Connected to ingress-nginx.ingress-nginx.svc.cluster.local (100.70.191.39) port 80 (#0)
> GET / HTTP/1.1
> Host: websocket-test.domain.com
> User-Agent: curl/7.52.1
> Accept: */*
> Upgrade: websocket
> Connection: Upgrade
>
< HTTP/1.1 200 OK
< Server: nginx/1.15.8
< Date: Sat, 09 Feb 2019 20:58:07 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=100.100.0.16
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://websocket-test.domain.com:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
connection=upgrade
host=websocket-test.domain.com
upgrade=websocket
user-agent=curl/7.52.1
x-forwarded-for=100.126.0.10
x-forwarded-host=websocket-test.domain.com
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/
x-real-ip=100.126.0.10
x-request-id=9dcc6cc94455ec7e04fcf89cd488714b
x-scheme=http
```
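Diffing the header names from the two HEADERS RECEIVED blocks makes the delta explicit. A quick sketch using the captures above (the `x-forwarded-*` and similar headers are trimmed for brevity):

```shell
# Header names the backend echoed for each request, copied from the two
# captures above. comm -13 prints the lines unique to the second list,
# i.e. the headers that only survived the in-cluster path.
ext=$(mktemp); inc=$(mktemp)
printf '%s\n' accept host user-agent | sort > "$ext"
printf '%s\n' accept connection host upgrade user-agent | sort > "$inc"
comm -13 "$ext" "$inc"
rm -f "$ext" "$inc"
```

This prints exactly `connection` and `upgrade`: the only difference between the two paths is the pair of upgrade headers.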
Even more odd is that the headers do go through for the first request after the nginx config is reloaded. Whatever is filtering them out only does so after a first pass through the request chain.

My best guess is that it has something to do with OpenResty. I have yet to do more testing with manual tweaks to the nginx config. I've tried logging the request headers from the access phase, but they appear to be stripped by that point (not surprising):
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  access_by_lua_block {
    local h = ngx.req.get_headers()
    for k, v in pairs(h) do
      -- values can be tables when a header repeats, so stringify them
      ngx.log(ngx.ERR, "Got header " .. k .. ": " .. tostring(v) .. ";")
    end
  }
```

Any thoughts on what we're doing wrong?