Description
On my x86 Fedora 33 server I face the following issue with the latest nginx-proxy image: the automatic default.conf generation fails and results in an empty upstream block.
Steps to Reproduce
I use this minimal example:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
Startup log:
admin@tpdachsserver:~/repos/reverse-proxy-test ▶ docker-compose up
Creating network "reverse-proxy-test_default" with the default driver
Creating reverse-proxy-test_nginx-proxy_1 ... done
Creating reverse-proxy-test_whoami_1 ... done
Attaching to reverse-proxy-test_whoami_1, reverse-proxy-test_nginx-proxy_1
whoami_1 | Listening on :8000
nginx-proxy_1 | WARNING: /etc/nginx/dhparam/dhparam.pem was not found. A pre-generated dhparam.pem will be used for now while a new one
nginx-proxy_1 | is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
nginx-proxy_1 | forego | starting dockergen.1 on port 5000
nginx-proxy_1 | forego | starting nginx.1 on port 5100
nginx-proxy_1 | dockergen.1 | 2021/01/02 03:07:54 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
nginx-proxy_1 | dockergen.1 | 2021/01/02 03:07:54 Running 'nginx -s reload'
nginx-proxy_1 | dockergen.1 | 2021/01/02 03:07:54 Error running notify command: nginx -s reload, exit status 1
nginx-proxy_1 | dockergen.1 | 2021/01/02 03:07:54 Watching docker events
nginx-proxy_1 | dockergen.1 | 2021/01/02 03:07:54 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx-proxy_1 | 2021/01/02 03:07:55 [emerg] 47#47: no servers are inside upstream in /etc/nginx/conf.d/default.conf:58
nginx-proxy_1 | nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/default.conf:58
nginx-proxy_1 | Generating DSA parameters, 4096 bit long prime
nginx-proxy_1 | dhparam generation complete, reloading nginx
nginx-proxy_1 | nginx.1 | 172.28.0.1 - - [02/Jan/2021:03:11:18 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.71.1" "-"
As a consequence, a curl request returns only the default nginx welcome page:
Curl request
admin@tpdachsserver:~ ▶ curl -H "Host: whoami.local" localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Please expand to see the resulting configuration:
default.conf
admin@tpdachsserver:~/repos/reverse-proxy-test $ docker exec -it reverse-proxy-test_nginx-proxy_1 cat /etc/nginx/conf.d/default.conf
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
# whoami.local
upstream whoami.local {
}
server {
    server_name whoami.local;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://whoami.local;
    }
}
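For comparison, this is roughly what I would expect the whoami.local section to look like when generation succeeds (as it does on the Debian host). The container IP and the comment layout are illustrative, based on my understanding of nginx.tmpl, not copied from this broken run; the port 8000 matches what whoami reports on startup:

# whoami.local
upstream whoami.local {
    ## Can be connected with "reverse-proxy-test_default" network
    # reverse-proxy-test_whoami_1
    server 172.28.0.3:8000;
}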
Pre-Analysis
One common cause of this behaviour is that the started containers are not actually on the same network and therefore cannot reach each other. I tried many variants with an external network, manually adding the containers to that network, etc., all without success.
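For reference, a minimal way to double-check that both containers ended up on the same compose network (network name taken from the startup log above) is:

docker network inspect reverse-proxy-test_default

If both reverse-proxy-test_nginx-proxy_1 and reverse-proxy-test_whoami_1 show up under "Containers" in the output, basic reachability between them should not be the issue.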
The second hint I found is that in some cases cgroups might be the reason why .Docker.CurrentContainerID, used in nginx.tmpl, can't be resolved. Since Fedora 33 uses cgroup v2, there might be a link. However, I lack the debugging knowledge to check whether .Docker.CurrentContainerID is empty or not. I tried this very same example on a Debian Stretch server with a 4.9 kernel; there the default.conf is generated correctly. Thus, I assume some relation to the host environment.
Please let me know if the supplied information is not sufficient. I'm grateful for any kind of support. TIA
Thanks,
Steffen
Update #1
After some investigation I found that CurrentContainerID is indeed empty, so docker-gen cannot get the information it needs to fill the template.
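A sketch of how this can be checked, assuming (from my reading of docker-gen) that the container ID is derived by parsing /proc/self/cgroup inside the container:

docker exec reverse-proxy-test_nginx-proxy_1 cat /proc/self/cgroup
# On the Fedora 33 host (cgroup v2) this prints only something like:
#   0::/
# i.e. no container ID at all, so docker-gen comes up empty.
# On the Debian Stretch host (cgroup v1) the same file contains lines like:
#   12:pids:/docker/<64-character-container-id>
# from which the ID can be extracted.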
I prepared an RFC PR with a preliminary fix here: nginx-proxy/docker-gen#335