Nginx template generates config based on the Node (instance) spec:
```
daemon off;
worker_processes 4;
pid /run/nginx.pid;
worker_rlimit_nofile 261120;
worker_shutdown_timeout 10s ;
events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}
```
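The `worker_processes 4;` value matches the node's vCPU count. A minimal sketch, assuming the default is derived from Go's `runtime.NumCPU()` (which reports the CPUs visible to the process, i.e. the node's CPUs when the container has no effective CPU limit):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Inside a container without CPU pinning, runtime.NumCPU() returns
	// the node's CPU count, so an m4.xlarge (4 vCPU) yields 4 workers.
	fmt.Printf("worker_processes %d;\n", runtime.NumCPU())
}
```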
```
ps -ef | grep nginx
root 1 0 0 10:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingress-controller --default-backend-service=kube-system/nginx-default-backend --configmap=kube-system/ingress-nginx
```
*Maximum number of open files permitted*
```
# cat /proc/sys/fs/file-max
2097152
# ulimit -a
time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          unlimited
stack(kbytes)         8192
coredump(blocks)      unlimited
memory(kbytes)        unlimited
locked memory(kbytes) 64
process               1048576
nofiles               1048576
vmemory(kbytes)       unlimited
locks                 unlimited
rtprio                0
```
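For reference, a minimal Go sketch (Linux only; the `/proc` path and `Getrlimit` call are standard OS interfaces, not code from the controller) that reads the two limits shown above:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	// Kernel-wide limit on open file handles (fs.file-max).
	data, err := os.ReadFile("/proc/sys/fs/file-max")
	if err == nil {
		fileMax, _ := strconv.Atoi(strings.TrimSpace(string(data)))
		fmt.Println("fs.file-max:", fileMax) // 2097152 on this node
	}

	// Per-process limit on open file descriptors (ulimit -n / nofiles).
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err == nil {
		fmt.Println("RLIMIT_NOFILE:", rl.Cur) // 1048576 on this node
	}
}
```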
```go
wp, err := strconv.Atoi(cfg.WorkerProcesses)
glog.V(3).Infof("number of worker processes: %v", wp)
if err != nil {
	wp = 1
}
maxOpenFiles := (sysctlFSFileMax() / wp) - 1024
glog.V(3).Infof("maximum number of open file descriptors : %v", sysctlFSFileMax())
if maxOpenFiles < 1024 {
	// this means the value of RLIMIT_NOFILE is too low.
	maxOpenFiles = 1024
}
```
With `wp = 4` and a limit of 1048576, the formula `maxOpenFiles := (sysctlFSFileMax() / wp) - 1024` yields `(1048576 / 4) - 1024 = 261120`, which is exactly the `worker_rlimit_nofile` in the generated config above.
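To make the arithmetic concrete, here is a small self-contained reproduction of that formula; the hard-coded inputs (1048576 from `nofiles`, 4 worker processes) are the values from this report, not from the controller source:

```go
package main

import "fmt"

// maxOpenFiles mirrors the controller snippet above: divide the file
// descriptor limit across workers, reserve 1024, and clamp at 1024.
func maxOpenFiles(fileMax, workerProcesses int) int {
	m := (fileMax / workerProcesses) - 1024
	if m < 1024 {
		// the value of RLIMIT_NOFILE is too low
		m = 1024
	}
	return m
}

func main() {
	fmt.Println(maxOpenFiles(1048576, 4)) // prints 261120
}
```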
- Cloud provider or hardware configuration: AWS, instance type m4.xlarge (4 vCPU, 2 cores, 16 GiB memory)
- OS (e.g. from /etc/os-release):

```
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```

- Kernel (e.g. `uname -a`): Linux ingress-nginx-535084222-12q1w 4.4.102-k8s #1 SMP Sun Nov 26 23:32:43 UTC 2017 x86_64 GNU/Linux
- Install tools: Helm
- Others:
NGINX Ingress controller version: latest 0.9 stable

Kubernetes version (use `kubectl version`): GitVersion:"v1.8.3"
What happened:
The Nginx config is generated based on the node (instance) spec rather than the container spec. This can cause Nginx ingress LB performance problems when many pods are running on one particular node, because the generated worker settings assume the whole node's resources are available to the controller. Is there any recommendation or best practice for deploying the LB in Kubernetes? Perhaps the LB should be deployed independently on each node.
What you expected to happen:
worker_processes, worker_connections, worker_rlimit_nofile, and other related settings should be generated based on the container spec (its resource limits) rather than the node spec; a hedged sketch of one possible approach follows.
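As a sketch only (it assumes cgroup v1 CFS quota files at the paths shown, and is not the controller's actual implementation): derive the worker count from the container's CPU limit, and fall back to the node CPU count when no limit is set.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

func readInt(path string) (int64, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
}

// containerCPUs returns the container's CPU limit derived from the CFS
// quota, falling back to the node CPU count when no limit is set
// (quota == -1) or the cgroup files cannot be read.
func containerCPUs() int {
	quota, err1 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
	period, err2 := readInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
	if err1 != nil || err2 != nil || quota <= 0 || period <= 0 {
		return runtime.NumCPU()
	}
	cpus := int(quota / period)
	if cpus < 1 {
		cpus = 1
	}
	return cpus
}

func main() {
	fmt.Printf("worker_processes %d;\n", containerCPUs())
}
```

With a pod CPU limit of `2` (quota 200000, period 100000), this would emit `worker_processes 2;` regardless of how many CPUs the underlying node has.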