Kong 1.4.1 High memory per worker (Docker) #5324
Comments
@saltxwater thanks for the report!
@saltxwater I can confirm that it is really related to #5148 and so affects db mode too.
@saltxwater in the meantime you can perhaps set the …
I believe this is the reason for that: …
@bungle Great, thanks for taking a look at this so promptly!
@saltxwater: as @bungle said, if you could confirm that you get the expected memory consumption with 1.4.1 (or 1.4.2, which was just released a couple of hours ago...) by setting …
@hishamhm: I have tried what @bungle suggested and modified the docker-entrypoint.sh to include ulimit -n 16384 prior to running Kong 1.4.1. This did fix the issue: the service started up with 4 workers consuming a total of 210 MB of memory, as opposed to 1.9 GB without the ulimit. Thanks for your help!
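For reference, the workaround amounts to something like the following near the top of a copied entrypoint script. This is a sketch, not the upstream docker-entrypoint.sh; the 16384 value comes from this thread, and the final hand-off line is illustrative:

```shell
#!/bin/sh
# Sketch of the workaround described above, NOT the upstream
# docker-entrypoint.sh. Lower the soft open-file limit before Kong starts,
# so nginx does not size worker_connections from the container default
# (getrlimit reported 1048576 in the log below).
# Note: raising the soft limit can fail if the hard limit is lower;
# lowering it always succeeds.
ulimit -n 16384 2>/dev/null || echo "hard limit is below 16384; keeping $(ulimit -n)"
echo "soft nofile limit: $(ulimit -n)"

# The real entrypoint would now hand control to Kong, e.g.:
# exec kong docker-start
```

Because this runs inside the container, it works regardless of what limits the orchestrator (Nomad, Swarm, etc.) passes down.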
As there is a workaround, I changed it from …
I'm also facing this issue in my Docker Swarm deployment :( How about adding an environment variable that can be used to configure …
Edit: there is an …
Edit2: because this issue seems more relevant to the Docker images than to Kong itself, I'm going to open an issue in …
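On the environment-variable idea: any kong.conf property can be supplied as a KONG_-prefixed environment variable, and versions of Kong that support nginx_events_* directive injection can pin the value explicitly instead of inheriting it from the ulimit. A sketch (the exact property name is an assumption; check the configuration reference for your version):

```shell
# Assumed property: nginx_events_worker_connections (via nginx_events_*
# directive injection). Supplied here as an env var, since any kong.conf
# key can be set with a KONG_ prefix.
export KONG_NGINX_EVENTS_WORKER_CONNECTIONS=4096
echo "KONG_NGINX_EVENTS_WORKER_CONNECTIONS=$KONG_NGINX_EVENTS_WORKER_CONNECTIONS"
```

In a Compose or Swarm service definition the same line would go under the service's environment section.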
I found that adjusting Kong's worker connections in 1.4.2 solves the excessive memory usage.
Step 2: rebuild the image so that Kong's worker connections are passed in through an external variable.
As #5390 was merged, I will close this. The new upper cap is 65536. |
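For anyone verifying the fix: Kong renders worker_connections into the events{} block of the nginx.conf under its prefix (default /usr/local/kong). The snippet below greps a simulated copy so the expected shape is visible; on a live instance you would grep the real file instead:

```shell
# On a real instance (default prefix; adjust if you set KONG_PREFIX):
#   grep worker_connections /usr/local/kong/nginx.conf
# Simulated file showing the shape after the 65536 cap:
cat > /tmp/nginx-kong-sample.conf <<'EOF'
events {
    worker_connections 65536;
}
EOF
grep worker_connections /tmp/nginx-kong-sample.conf
```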
Summary
When running Kong 1.4.1 in Docker (via Nomad), memory usage is roughly 500 MB per worker (!!!). Testing the same config and setup with Kong 1.4.0, the total is 75 MB with 1 worker and 200 MB with 4.
Possibly related to 1.4.1: Removed arbitrary limit on worker connections. #5148
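The per-worker figure is consistent with nginx preallocating its connection and event arrays at worker_connections scale. A back-of-envelope check, assuming roughly 500 bytes of preallocated state per connection (ngx_connection_t plus its read/write ngx_event_t; exact sizes vary by build):

```shell
per_conn_bytes=500         # assumption, not an exact nginx figure
worker_connections=1048576 # RLIMIT_NOFILE from the log below
echo "$(( per_conn_bytes * worker_connections / 1024 / 1024 )) MB per worker"
# prints "500 MB per worker"
```

which lands right on the ~500 MB per worker reported above.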
Steps To Reproduce
Our Kong.conf:
log_level = debug
proxy_access_log = /dev/stdout
proxy_error_log = /dev/stderr
admin_access_log = /dev/stdout
admin_error_log = /dev/stderr
proxy_listen = 0.0.0.0:8443 http2 ssl
ssl_cert = /certs/local.cer
ssl_cert_key = /certs/local.key
database = off
dns_order = SRV,A,LAST,CNAME
declarative_config = /custom/custom-proxy.yml
custom-proxy.yml:
_format_version: '1.4'
services:
- port: 80
  protocol: https
  host: MyService.service.team.consul.company.com
  routes:
  - paths: ["/"]
    preserve_host: true
Additional Details & Logs
...
2019/12/11 13:46:33 [notice] 1#0: using the "epoll" event method
2019/12/11 13:46:33 [notice] 1#0: openresty/1.15.8.2
2019/12/11 13:46:33 [notice] 1#0: built by gcc 8.3.0 (Alpine 8.3.0)
2019/12/11 13:46:33 [notice] 1#0: OS: Linux 3.10.0-1062.1.1.el7.x86_64
2019/12/11 13:46:33 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2019/12/11 13:46:33 [notice] 1#0: start worker processes
2019/12/11 13:46:33 [notice] 1#0: start worker process 24
2019/12/11 13:46:33 [debug] 24#0: *1 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2019/12/11 13:46:33 [debug] 24#0: *1 [lua] globalpatches.lua:269: randomseed(): random seed: 192795110016 for worker nb 0
2019/12/11 13:46:33 [debug] 24#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=24, data=nil
2019/12/11 13:46:33 [notice] 24#0: *1 [kong] init.lua:298 declarative config loaded from /custom/custom-proxy.yml, context: init_worker_by_lua
2019/12/11 13:46:33 [debug] 24#0: *2 [lua] balancer.lua:764: init(): initialized 0 balancer(s), 0 error(s)