
Kong 1.4.1 High memory per worker (Docker) #5324

Closed
saltxwater opened this issue Dec 11, 2019 · 11 comments
Labels
task/feature Requests for new features in Kong

Comments

@saltxwater

saltxwater commented Dec 11, 2019

Summary

When running Kong 1.4.1 in Docker (via Nomad), memory usage is roughly 500 MB per worker (!!!). Testing the same config and setup with Kong 1.4.0, the total is 75 MB with 1 worker and 200 MB with 4.
Possibly related to the 1.4.1 change "Removed arbitrary limit on worker connections" (#5148).
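For reference, per-worker memory can be checked inside the container roughly like this (a sketch, not taken from the report; assumes nginx worker processes and a readable /proc):

# Print the resident set size (RSS) of each nginx worker process.
for pid in $(pgrep -f "nginx: worker process"); do
  printf "worker %s: " "$pid"
  grep VmRSS "/proc/$pid/status"
done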

Steps To Reproduce

  1. Start Kong in Docker with a simple declarative config (a docker run sketch follows the config below).
    Our kong.conf:

log_level = debug
proxy_access_log = /dev/stdout
proxy_error_log = /dev/stderr
admin_access_log = /dev/stdout
admin_error_log = /dev/stderr
proxy_listen = 0.0.0.0:8443 http2 ssl
ssl_cert = /certs/local.cer
ssl_cert_key = /certs/local.key
database = off
dns_order = SRV,A,LAST,CNAME
declarative_config = /custom/custom-proxy.yml

custom-proxy.yml:

_format_version: '1.4'

services:
- name: Route-All
  port: 80
  protocol: https
  host: MyService.service.team.consul.company.com
  routes:
  - name: Route-All
    paths: ["/"]
    preserve_host: true
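A minimal sketch of how such a setup might be launched (the image tag, mount paths and port mapping are assumptions, not the reporter's exact command):

# Sketch only: mounts the kong.conf, certs and declarative config shown above.
docker run -d --name kong-test \
  -v "$PWD/kong.conf:/etc/kong/kong.conf" \
  -v "$PWD/certs:/certs" \
  -v "$PWD/custom:/custom" \
  -p 8443:8443 \
  kong:1.4.1-alpine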

Additional Details & Logs

  • Kong version: 1.4.1
  • Kong debug-level startup logs:
    ...
2019/12/11 13:46:33 [notice] 1#0: using the "epoll" event method
2019/12/11 13:46:33 [notice] 1#0: openresty/1.15.8.2
2019/12/11 13:46:33 [notice] 1#0: built by gcc 8.3.0 (Alpine 8.3.0)
2019/12/11 13:46:33 [notice] 1#0: OS: Linux 3.10.0-1062.1.1.el7.x86_64
2019/12/11 13:46:33 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2019/12/11 13:46:33 [notice] 1#0: start worker processes
2019/12/11 13:46:33 [notice] 1#0: start worker process 24
2019/12/11 13:46:33 [debug] 24#0: *1 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2019/12/11 13:46:33 [debug] 24#0: *1 [lua] globalpatches.lua:269: randomseed(): random seed: 192795110016 for worker nb 0
2019/12/11 13:46:33 [debug] 24#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=24, data=nil
2019/12/11 13:46:33 [notice] 24#0: *1 [kong] init.lua:298 declarative config loaded from /custom/custom-proxy.yml, context: init_worker_by_lua
2019/12/11 13:46:33 [debug] 24#0: *2 [lua] balancer.lua:764: init(): initialized 0 balancer(s), 0 error(s)

@bungle bungle added the task/needs-investigation Requires investigation and reproduction before classifying it as a bug or not. label Dec 11, 2019
@bungle
Member

bungle commented Dec 11, 2019

@saltxwater thanks for the report!

@bungle
Member

bungle commented Dec 11, 2019

@saltxwater I can confirm that it really is related to #5148, so it affects db mode too.

@bungle bungle added task/bug and removed task/needs-investigation Requires investigation and reproduction before classifying it as a bug or not. labels Dec 11, 2019
@bungle
Member

bungle commented Dec 11, 2019

@saltxwater in the meantime you can perhaps set the ulimit.
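For example (a sketch; --ulimit is a standard docker run flag, and 16384 is just an illustrative value):

# Lower the container's open-file limit so that worker_connections,
# which 1.4.1 derives from "ulimit -n", stays bounded.
docker run -d --ulimit nofile=16384:16384 kong:1.4.1-alpine
# (plus whatever mounts and environment variables your setup needs)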

@bungle
Member

bungle commented Dec 11, 2019

I believe this is the reason for that:
https://trac.nginx.org/nginx/browser/nginx/src/event/ngx_event.c#L731
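Roughly speaking, nginx pre-allocates its connection and event structures per worker according to worker_connections, so a very large value translates directly into per-worker memory. A rough back-of-the-envelope, assuming on the order of ~512 bytes per connection slot (an approximation, not a measured figure):

# ~512 bytes per slot is an assumption (ngx_connection_t plus its read/write ngx_event_t).
echo "$(( 1048576 * 512 / 1024 / 1024 )) MB"   # worker_connections from the ulimit default -> ~512 MB per worker
echo "$((   16384 * 512 / 1024 / 1024 )) MB"   # worker_connections = 16384                 -> ~8 MB per worker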

@saltxwater
Author

@bungle Great, thanks for taking a look at this so promptly!
We were originally running 1.3 on Postgres, but I wanted to upgrade to db-less and so picked up the latest (1.4.1). I couldn't test our existing setup with Postgres without upgrading the db, which is why I didn't test that. I'll remove the (Declarative) tag from the title.
I'm happy working with 1.4.0 for now.

@saltxwater saltxwater changed the title Kong 1.4.1 High memory per worker (Docker) (Declarative) Kong 1.4.1 High memory per worker (Docker) Dec 11, 2019
@hishamhm
Contributor

hishamhm commented Dec 11, 2019

@saltxwater: as @bungle said, if you could confirm that you get the expected memory consumption with 1.4.1 (or 1.4.2, which was just released a couple of hours ago...) by setting ulimit -n 16384 prior to launching kong, that would be great!

@saltxwater
Author

@hishamhm: I have tried what @bungle suggested and modified the docker-entrypoint.sh to include ulimit -n 16384 prior to running Kong 1.4.1. This did fix the issue, and the service started up with 4 workers consuming a total of 210 MB of memory, as opposed to 1.9 GB without the ulimit.

Thanks for your help
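The change described above is roughly the following (a sketch against the official docker-entrypoint.sh; the exact placement in the script is an assumption):

# docker-entrypoint.sh (sketch): cap the fd limit before Kong starts, since
# 1.4.1 sizes worker_connections from "ulimit -n".
ulimit -n 16384
# ... rest of the original entrypoint unchanged ...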

@bungle bungle added task/feature Requests for new features in Kong and removed task/bug labels Dec 12, 2019
@bungle
Member

bungle commented Dec 12, 2019

As there is a workaround, I changed this from bug to feature so that it can be made configurable in the future. Configuring worker_connections with ulimit is not what we want after all; they should be separate (perhaps using ulimit as the default when not configured).

@hhromic

hhromic commented Dec 17, 2019

I'm also facing this issue in my Docker Swarm deployment :(
Is there any other workaround that doesn't involve modifying the official Kong Docker image?

How about adding an environment variable that can be used to configure ulimit in docker-entrypoint.sh? For example KONG_ULIMIT=<args> would make docker-entrypoint.sh call ulimit with the given arguments before launching Kong. Then we could pass KONG_ULIMIT="-n 16384" to Docker when creating the container/service or in compose files.

Edit: there is an ulimits directive for Docker Compose files; however, it is not supported in Swarm mode, which is why I think an environment variable is a better solution.

Edit2: because this issue seems more relevant to the Docker images than to Kong itself, I'm going to open an issue in docker-kong with a back-reference to here. I think that is a better place for further discussion.
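A sketch of what such an entrypoint hook could look like (KONG_ULIMIT is the variable proposed above, not an existing Kong or docker-kong option):

# Hypothetical: apply a ulimit passed via KONG_ULIMIT, e.g. KONG_ULIMIT="-n 16384",
# before launching Kong.
if [ -n "$KONG_ULIMIT" ]; then
  # word splitting of $KONG_ULIMIT is intentional here
  ulimit $KONG_ULIMIT
fi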

@xmapst

xmapst commented Jan 9, 2020

I found that modifying Kong's worker_connections in 1.4.2 can solve the problem of excessive memory usage.
Step 1: Copy and modify the original docker-entrypoint.sh script

......
chmod o+w /proc/self/fd/2 || true

# set Kong worker_connections
# (in 1.4.2 worker_connections defaults to "ulimit -n", which is 1048576 here)
DEFAULT_CONNECTIONS=$(ulimit -n)
if [ -n "$KONG_CONNECTIONS" ]; then
  sed -i "s/worker_connections $DEFAULT_CONNECTIONS/worker_connections $KONG_CONNECTIONS/g" /usr/local/kong/nginx.conf
fi

if [ "$(id -u)" != "0" ]; then
......

Step 2: Rebuild image

cat >Dockerfile <<EOF
FROM kong:1.4.2-alpine

COPY docker-entrypoint.sh /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]

EXPOSE 8000 8443 8001 8444

STOPSIGNAL SIGQUIT

CMD ["kong", "docker-start"]
EOF
docker build -t kong:1.4.3-alpine .

Kong's worker_connections value is then passed in through the external environment variable KONG_CONNECTIONS.
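Usage then looks like this (sketch; KONG_CONNECTIONS is the custom variable handled by the modified entrypoint above):

# Run the rebuilt image with a custom worker_connections value.
docker run -d -e KONG_CONNECTIONS=16384 kong:1.4.3-alpine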

@bungle
Member

bungle commented Jan 10, 2020

As #5390 was merged, I will close this. The new upper cap is 65536.
