ElasticSearch ulimit too low - Amazon linux 2 AMI #1975
Comments
The fix from pires/kubernetes-elasticsearch-cluster#215 (comment) should work if you're using Docker. I'm not sure how to adapt that for the default containerd backend, though. Which are you using?
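For the Docker backend, the usual form of that fix is to raise the daemon-wide default ulimit in `/etc/docker/daemon.json` and restart the daemon. A sketch (the 65536 values are an assumption chosen to satisfy Elasticsearch's 65535 requirement):

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 65536
    }
  }
}
```

After editing, restart Docker (e.g. `systemctl restart docker`) so new containers inherit the raised limit.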
I am using the default backend (containerd). I thought about that, but I couldn't find which file is responsible for the k3s limits.
I have the exact same issue; any help is greatly appreciated. The same image works for microk8s/minikube/Docker Desktop. I am running WSL2 with openSUSE-Leap-15.2.
This is a long-standing issue for systems configured with low default limits. Please see kubernetes/kubernetes#3595
Meanwhile, the work-around for this is to bump the file-descriptor limit on the host so that k3s (and the containers it starts) inherit a higher value.
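With the containerd backend, containers inherit the limits of the k3s process itself, so one common approach is a systemd drop-in that raises `LimitNOFILE` for the k3s unit. A sketch, assuming k3s was installed as a systemd service named `k3s.service` (the drop-in path and the value are assumptions):

```ini
# /etc/systemd/system/k3s.service.d/override.conf
[Service]
LimitNOFILE=1048576
```

Then reload and restart: `systemctl daemon-reload && systemctl restart k3s`. Pods created after the restart should see the raised limit.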
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
Environmental Info:
K3s Version:
version v1.18.3+k3s1 (96653e8)
Node(s) CPU architecture, OS, and Version:
Linux xxxxx 4.14.154-128.181.amzn2.x86_64 #1 SMP Sat Nov 16 21:49:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
1 master
2 workers
Describe the bug:
I am trying to spin up an Elasticsearch service on the k3s cluster, but I get the message:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
I tried init containers with the privileged and IPC_LOCK flags, but with no result. I also tried solutions from the k3os project:
rancher/k3os#87
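A quick way to check whether any of these attempts actually changed the limit is to read the limits a process really inherits from `/proc`, rather than trusting the configuration. The same check can be run inside the affected pod (e.g. via `kubectl exec`); the 65535 threshold comes from the Elasticsearch error above:

```shell
# Read the soft "Max open files" limit this process inherited.
# In /proc/<pid>/limits the line looks like:
#   Max open files      4096      1048576      files
# so field 4 is the soft limit and field 5 the hard limit.
required=65535
soft=$(awk '/Max open files/ {print $4}' /proc/self/limits)
echo "soft nofile limit: $soft (Elasticsearch requires at least $required)"
```

Running this inside the pod should report 4096 while the bug is present; after a successful host-side fix it should report the raised value.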
Steps To Reproduce:
Expected behavior:
The ulimit should be raised to at least 65535.
Actual behavior:
The ulimit is still 4096 for this pod.
Additional context / logs: