ElasticSearch ulimit too low - Amazon linux 2 AMI #1975

Closed

rafabios opened this issue Jun 30, 2020 · 6 comments

@rafabios

Environmental Info:
K3s Version:
version v1.18.3+k3s1 (96653e8)

Node(s) CPU architecture, OS, and Version:

Linux xxxxx 4.14.154-128.181.amzn2.x86_64 #1 SMP Sat Nov 16 21:49:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:

1 master
2 workers

Describe the bug:

I am trying to spin up an Elasticsearch service on the K3s cluster, but I get this message:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

I tried an init container with the privileged and IPC_LOCK flags, but it had no effect. I also tried the solutions from the k3OS project:
rancher/k3os#87

Steps To Reproduce:

  • Installed K3s
  • Applied the following manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: es-cluster
  name: esnode
  namespace: apps
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: es-cluster
  serviceName: elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: es-cluster
    spec:
      containers:
      - env:
        - name: ES_JAVA_OPTS
          valueFrom:
            configMapKeyRef:
              key: ES_JAVA_OPTS
              name: es-config
        - name: LimitNOFILE
          value: "100000"
        image: elasticsearch:7.8.0
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: es-http
          protocol: TCP
        - containerPort: 9300
          name: es-transport
          protocol: TCP
        resources:
          requests:
            memory: 512M
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
          privileged: true
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          name: elasticsearch-config
          subPath: elasticsearch.yml
      dnsPolicy: ClusterFirst
      initContainers:
      - image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536 || true"]
        name: init-sysctl
        resources: {}
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
          privileged: true
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: elasticsearch.yml
            path: elasticsearch.yml
          name: es-config
        name: elasticsearch-config

Expected behavior:
The file descriptor ulimit should be raised to at least 65535.

Actual behavior:
The ulimit is still 4096 for this pod.
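
For reference, the effective limit inside the running container can be checked directly; the pod and namespace names below are taken from the manifest above:

# Check the nofile limit as seen by the Elasticsearch container
kubectl -n apps exec esnode-0 -- sh -c 'ulimit -n'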

Additional context / logs:

@brandond
Member

The fix from pires/kubernetes-elasticsearch-cluster#215 (comment) should work if you're using docker. I'm not sure how to tweak that for the default containerd backend though. Which are you using?
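
For reference, the usual Docker-level variant of that workaround is to raise the daemon's default nofile ulimit; a minimal sketch (this overwrites /etc/docker/daemon.json, and may not be the exact change from the linked comment):

# Only applies when Docker is the container runtime (not the default containerd backend)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Soft": 65536, "Hard": 65536 }
  }
}
EOF
sudo systemctl restart docker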

@rafabios
Author

I am using the default backend (containerd). I thought about that, but I couldn't find which file is responsible for the K3s limits.
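
For what it's worth, since pods under containerd inherit their limits from the K3s/containerd process tree, one place worth checking is the systemd unit that starts K3s. A hedged sketch, assuming K3s was installed as the k3s.service systemd unit (use k3s-agent.service on workers; the limit value is arbitrary):

# Raise the nofile limit for the k3s service (and everything it spawns) via a systemd drop-in
sudo mkdir -p /etc/systemd/system/k3s.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/k3s.service.d/override.conf
[Service]
LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload
sudo systemctl restart k3s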

@kkzz8888

kkzz8888 commented Sep 3, 2020

I have the exact same issue; any help is greatly appreciated. The same image works on microk8s, minikube, and Docker Desktop. I am running WSL2 with openSUSE-Leap-15.2.

@dweomer
Contributor

dweomer commented Sep 3, 2020

This is a long-standing issue for systems configured with low default limits. Please see kubernetes/kubernetes#3595

@dweomer
Contributor

dweomer commented Sep 3, 2020

This is a long-standing issue for systems configured with low default limits. Please see kubernetes/kubernetes#3595

Meanwhile, the workaround for this is to bump the fs.file-max sysctl on your system. Most installations that aren't primarily concerned with running containers leave this intentionally low, as there is less accounting overhead in the kernel and you get noticeably better performance with "limited" CPU/memory on VMs.
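
A minimal sketch of that workaround (the value is only an example):

# Raise the system-wide open-file maximum now, and persist it across reboots
sudo sysctl -w fs.file-max=2097152
echo 'fs.file-max = 2097152' | sudo tee /etc/sysctl.d/90-file-max.conf
sudo sysctl --system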

@stale

stale bot commented Jul 31, 2021

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

stale bot added the status/stale label Jul 31, 2021
stale bot closed this as completed Aug 14, 2021