Out of memory issue for master and client nodes #202
Comments
Try increasing the es-data nodes' heap to at least 4 GB.
@rewt thank you very much for your response
The strange thing here is that when the es-data heap was 1 GB, same as es-master, it was working fine, but after increasing it to 4 GB as you recommended, @rewt, it returns an out-of-memory error!
@mootezbessifi In my experience, I was giving the Java heap more RAM than the container was able to provide, so while the pod worked fine for a while, over time the Java heap would attempt to allocate more RAM than was available, hence the Java heap errors. I modified the resource requests in the StatefulSet YAML to include memory requests and limits, configured the Java opts to stay within those limits, and the errors were resolved. https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
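The fix described above can be sketched as a StatefulSet container fragment. The container name, image placeholder, and all sizes below are illustrative assumptions, not values taken from this thread:

```yaml
# Illustrative fragment (hypothetical names/values): give the container an
# explicit memory request/limit and keep the JVM heap below the limit.
containers:
  - name: es-data
    image: docker.elastic.co/elasticsearch/elasticsearch  # pin your version
    env:
      - name: ES_JAVA_OPTS
        value: "-Xms1g -Xmx1g"   # heap well under the 2Gi container limit
    resources:
      requests:
        memory: "1.5Gi"
      limits:
        memory: "2Gi"
```

Keeping `-Xmx` below the container limit leaves headroom for off-heap memory, so the kernel does not OOM-kill the pod and the JVM does not over-allocate.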
@rewt thank you
@mootezbessifi That is essentially your call and depends on what kind of data ES needs to handle. To resolve Java heap out-of-memory errors, start by checking how much RAM the container is currently consuming.
Then make sure the heap set in ES_JAVA_OPTS does not exceed the RAM actually available to the pod. From there you can set limits for the container based on the link above.
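A minimal sketch of the advice above. The `kubectl top` invocation, the 50% heap rule, and the 2048 MB limit are assumptions for illustration, not details from the thread:

```shell
# First, inspect actual usage (requires metrics-server; shown as a comment
# because it needs a live cluster):
#   kubectl top pod -l component=elasticsearch
#
# Then size the JVM heap below the container's memory limit. A common rule
# of thumb (an assumption, not stated in the thread) is heap = 50% of the
# limit, leaving the rest for off-heap memory.
limit_mb=2048                  # container memory limit from the pod spec
heap_mb=$((limit_mb / 2))      # keep headroom for off-heap allocations
echo "ES_JAVA_OPTS=-Xms${heap_mb}m -Xmx${heap_mb}m"
```

The printed value can then be set as the `ES_JAVA_OPTS` environment variable in the pod spec.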
Hi,
I am running an ES cluster on top of k8s (2 es-client, 3 es-master and 3 es-data nodes).
The cluster is used for an EFK stack.
I had previously configured the heap size for each role as follows:
The cluster was working fine until today, when I tried to switch the es-data storage to GlusterFS and restarted all the ES deployments.
The es-master and es-client pods now refuse to run and throw an OutOfMemory exception.
I raised the es-client heap beyond 5 GB and still got the same issue.
I need help, please.