---
title: "Considerations for Pod Resource (Memory and CPU) Requests and Limits"
date: 2020-06-30T08:55:00-05:00
draft: true
weight: 40
---

The operator creates a pod for each running WebLogic Server instance, and each pod has a container. It's important that containers have enough resources so that applications run efficiently and expeditiously.

If a pod is scheduled on a node with limited resources, it's possible for the node to run out of memory or CPU resources, and for applications to stop working properly or have degraded performance. It's also possible for a rogue application to use all available memory and/or CPU, which makes other containers running on the same node unresponsive. The same problem can happen if an application has a memory leak or a bad configuration.

A pod's resource request and limit parameters can be used to solve these problems.

## Pod Quality Of Service (QoS) and Prioritization
A pod's Quality of Service (QoS) and priority are determined by whether the pod's resource requests and limits are configured, and by how they are configured.

**Best Effort QoS**: If you don't configure requests and limits, the pod receives "best-effort" QoS and has the **lowest priority**. In cases where the node runs out of non-shareable resources, the kubelet's out-of-resource eviction policy evicts/kills the pods with best-effort QoS first.

**Burstable QoS**: If you configure both resource requests and limits, and set the requests to be less than the limits, the pod's QoS will be "Burstable". Similarly, when you only configure resource requests (without limits), the pod QoS is "Burstable". When the node runs out of non-shareable resources, the kubelet will kill "Burstable" pods only when there are no more "best-effort" pods running. Burstable pods receive **medium priority**.

**Guaranteed QoS**: If you set the requests and the limits to equal values, the pod has "Guaranteed" QoS and is considered to be of the top-most priority. These settings indicate that your pod will consume a fixed amount of memory and CPU. With this configuration, if a node runs out of shareable resources, Kubernetes will kill the best-effort and burstable pods first before terminating these Guaranteed QoS pods. These are the **highest priority** pods.
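
For example, a container resources stanza like the following (the values are illustrative, not a recommendation) results in Guaranteed QoS, because each request equals its corresponding limit:

```
# Requests equal limits for both CPU and memory, so the pod is classified as Guaranteed.
resources:
  requests:
    cpu: "1"
    memory: "1280Mi"
  limits:
    cpu: "1"
    memory: "1280Mi"
```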
## Java heap size and pod memory request/limit considerations
It's extremely important to set the correct heap size for JVM-based applications. If the available memory on the node, or the memory allocated to the container, is not sufficient for the specified JVM heap arguments (and additional off-heap memory), it is possible for the WebLogic Server process to run out of memory. To avoid this, make sure that the configured heap sizes are not too big and that the pod is scheduled on a node with sufficient memory.

With the latest Java versions, it's possible to rely on the default JVM heap settings, which are safe but quite conservative. If you configure a memory limit for a container but don't configure heap sizes (-Xms and -Xmx), the JVM will by default set the maximum heap size to 25% (1/4th) of the container memory limit and the minimum heap size to 1.56% (1/64th) of the limit value. For example, with a 2048Mi memory limit, the default maximum heap is about 512Mi and the default minimum heap is about 32Mi.

**Default heap sizes and resource request values for sample WebLogic Server Pods**:

The WLS samples configure the default min and max heap sizes for the WebLogic Server Java process to 256MB and 512MB, respectively. These can be changed by using the USER_MEM_ARGS environment variable.

The default min and max heap sizes for the node-manager process are 64MB and 100MB, respectively. These can be changed by using the NODEMGR_MEM_ARGS environment variable.
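
As an illustrative sketch of how these defaults might be overridden (the serverPod env placement and the values shown are assumptions, not the samples' own settings):

```
serverPod:
  env:
  - name: USER_MEM_ARGS      # WebLogic Server JVM heap settings
    value: "-Xms256m -Xmx1024m"
  - name: NODEMGR_MEM_ARGS   # node-manager JVM heap settings
    value: "-Xms64m -Xmx128m"
```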

The default pod memory request in the WLS samples is 768MB and the default CPU request is 250m. The request values can be changed in the resources section during domain creation:

```
requests:
  cpu: "250m"
  memory: "768Mi"
```

There's no memory or CPU limit configured by default in the samples, and the default QoS for the WebLogic Server pod is Burstable. If your use case and workload require higher QoS and priority, this can be achieved by setting memory and CPU limits. You'll need to run tests and experiment with different memory/CPU limits to determine the optimal limit values. For example:

```
limits:
  cpu: 2
  memory: "2048Mi"
```
### Configure min/max heap size in percentages using "-XX:MinRAMPercentage" and "-XX:MaxRAMPercentage"
If you specify a pod memory limit, it's recommended to configure the heap size as a percentage of the total RAM (memory) specified in the pod memory limit. These parameters allow you to fine-tune the heap size. Note that they set percentages, not fixed values; as a result, changing the container memory settings will not break the heap configuration.

When configuring memory limits, it's important to make sure that the limit is sufficiently large to accommodate the configured heap (and off-heap) requirements, but not so large that memory resources are wasted. Because pod memory will never go above the limit, if the JVM's memory usage (the sum of heap and native memory) exceeds the limit, the JVM process will be killed due to an out-of-memory error and the WebLogic container will be restarted due to a liveness probe failure. Additionally, there is also a node manager process running in the same container, and it has its own heap and off-heap requirements. You can also fine-tune the node manager heap size in percentages by setting "-XX:MinRAMPercentage" and "-XX:MaxRAMPercentage" using the NODEMGR_JAVA_OPTIONS environment variable.
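
For example (the env placement and the values are assumptions for illustration), with a 2048Mi pod memory limit, the following settings cap the WebLogic Server heap at roughly 50% of the limit (about 1024Mi) and the node manager heap at roughly 10%:

```
env:
- name: USER_MEM_ARGS          # WebLogic Server JVM: heap sized as a percentage of the memory limit
  value: "-XX:MinRAMPercentage=25.0 -XX:MaxRAMPercentage=50.0"
- name: NODEMGR_JAVA_OPTIONS   # node manager JVM: smaller percentage-based heap
  value: "-XX:MinRAMPercentage=5.0 -XX:MaxRAMPercentage=10.0"
```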
### Using "-Xms" and "-Xmx" parameters when not configuring limits
In some cases, it's difficult to come up with a hard limit for the container, and you might want to configure only memory requests without memory limits. In such scenarios, you can use the traditional approach of setting the min/max heap size using "-Xms" and "-Xmx".
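
A sketch of that approach (the values and the env placement are assumptions for illustration): a memory request with no memory limit, combined with a fixed heap set through USER_MEM_ARGS:

```
resources:
  requests:
    cpu: "500m"
    memory: "1536Mi"   # request only; no memory limit is configured
env:
- name: USER_MEM_ARGS
  value: "-Xms256m -Xmx1024m"
```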
### CPU requests and limits
It's important that the containers running WebLogic applications have enough CPU resources; otherwise, application performance can suffer. You also don't want to set the CPU requests and limits too high if your application doesn't need or use the allocated CPU resources. Because CPU is a shared resource, if the amount of CPU that you reserve is more than your application requires, the CPU cycles will go unused and be wasted. If no CPU request and limit are configured, a pod can end up using all the CPU resources available on the node, which can starve other containers of shareable CPU cycles.

One other thing to keep in mind is that if the pod CPU limit is not configured, it might lead to an incorrect garbage collection (GC) strategy selection. The WebLogic self-tuning work manager also uses the pod CPU limit to configure the number of threads in the default thread pool. If you don't specify a container CPU limit, performance might be affected by an incorrect number of GC threads or a wrong WebLogic Server thread pool size.

Just like CPU, if you configure a memory request that is larger than the amount of memory available on any of your nodes, the pod will never be scheduled.
## CPU Affinity and lock contention in k8s
We observed much higher lock contention when running some workloads in Kubernetes as compared to a traditional environment. The lock contention seems to be caused by the lack of CPU cache affinity and/or scheduling latency when the workload moves to different CPU cores.

In traditional (non-k8s) environments, tests are often run with CPU affinity, achieved by binding the WLS Java process to particular CPU core(s) (using the taskset command). This results in reduced lock contention and better performance.

In a k8s environment, when the CPU manager policy is configured to be "static" and the QoS is "Guaranteed" for WLS pods, we see reduced lock contention and better performance. The default CPU manager policy is "none". Please refer to the Kubernetes documentation on controlling CPU management policies for more details.
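
For reference, a minimal kubelet configuration fragment that enables the static CPU manager policy (a sketch only; how kubelet configuration is provisioned varies by cluster, and the static policy also requires a nonzero system CPU reservation):

```
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static   # the default policy is "none"
```

Under the static policy, only containers in Guaranteed QoS pods with integer CPU requests are granted exclusive CPU cores.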