
Commit a23d465

committed: changes for OWLS-80384 - Verify that operator deployment and WebLogic pods have good default cpu/memory resources

1 parent: ede68e1

File tree: 20 files changed (+177 −60 lines)
Lines changed: 40 additions & 10 deletions
@@ -1,4 +1,9 @@
-# Considerations for Pod Resource (Memory and CPU) Requests and Limits
+---
+title: "Considerations for Pod Resource (Memory and CPU) Requests and Limits"
+date: 2020-06-30T08:55:00-05:00
+draft: true
+weight: 40
+---
 The operator creates a pod for each running WebLogic Server instance, and each pod has a container. It's important that containers have enough resources for applications to run efficiently and expeditiously.

 If a pod is scheduled on a node with limited resources, it's possible for the node to run out of memory or CPU and for applications to stop working properly or suffer degraded performance. It's also possible for a rogue application to use all available memory and/or CPU, which makes other containers running on the same system unresponsive. The same problem can happen if an application has a memory leak or a bad configuration.
@@ -8,32 +13,56 @@ A pod's resource requests and limit parameters can be used to solve these problems
 ## Pod Quality Of Service (QoS) and Prioritization
 A pod's Quality of Service (QoS) class and priority are determined by whether, and how, its resource requests and limits are configured.

-Best Effort QoS: If you don't configure requests and limits, the pod receives "best-effort" QoS and has the lowest priority. If the node runs out of non-shareable resources, the kubelet's out-of-resource eviction policy evicts/kills the pods with best-effort QoS first.
+**Best Effort QoS**: If you don't configure requests and limits, the pod receives "best-effort" QoS and has the **lowest priority**. If the node runs out of non-shareable resources, the kubelet's out-of-resource eviction policy evicts/kills the pods with best-effort QoS first.

-Burstable QoS: If you configure both resource requests and limits, and set the requests to less than the limits, the pod's QoS is "Burstable". Similarly, when you configure only the resource requests (without limits), the pod's QoS is "Burstable". When the node runs out of non-shareable resources, the kubelet kills "Burstable" pods only when no "best-effort" pods are left running. Burstable pods receive medium priority.
+**Burstable QoS**: If you configure both resource requests and limits, and set the requests to less than the limits, the pod's QoS is "Burstable". Similarly, when you configure only the resource requests (without limits), the pod's QoS is "Burstable". When the node runs out of non-shareable resources, the kubelet kills "Burstable" pods only when no "best-effort" pods are left running. Burstable pods receive **medium priority**.

-Guaranteed QoS: If you set the requests and the limits to equal values, the pod has "Guaranteed" QoS and is considered the highest priority. These settings indicate that your pod will consume a fixed amount of memory and CPU. With this configuration, if a node runs out of non-shareable resources, Kubernetes kills the best-effort and burstable pods first, before terminating these Guaranteed QoS pods. These are the highest priority pods.
+**Guaranteed QoS**: If you set the requests and the limits to equal values, the pod has "Guaranteed" QoS and is considered the highest priority. These settings indicate that your pod will consume a fixed amount of memory and CPU. With this configuration, if a node runs out of non-shareable resources, Kubernetes kills the best-effort and burstable pods first, before terminating these Guaranteed QoS pods. These are the **highest priority** pods.

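For illustration, a server pod gets Guaranteed QoS only when every container's requests equal its limits; a minimal sketch of such a resources stanza (the values are illustrative, not sample defaults):

```
resources:
  requests:
    cpu: "500m"
    memory: "768Mi"
  limits:
    cpu: "500m"
    memory: "768Mi"
```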
 ## Java heap size and pod memory request/limit considerations
 It's extremely important to set the correct heap size for JVM-based applications. If the available memory on the node, or the memory allocated to the container, is not sufficient for the specified JVM heap arguments (plus additional off-heap memory), the WebLogic Server process can run out of memory. To avoid this, make sure that the configured heap sizes are not too big and that the pod is scheduled on a node with sufficient memory.

 With the latest Java versions, it's possible to rely on the default JVM heap settings, which are safe but quite conservative. If you configure a memory limit for a container but don't configure heap sizes (-Xms and -Xmx), the JVM by default sets the maximum heap size to 25% (1/4th) of the container memory limit, and the minimum heap size to 1.56% (1/64th) of the limit value.

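You can verify these defaults by running a throwaway pod that prints the JVM's final flags; a hedged sketch (the pod name and the `openjdk:11` image are illustrative, not part of the samples):

```
apiVersion: v1
kind: Pod
metadata:
  name: jvm-defaults-check   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: openjdk:11        # any container-aware JDK (10+, or 8u191+)
    command: ["java", "-XX:+PrintFlagsFinal", "-version"]
    resources:
      limits:
        memory: "1Gi"        # the JVM derives its default heap from this limit
```

With a 1Gi limit, `kubectl logs jvm-defaults-check | grep -iE 'MaxHeapSize|InitialHeapSize'` should report a maximum heap of roughly 256MB (1/4th) and an initial heap of roughly 16MB (1/64th).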
-### Default heap size and resource request values for sample WebLogic Server Pods:
-The samples configure the default min and max heap sizes for the WebLogic Server Java process to 256MB and 512MB, respectively. This can be changed using the USER_MEM_ARGS environment variable. The default min and max heap sizes for the node manager process are 64MB and 100MB. These can be changed by using the NODEMGR_MEM_ARGS environment variable.
+**Default heap sizes and resource request values for sample WebLogic Server Pods**:
+The WLS samples configure the default min and max heap sizes for the WebLogic Server Java process to 256MB and 512MB, respectively. These can be changed using the USER_MEM_ARGS environment variable.
+```
+env:
+- name: "USER_MEM_ARGS"
+  value: "-Xms256m -Xmx512m -Djava.security.egd=file:/dev/./urandom"
+```

-The default memory request in the samples for a WebLogic Server pod is 768MB, and the default CPU request is 250m. This can be changed during domain creation, in the resources section.
+The default min and max heap sizes for the node manager process are 64MB and 100MB. These can be changed by using the NODEMGR_MEM_ARGS environment variable.
+
+The default pod memory request in the WLS samples is 768MB, and the default CPU request is 250m. The request values can be changed in the resources section.
+```
+requests:
+  cpu: "250m"
+  memory: "768Mi"
+```

 There's no memory or CPU limit configured by default in the samples, and the default QoS for a WebLogic Server pod is Burstable. If your use case and workload require a higher QoS and priority, you can achieve this by setting memory and CPU limits. You'll need to run tests and experiment with different memory/CPU limits to determine the optimal limit values.
+```
+limits:
+  cpu: 2
+  memory: "2048Mi"
+```

 ### Configure min/max heap size in percentages using "-XX:MinRAMPercentage" and "-XX:MaxRAMPercentage"
-If you specify a pod memory limit, it's recommended to configure the heap size as a percentage of the total RAM (memory) specified in the pod memory limit. These parameters allow you to fine-tune the heap size; the meaning of these settings is explained in an excellent answer on StackOverflow. Please note that they set percentages, not fixed values, so changing the container memory settings will not break anything.
+If you specify a pod memory limit, it's recommended to configure the heap size as a percentage of the total RAM (memory) specified in the pod memory limit. These parameters allow you to fine-tune the heap size. Please note that they set percentages, not fixed values, so changing the container memory settings will not break anything.
+```
+env:
+- name: JAVA_OPTIONS
+  value: "-XX:MinRAMPercentage=25.0 -XX:MaxRAMPercentage=50.0 -Dweblogic.StdoutDebugEnabled=false"
+```
 When configuring memory limits, it's important to make sure that the limit is large enough to accommodate the configured heap (and off-heap) requirements, but not so large that it wastes memory resources. Because pod memory will never go above the limit, if the JVM's memory usage (the sum of heap and native memory) goes above the limit, the JVM process will be killed due to an out-of-memory error, and the WebLogic container will be restarted due to a liveness probe failure. Additionally, there's also a node manager process running in the same container, and it has its own heap and off-heap requirements. You can also fine-tune the node manager heap size in percentages by setting "-XX:MinRAMPercentage" and "-XX:MaxRAMPercentage" in the NODEMGR_JAVA_OPTIONS environment variable.

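For example, a sketch of sizing the node manager heap by percentage (the percentage values are illustrative; choose values consistent with your pod memory limit):

```
env:
- name: NODEMGR_JAVA_OPTIONS
  value: "-XX:MinRAMPercentage=2.0 -XX:MaxRAMPercentage=5.0"
```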
 ### Using "-Xms" and "-Xmx" parameters when not configuring limits
 In some cases, it's difficult to come up with a hard limit for the container, and you might want to configure only memory requests without memory limits. In such scenarios, you can use the traditional approach of setting the min/max heap sizes using "-Xms" and "-Xmx".

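A minimal sketch of that combination, with illustrative values (a memory request with no limit, plus a fixed heap):

```
resources:
  requests:
    memory: "1Gi"   # request only; no limit configured
env:
- name: USER_MEM_ARGS
  value: "-Xms512m -Xmx512m -Djava.security.egd=file:/dev/./urandom"
```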
 ### CPU requests and limits
-It's important that the containers running WebLogic applications have enough CPU resources; otherwise, application performance can suffer. You also don't want to set the CPU requests and limit too high if your application doesn't need or use them. Because CPU is a shared resource, if the amount of CPU that you reserve is more than your application requires, the CPU cycles will go unused and be wasted. If no CPU request and limit are configured, a pod can end up using all the CPU resources available on the node, starving other containers of shareable CPU cycles.
+It's important that the containers running WebLogic applications have enough CPU resources; otherwise, application performance can suffer. You also don't want to set the CPU requests and limit too high if your application doesn't need or use the allocated CPU resources. Because CPU is a shared resource, if the amount of CPU that you reserve is more than your application requires, the CPU cycles will go unused and be wasted. If no CPU request and limit are configured, a pod can end up using all the CPU resources available on the node, starving other containers of shareable CPU cycles.

 One other thing to keep in mind: if the pod CPU limit is not configured, it might lead to an incorrect garbage collection (GC) strategy selection. The WebLogic self-tuning work manager also uses the pod CPU limit to configure the number of threads in the default thread pool. If you don't specify a container CPU limit, performance might be affected by an incorrect number of GC threads or a wrongly sized WebLogic Server thread pool.

@@ -43,7 +72,7 @@ Just like CPU, if you put a memory request that's larger than amount of memory o
 ## CPU Affinity and lock contention in k8s
 We observed much higher lock contention when running some workloads in Kubernetes as compared to a traditional environment. The lock contention seems to be caused by a lack of CPU cache affinity and/or by scheduling latency when the workload moves between CPU cores.

-In a traditional (non-k8s) environment, tests are often run with CPU affinity by binding the WLS Java process to particular CPU core(s) (using the taskset command). This results in reduced lock contention and better performance.
+In a traditional (non-k8s) environment, tests are often run with CPU affinity, achieved by binding the WLS Java process to particular CPU core(s) (using the taskset command). This results in reduced lock contention and better performance.

 In a k8s environment, when the CPU manager policy is configured to be "static" and the QoS is "Guaranteed" for WLS pods, we see reduced lock contention and better performance. The default CPU manager policy is "none". Please refer to the Kubernetes documentation on controlling CPU management policies for more details.

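For context, the "static" CPU manager policy grants exclusive cores only to containers in Guaranteed QoS pods that request integer CPU counts; an illustrative fragment (the values are examples, not sample defaults):

```
resources:
  requests:
    cpu: "2"          # integer CPU count, required for exclusive cores
    memory: "2048Mi"
  limits:
    cpu: "2"
    memory: "2048Mi"
```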
@@ -52,3 +81,4 @@ In a k8s environment, when the CPU manager policy is configured to be "static" and QOS
 2) https://blog.softwaremill.com/docker-support-in-new-java-8-finally-fd595df0ca54
 3) https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
 4) https://www.magalix.com/blog/kubernetes-patterns-capacity-planning
+
kubernetes/samples/scripts/common/domain-template.yaml

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ spec:
 - name: JAVA_OPTIONS
   value: "%JAVA_OPTIONS%"
 - name: USER_MEM_ARGS
-  value: "-Djava.security.egd=file:/dev/./urandom "
+  value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m "
 %OPTIONAL_SERVERPOD_RESOURCES%
 %LOG_HOME_ON_PV_PREFIX%volumes:
 %LOG_HOME_ON_PV_PREFIX%- name: weblogic-domain-storage-volume

kubernetes/samples/scripts/common/jrf-domain-template.yaml

Lines changed: 8 additions & 7 deletions
@@ -3,13 +3,12 @@
 #
 # This is an example of how to define a Domain resource.
 #
-apiVersion: "weblogic.oracle/v7"
+apiVersion: "weblogic.oracle/v8"
 kind: Domain
 metadata:
   name: %DOMAIN_UID%
   namespace: %NAMESPACE%
   labels:
-    weblogic.resourceVersion: domain-v2
     weblogic.domainUID: %DOMAIN_UID%
 spec:
   # The WebLogic Domain Home
@@ -50,11 +49,6 @@ spec:
 # data storage directories are determined from the WebLogic domain home configuration.
 dataHome: "%DATA_HOME%"

-# Istio service mesh support is experimental.
-%ISTIO_PREFIX%experimental:
-%ISTIO_PREFIX%  istio:
-%ISTIO_PREFIX%    enabled: %ISTIO_ENABLED%
-%ISTIO_PREFIX%    readinessPort: %ISTIO_READINESS_PORT%

 # serverStartPolicy legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
 # This determines which WebLogic Servers the Operator will start up when it discovers this Domain
@@ -121,3 +115,10 @@ spec:
 replicas: %INITIAL_MANAGED_SERVER_REPLICAS%
 # The number of managed servers to start for unlisted clusters
 # replicas: 1
+
+# Istio
+%ISTIO_PREFIX%configuration:
+%ISTIO_PREFIX%  istio:
+%ISTIO_PREFIX%    enabled: %ISTIO_ENABLED%
+%ISTIO_PREFIX%    readinessPort: %ISTIO_READINESS_PORT%
+
kubernetes/samples/scripts/create-fmw-infrastructure-domain/domain-home-in-image/create-domain-inputs.yaml

Lines changed: 12 additions & 8 deletions
@@ -142,16 +142,20 @@ domainHomeImageBase: container-registry.oracle.com/middleware/fmw-infrastructure
 # which uses WDT, instead of WLST, to generate the domain configuration.
 domainHomeImageBuildPath: ./docker-images/OracleFMWInfrastructure/samples/12213-domain-home-in-image

-# Uncomment and edit value(s) below to specify the maximum amount of
-# compute resources allowed, and minimum amount of compute resources
-# required for each server pod.
-# These are optional.
+# Resource requests for each server pod (memory and CPU). This is the minimum amount of compute
+# resources required for each server pod. Edit the value(s) below as per your pod sizing requirements.
+# These are optional.
 # Please refer to the kubernetes documentation on Managing Compute
 # Resources for Containers for details.
-#
-# serverPodMemoryRequest: "64Mi"
-# serverPodCpuRequest: "250m"
-# serverPodMemoryLimit: "1Gi"
+serverPodMemoryRequest: "1280Mi"
+serverPodCpuRequest: "500m"
+
+# Uncomment and edit value(s) below to specify the maximum amount of compute resources allowed
+# for each server pod.
+# These are optional.
+# Please refer to the kubernetes documentation on Managing Compute
+# Resources for Containers for details.
+# serverPodMemoryLimit: "2Gi"
 # serverPodCpuLimit: "1000m"

 #
kubernetes/samples/scripts/create-fmw-infrastructure-domain/domain-home-in-image/create-domain.sh

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -141,7 +141,7 @@ function initialize {
141141
validationError "The template file ${domainPropertiesInput} for creating a WebLogic domain was not found"
142142
fi
143143

144-
dcrInput="${scriptDir}/../../common/domain-template.yaml"
144+
dcrInput="${scriptDir}/../../common/jrf-domain-template.yaml"
145145
if [ ! -f ${dcrInput} ]; then
146146
validationError "The template file ${dcrInput} for creating the domain resource was not found"
147147
fi

kubernetes/samples/scripts/create-fmw-infrastructure-domain/domain-home-on-pv/create-domain-inputs.yaml

Lines changed: 12 additions & 8 deletions
@@ -142,16 +142,20 @@ createDomainScriptName: create-domain-job.sh
 # so that the Kubernetes pod can use the scripts and supporting files to create a domain home.
 createDomainFilesDir: wlst

-# Uncomment and edit value(s) below to specify the maximum amount of
-# compute resources allowed, and minimum amount of compute resources
-# required for each server pod.
-# These are optional.
+# Resource requests for each server pod (memory and CPU). This is the minimum amount of compute
+# resources required for each server pod. Edit the value(s) below as per your pod sizing requirements.
+# These are optional.
 # Please refer to the kubernetes documentation on Managing Compute
 # Resources for Containers for details.
-#
-# serverPodMemoryRequest: "64Mi"
-# serverPodCpuRequest: "250m"
-# serverPodMemoryLimit: "1Gi"
+serverPodMemoryRequest: "1280Mi"
+serverPodCpuRequest: "500m"
+
+# Uncomment and edit value(s) below to specify the maximum amount of compute resources allowed
+# for each server pod.
+# These are optional.
+# Please refer to the kubernetes documentation on Managing Compute
+# Resources for Containers for details.
+# serverPodMemoryLimit: "2Gi"
 # serverPodCpuLimit: "1000m"

 #

kubernetes/samples/scripts/create-fmw-infrastructure-domain/domain-home-on-pv/create-domain.sh

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ function initialize {
   validationError "The template file ${deleteJobInput} for deleting a WebLogic domain was not found"
 fi

-dcrInput="${scriptDir}/../../common/domain-template.yaml"
+dcrInput="${scriptDir}/../../common/jrf-domain-template.yaml"
 if [ ! -f ${dcrInput} ]; then
   validationError "The template file ${dcrInput} for creating the domain resource was not found"
 fi

kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/create-domain-inputs.yaml

Lines changed: 11 additions & 7 deletions
@@ -159,15 +159,19 @@ domainHomeImageBase: container-registry.oracle.com/middleware/weblogic:12.2.1.4
 # which uses WDT, instead of WLST, to generate the domain configuration.
 domainHomeImageBuildPath: ./docker-images/OracleWebLogic/samples/12213-domain-home-in-image

-# Uncomment and edit value(s) below to specify the maximum amount of
-# compute resources allowed, and minimum amount of compute resources
-# required for each server pod.
-# These are optional.
+# Resource requests for each server pod (memory and CPU). This is the minimum amount of compute
+# resources required for each server pod. Edit the value(s) below as per your pod sizing requirements.
+# These are optional.
+# Please refer to the kubernetes documentation on Managing Compute
+# Resources for Containers for details.
+serverPodMemoryRequest: "768Mi"
+serverPodCpuRequest: "250m"
+
+# Uncomment and edit value(s) below to specify the maximum amount of compute resources allowed
+# for each server pod.
+# These are optional.
 # Please refer to the kubernetes documentation on Managing Compute
 # Resources for Containers for details.
-#
-# serverPodMemoryRequest: "64Mi"
-# serverPodCpuRequest: "250m"
 # serverPodMemoryLimit: "1Gi"
 # serverPodCpuLimit: "1000m"

kubernetes/samples/scripts/create-weblogic-domain/domain-home-on-pv/create-domain-inputs.yaml

Lines changed: 11 additions & 5 deletions
@@ -145,16 +145,22 @@ createDomainScriptName: create-domain-job.sh
 # Kubernetes config map, which in turn is mounted to the `createDomainScriptsMountPath`,
 # so that the Kubernetes pod can use the scripts and supporting files to create a domain home.
 createDomainFilesDir: wlst
+
+# Resource requests for each server pod (memory and CPU). This is the minimum amount of compute
+# resources required for each server pod. Edit the value(s) below as per your pod sizing requirements.
+# These are optional.
+# Please refer to the kubernetes documentation on Managing Compute
+# Resources for Containers for details.
+#
+serverPodMemoryRequest: "768Mi"
+serverPodCpuRequest: "250m"

-# Uncomment and edit value(s) below to specify the maximum amount of
-# compute resources allowed, and minimum amount of compute resources
-# required for each server pod.
+# Uncomment and edit value(s) below to specify the maximum amount of compute resources allowed
+# for each server pod.
 # These are optional.
 # Please refer to the kubernetes documentation on Managing Compute
 # Resources for Containers for details.
 #
-# serverPodMemoryRequest: "64Mi"
-# serverPodCpuRequest: "250m"
 # serverPodMemoryLimit: "1Gi"
 # serverPodCpuLimit: "1000m"

kubernetes/samples/scripts/create-weblogic-domain/manually-create-domain/domain.yaml

Lines changed: 5 additions & 2 deletions
@@ -65,7 +65,10 @@ spec:
   - name: JAVA_OPTIONS
     value: "-Dweblogic.StdoutDebugEnabled=false"
   - name: USER_MEM_ARGS
-    value: "-Xms64m -Xmx256m "
+    value: "-Xms256m -Xmx512m "
+  requests:
+    cpu: "250m"
+    memory: "768Mi"

 # If you are storing your domain on a persistent volume (as opposed to inside the Docker image),
 # then uncomment this section and provide the PVC details and mount path here (standard images
@@ -116,4 +119,4 @@ spec:
 replicas: 2

 # The number of managed servers to start for any unlisted clusters
-# replicas: 1
+# replicas: 1
