
Commit b13c280

Soa fix 24.2.2 (#205)
* Update README.md
* Fix for monitoring services
1 parent 71796c9 commit b13c280

2 files changed (+26, −84 lines)


OracleSOASuite/kubernetes/monitoring-service/README.md

Lines changed: 26 additions & 84 deletions
@@ -20,22 +20,29 @@ Set up the WebLogic Monitoring Exporter that will collect WebLogic Server metric
 
 ## Set up manually
 
-### Deploy Prometheus and Grafana
+### Install kube-prometheus-stack
 
-Refer to the compatibility matrix of [Kube Prometheus](https://github.com/coreos/kube-prometheus#kubernetes-compatibility-matrix) and clone the [release](https://github.com/coreos/kube-prometheus/releases) version of the `kube-prometheus` repository according to the Kubernetes version of your cluster.
+Refer to the [kube-prometheus-stack chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) and install the Kube Prometheus stack.
 
-1. Clone the `kube-prometheus` repository:
+1. Get the Helm repository info for `kube-prometheus`:
   ```
-  $ git clone https://github.com/coreos/kube-prometheus.git
+  $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+  $ helm repo update
   ```
 
-1. Change to folder `kube-prometheus` and enter the following commands to create the namespace and CRDs, and then wait for their availability before creating the remaining resources:
-
+1. Install the Helm chart:
   ```
-  $ cd kube-prometheus
-  $ ${KUBERNETES_CLI:-kubectl} create -f manifests/setup
-  $ until ${KUBERNETES_CLI:-kubectl} get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
-  $ ${KUBERNETES_CLI:-kubectl} create -f manifests/
+  namespace=monitoring
+  release_name=myrelease
+  prometheusNodePort=32101
+  alertmanagerNodePort=32102
+  grafanaNodePort=32100
+  $ helm install $release_name prometheus-community/kube-prometheus-stack \
+    --namespace $namespace \
+    --set prometheus.service.type=NodePort --set prometheus.service.nodePort=${prometheusNodePort} \
+    --set alertmanager.service.type=NodePort --set alertmanager.service.nodePort=${alertmanagerNodePort} \
+    --set grafana.adminPassword=admin --set grafana.service.type=NodePort --set grafana.service.nodePort=${grafanaNodePort} \
+    --wait
   ```
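As an aside, the `--set` overrides used in the new install step can equivalently be kept in a small values file passed with `-f`. This is a sketch under my own file name and layout; the keys mirror the `--set` flags shown in the diff above.

```shell
# Sketch: write the same NodePort/password overrides to a values file.
# File name /tmp/monitoring-values.yaml is illustrative.
cat > /tmp/monitoring-values.yaml <<'EOF'
prometheus:
  service:
    type: NodePort
    nodePort: 32101
alertmanager:
  service:
    type: NodePort
    nodePort: 32102
grafana:
  adminPassword: admin
  service:
    type: NodePort
    nodePort: 32100
EOF
# Then (not run here; requires a cluster and the repo added above):
#   helm install myrelease prometheus-community/kube-prometheus-stack \
#     --namespace monitoring -f /tmp/monitoring-values.yaml --wait
```

Keeping the overrides in a file makes upgrades reproducible (`helm upgrade ... -f`) instead of re-typing flag lists.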
 
 1. `kube-prometheus` requires all nodes in the Kubernetes cluster to be labeled with `kubernetes.io/os=linux`. If any node is not labeled with this, then you need to label it using the following command:
@@ -44,78 +51,22 @@ Refer to the compatibility matrix of [Kube Prometheus](https://github.com/coreos
   $ ${KUBERNETES_CLI:-kubectl} label nodes --all kubernetes.io/os=linux
   ```
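The `${KUBERNETES_CLI:-kubectl}` form that recurs in these commands is plain shell parameter expansion: it runs `kubectl` unless the `KUBERNETES_CLI` environment variable overrides it (for example, with `oc` on OpenShift). A minimal demonstration:

```shell
# ${VAR:-default} expands to $VAR if it is set and non-empty, else to "default".
unset KUBERNETES_CLI
echo "${KUBERNETES_CLI:-kubectl}"   # -> kubectl

KUBERNETES_CLI=oc
echo "${KUBERNETES_CLI:-kubectl}"   # -> oc
```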
 
-1. Enter the following commands to provide external access for Grafana, Prometheus, and Alertmanager:
-
-   ```
-   $ ${KUBERNETES_CLI:-kubectl} patch svc grafana -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32100 }]'
-
-   $ ${KUBERNETES_CLI:-kubectl} patch svc prometheus-k8s -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32101 }]'
-
-   $ ${KUBERNETES_CLI:-kubectl} patch svc alertmanager-main -n monitoring --type=json -p '[{"op": "replace", "path": "/spec/type", "value": "NodePort" },{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 32102 }]'
-   ```
+1. With the nodePort values provided during the Helm install, the monitoring services will be available at:
 
-   Note:
    * `32100` is the external port for Grafana
    * `32101` is the external port for Prometheus
    * `32102` is the external port for Alertmanager
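With NodePort services, each port above is reachable on any worker node's address. A sketch of building the URLs; the `kubectl` query for a node IP is shown commented out because it needs a live cluster, and the IP used below is a placeholder:

```shell
# On a real cluster, fetch a node's internal IP, e.g.:
# NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_IP=10.0.0.5   # placeholder for illustration only
echo "Grafana:      http://${NODE_IP}:32100"
echo "Prometheus:   http://${NODE_IP}:32101"
echo "Alertmanager: http://${NODE_IP}:32102"
```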
 
-### Generate the WebLogic Monitoring Exporter Deployment Package
-
-The `wls-exporter.war` package need to be updated and created for each listening ports (Administration Server and Managed Servers) in the domain.
-Set the below environment values based on your environment and run the script `get-wls-exporter.sh` to generate the required WAR files at `${WORKDIR}/monitoring-service/scripts/wls-exporter-deploy`:
-- adminServerPort
-- wlsMonitoringExporterTosoaCluster
-- soaManagedServerPort
-- wlsMonitoringExporterToosbCluster
-- osbManagedServerPort
-
-For example:
-
-```
-$ cd ${WORKDIR}/monitoring-service/scripts
-$ export adminServerPort=7011
-$ export wlsMonitoringExporterTosoaCluster=true
-$ export soaManagedServerPort=8011
-$ export wlsMonitoringExporterToosbCluster=true
-$ export osbManagedServerPort=9011
-$ sh get-wls-exporter.sh
-```
+### Use the Monitoring Exporter with WebLogic Kubernetes Operator
 
-Verify whether the required WAR files are generated at `${WORKDIR}/monitoring-service/scripts/wls-exporter-deploy`.
+To enable the monitoring exporter, add the [monitoringExporter](https://github.com/oracle/weblogic-kubernetes-operator/blob/main/documentation/domains/Domain.md#monitoring-exporter-specification) configuration element in the domain resource.
+The sample configuration available at `${WORKDIR}/monitoring-service/config/config.yaml` can be added to your domain using the command below:
 
 ```
-$ ls ${WORKDIR}/monitoring-service/scripts/wls-exporter-deploy
+$ kubectl patch domain ${domainUID} -n ${domainNamespace} --patch-file ${WORKDIR}/monitoring-service/config/config.yaml --type=merge
 ```
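The contents of the repository's `config.yaml` are not reproduced here. For orientation only, a `monitoringExporter` element in a Domain resource generally has a shape like the following; the field names follow the operator's Domain schema linked above, while the image tag and the query shown are hypothetical examples, not the shipped configuration.

```yaml
# Illustration only -- not the repository's config.yaml.
spec:
  monitoringExporter:
    image: "ghcr.io/oracle/weblogic-monitoring-exporter:2.1.9"  # hypothetical tag
    port: 8080
    configuration:
      metricsNameSnakeCase: true
      queries:
        - applicationRuntimes:       # hypothetical example query
            key: name
            componentRuntimes:
              prefix: webapp_config_
              type: WebAppComponentRuntime
              key: name
```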
 
-### Deploy the WebLogic Monitoring Exporter into the OracleSOASuite domain
-
-Follow these steps to copy and deploy the WebLogic Monitoring Exporter WAR files into the OracleSOASuite Domain.
-
-**Note**: Replace the `<xxxx>` with appropriate values based on your environment:
-
-```
-$ cd ${WORKDIR}/monitoring-service/scripts
-$ ${KUBERNETES_CLI:-kubectl} cp wls-exporter-deploy <namespace>/<admin_pod_name>:/u01/oracle
-$ ${KUBERNETES_CLI:-kubectl} cp deploy-weblogic-monitoring-exporter.py <namespace>/<admin_pod_name>:/u01/oracle/wls-exporter-deploy
-$ ${KUBERNETES_CLI:-kubectl} exec -it -n <namespace> <admin_pod_name> -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py \
-  -domainName <domainUID> -adminServerName <adminServerName> -adminURL <adminURL> \
-  -soaClusterName <soaClusterName> -wlsMonitoringExporterTosoaCluster <wlsMonitoringExporterTosoaCluster> \
-  -osbClusterName <osbClusterName> -wlsMonitoringExporterToosbCluster <wlsMonitoringExporterToosbCluster> \
-  -username <username> -password <password>
-```
-
-For example:
-
-```
-$ cd ${WORKDIR}/monitoring-service/scripts
-$ ${KUBERNETES_CLI:-kubectl} cp wls-exporter-deploy soans/soainfra-adminserver:/u01/oracle
-$ ${KUBERNETES_CLI:-kubectl} cp deploy-weblogic-monitoring-exporter.py soans/soainfra-adminserver:/u01/oracle/wls-exporter-deploy
-$ ${KUBERNETES_CLI:-kubectl} exec -it -n soans soainfra-adminserver -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/wls-exporter-deploy/deploy-weblogic-monitoring-exporter.py \
-  -domainName soainfra -adminServerName AdminServer -adminURL soainfra-adminserver:7011 \
-  -soaClusterName soa_cluster -wlsMonitoringExporterTosoaCluster true \
-  -osbClusterName osb_cluster -wlsMonitoringExporterToosbCluster true \
-  -username weblogic -password Welcome1
-```
+This will trigger a restart of the domain. The newly created server pods will have the exporter sidecar. See https://github.com/oracle/weblogic-monitoring-exporter for details.
 
### Configure Prometheus Operator
@@ -186,21 +137,15 @@ The following parameters can be provided in the inputs file.
 | `setupKubePrometheusStack` | Boolean value indicating whether kube-prometheus-stack (Prometheus, Grafana, and Alertmanager) is to be installed. | `true` |
 | `additionalParamForKubePrometheusStack` | The script installs kube-prometheus-stack with `service.type` as NodePort and values for `service.nodePort` as per the parameters defined in `monitoring-inputs.yaml`. Use the `additionalParamForKubePrometheusStack` parameter to further configure with additional parameters as per [values.yaml](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml). A sample value to disable NodeExporter, Prometheus-Operator TLS support, and admission webhook support for PrometheusRules resources, and to set a custom Grafana image repository, is `--set nodeExporter.enabled=false --set prometheusOperator.tls.enabled=false --set prometheusOperator.admissionWebhooks.enabled=false --set grafana.image.repository=xxxxxxxxx/grafana/grafana`. | |
 | `monitoringNamespace` | Kubernetes namespace for monitoring setup. | `monitoring` |
+| `monitoringHelmReleaseName` | Helm release name for monitoring resources. | `monitoring` |
 | `adminServerName` | Name of the Administration Server. | `AdminServer` |
-| `adminServerPort` | Port number for the Administration Server inside the Kubernetes cluster. | `7011` |
-| `soaClusterName` | Name of the soaCluster. | `soa_cluster` |
-| `soaManagedServerPort` | Port number of the managed servers in the soaCluster. | `8011` |
-| `wlsMonitoringExporterTosoaCluster` | Boolean value indicating whether to deploy WebLogic Monitoring Exporter to soaCluster. | `false` |
-| `osbClusterName` | Name of the osbCluster. | `osb_cluster` |
-| `osbManagedServerPort` | Port number of the managed servers in the osbCluster. | `9011` |
-| `wlsMonitoringExporterToosbCluster` | Boolean value indicating whether to deploy WebLogic Monitoring Exporter to osbCluster. | `false` |
 | `exposeMonitoringNodePort` | Boolean value indicating if the monitoring services (Prometheus, Grafana, and Alertmanager) are exposed outside of the Kubernetes cluster. | `false` |
 | `prometheusNodePort` | Port number of Prometheus outside the Kubernetes cluster. | `32101` |
 | `grafanaNodePort` | Port number of Grafana outside the Kubernetes cluster. | `32100` |
 | `alertmanagerNodePort` | Port number of Alertmanager outside the Kubernetes cluster. | `32102` |
 | `weblogicCredentialsSecretName` | Name of the Kubernetes secret which holds the Administration Server's user name and password. | `soainfra-domain-credentials` |
 
-Note that the values specified in the `monitoring-inputs.yaml` file will be used to install kube-prometheus-stack (Prometheus, Grafana and Alertmanager) and deploying WebLogic Monitoring Exporter into the OracleSOASuite domain. Hence make the domain specific values to be same as that used during domain creation.
+Note that the values specified in the `monitoring-inputs.yaml` file will be used both to install kube-prometheus-stack (Prometheus, Grafana, and Alertmanager) and to enable the WebLogic Monitoring Exporter in the OracleSOASuite domain. Hence, keep the domain-specific values the same as those used during domain creation.
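Grounded in the parameter table above, a minimal `monitoring-inputs.yaml` after this change might look like the following sketch. The defaults are taken from the table; `domainUID` and `domainNamespace` are assumed keys, inferred only from the `${domainUID}`/`${domainNamespace}` variables used in the patch command, and the shipped file may contain more entries.

```yaml
# Sketch assembled from the documented parameters; not the shipped file.
domainUID: soainfra            # assumed key, matching the ${domainUID} usage
domainNamespace: soans         # assumed key, matching ${domainNamespace}
setupKubePrometheusStack: true
additionalParamForKubePrometheusStack: ""
monitoringNamespace: monitoring
monitoringHelmReleaseName: monitoring
adminServerName: AdminServer
exposeMonitoringNodePort: false
prometheusNodePort: 32101
grafanaNodePort: 32100
alertmanagerNodePort: 32102
weblogicCredentialsSecretName: soainfra-domain-credentials
```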
### Run the setup monitoring script
@@ -214,13 +159,10 @@ $ ./setup-monitoring.sh \
 The script will perform the following steps:
 
 - Helm install `prometheus-community/kube-prometheus-stack` if `setupKubePrometheusStack` is set to `true`.
-- Deploys WebLogic Monitoring Exporter to Administration Server.
-- Deploys WebLogic Monitoring Exporter to `soaCluster` if `wlsMonitoringExporterTosoaCluster` is set to `true`.
-- Deploys WebLogic Monitoring Exporter to `osbCluster` if `wlsMonitoringExporterToosbCluster` is set to `true`.
+- Configures the Monitoring Exporter as a sidecar.
 - Exposes the monitoring services (Prometheus at `32101`, Grafana at `32100`, and Alertmanager at `32102`) outside of the Kubernetes cluster if `exposeMonitoringNodePort` is set to `true`.
 - Imports the WebLogic Server Grafana dashboard if `setupKubePrometheusStack` is set to `true`.
 
 ### Verify the results
 
 The setup monitoring script will report failure if there was any error. However, verify that the required resources were created by the script.
