Changed file: content/en/docs/Tasks/performance-profiling.md (69 additions, 41 deletions)
@@ -1,8 +1,8 @@
 ---
-title: "Continuous Profiling"
+title: "Performance Profiling Metrics"
 weight: 10
 description: >
-  The goal of this document is to familiarize you with OLM's stance on performance profiling.
+  The goal of this document is to familiarize you with the steps to enable and review OLM's performance profiling metrics.
 ---

 ## Prerequisites
@@ -11,60 +11,88 @@ description: >

 ## Background

-OLM utilizes the [pprof package](https://golang.org/pkg/net/http/pprof/) from the standard go library to expose performance profiles for the OLM Operator, the Catalog Operator, and Registry Servers. Due to the sensitive nature of this data, client requests against the pprof endpoint are rejected unless they are made with the certificate data kept in the `pprof-cert` secret in the `olm` namespace.
-Kubernetes does not provide a native way to prevent pods on the cluster from iterating over the list of available ports and retrieving the data exposed. Without authenticating the requests, OLM could leak customer usage statistics on multitenant clusters. If the aforementioned secret does not exist, the pprof data will not be accessible.
+OLM utilizes the [pprof package](https://golang.org/pkg/net/http/pprof/) from the standard go library to expose performance profiles for the OLM Operator, the Catalog Operator, and Registry Servers. Due to the sensitive nature of this data, OLM must be configured to use TLS certificates before performance profiling can be enabled.

-### Retrieving PProf Data
+Requests against the performance profiling endpoint will be rejected unless the client certificate is validated by OLM. Unfortunately, Kubernetes does not provide a native way to prevent pods on the cluster from iterating over the list of available ports and retrieving the data exposed. Without authenticating the requests, OLM could leak customer usage statistics on multitenant clusters.

-#### OLM Operator
+This document walks through the steps to [enable OLM performance profiling](#enabling-performance-profiling) and to retrieve pprof data from each component.
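For context, a client that presents a certificate and key trusted by OLM can pull a profile over HTTPS and inspect it locally. The sketch below is illustrative only and is not part of the change above; the deployment name, port, and file names are assumptions (port `8443` matches the patch described later in the diff).

```bash
# Illustrative sketch: fetch a heap profile from the OLM operator using a trusted client certificate.
# Assumes the deployment is named olm-operator and serves profiles on port 8443 after the patch below.
$ kubectl -n olm port-forward deploy/olm-operator 8443:8443 &

# Present the client certificate and key; --cacert should point at the CA that signed OLM's serving cert.
# (For a quick test against localhost you may need to relax hostname verification.)
$ curl --cacert ca.crt --cert client.crt --key client.key \
    https://localhost:8443/debug/pprof/heap -o heap.pprof

# Analyze the downloaded profile locally.
$ go tool pprof heap.pprof
```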

-```bash
-$ go tool pprof http://localhost:8080/debug/pprof/heap #TODO: Replace with actual command
-```
+## Enabling Performance Profiling

-#### Catalog Operator
+### Creating a Certificate

-```bash
-$ go tool pprof http://localhost:8080/debug/pprof/heap #TODO: Replace with actual command
-```
+A valid server certificate must be created for each component before performance profiling can be enabled. If you are unfamiliar with certificate generation, we recommend using the [OpenSSL](https://www.openssl.org/) toolkit; refer to the [request certificate](https://www.openssl.org/docs/manmaster/man1/openssl-req.html) documentation.
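If you do not already have a key pair, one way (purely illustrative, with arbitrary subject and file names) is to generate a self-signed certificate with `openssl req`:

```bash
# Illustrative: generate a self-signed certificate and unencrypted private key valid for one year.
$ openssl req -x509 -newkey rsa:4096 -nodes \
    -keyout private.key -out certificate.key \
    -days 365 -subj "/CN=olm-profiling.olm.svc"
```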

-#### Registry Server
+Once you have generated a private key and a public certificate, this data should be stored in a TLS `Secret`:

 ```bash
-$ go tool pprof http://localhost:8080/debug/pprof/heap #TODO: Replace with actual command
+$ export PRIVATE_KEY_FILENAME=private.key # Replace with the name of the file that contains the private key you generated.
+$ export PUBLIC_KEY_FILENAME=certificate.key # Replace with the name of the file that contains the public key you generated.
+
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: olm-serving-secret
+  namespace: olm
+type: kubernetes.io/tls
+data:
+  tls.key: $(base64 $PRIVATE_KEY_FILENAME | tr -d '\n')
+  tls.crt: $(base64 $PUBLIC_KEY_FILENAME | tr -d '\n')
+EOF
 ```
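As an aside, the same Secret can also be created directly from the two files with `kubectl create secret tls`, which skips the manual base64 encoding:

```bash
# Equivalent to the manifest above; kubectl handles the base64 encoding.
$ kubectl -n olm create secret tls olm-serving-secret \
    --cert=$PUBLIC_KEY_FILENAME --key=$PRIVATE_KEY_FILENAME
```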

-<details>
-<summary>Downstream docs, click to expand!</summary>
-
-## Continuous Profiling
-OLM relies on [pprof-dump]() to periodically collect the pprof data and store it in the contents of a `ConfigMap`. The data in these `ConfigMaps` may be referenced when debugging issues.
+### Retrieving the Performance Profile from the OLM Deployment

-### Default PProf-Dump Settings
+Patch the OLM Deployment's pod template to use the generated TLS secret by:

-OLM configures pprof-dump with the `pprof-dump ConfigMap`, setting the following default configurations:
+- Defining a volume and volumeMount
+- Adding the `client-ca`, `tls-key` and `tls-crt` arguments
+- Replacing all mentions of port `8080` with `8443`
+- Updating the `livenessProbe` and `readinessProbe` to use HTTPS as the scheme.
53
47
-
```yaml
48
-
kind: ConfigMap
49
-
metadata:
50
-
name: prof-dump
51
-
namespace: olm
52
-
Data:
53
-
garbageCollection: 60# Delete configmaps older than 60 minutes
54
-
poll: 15# interval in minutes that pprof data is collected and dumped into ConfigMaps
54
+
This can be done with the following commands:
55
+
56
+
```bash
57
+
$ export CERT_PATH=/etc/olm-serving-certs # Define where to mount the certs.
0 commit comments