
Commit c49df5a

doc: fix description of the PERFORMANCE_PROFILE_INPUT_FILES env var (openshift#399)

* Minor correction in the Performance Addon doc: the environment variable `PERFORMANCE_PROFILE_INPUT_FILES` used in render mode should be a comma-separated list of PerformanceProfile manifests, not a folder.
* Solve minor layout issues in the Markdown file.

1 parent 285eec9 commit c49df5a
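The corrected contract described in the commit message can be sketched in shell. The manifest file names below are hypothetical examples, and the comma-splitting is an illustration of how a comma-separated list is consumed, not the operator's actual code:

```shell
# PERFORMANCE_PROFILE_INPUT_FILES is a comma-separated list of
# PerformanceProfile manifest files, not a directory path.
# The file names here are hypothetical examples.
export PERFORMANCE_PROFILE_INPUT_FILES="manifests/profile-worker.yaml,manifests/profile-rt.yaml"

# Splitting on commas yields the individual manifest paths:
IFS=',' read -r -a profiles <<< "$PERFORMANCE_PROFILE_INPUT_FILES"
printf '%s\n' "${profiles[@]}"
```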

File tree

1 file changed: +22 −21 lines changed

docs/performanceprofile/performance_controller.md (+22 −21)
@@ -2,7 +2,7 @@
 
 The `Performance Profile Controller` optimizes OpenShift clusters for applications sensitive to cpu and network latency.
 
-![alt text](https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/interactions/diagram.png "How Performance Profile Controller interacts with other components and operators")
+!["How Performance Addon Controller interacts with other components and operators"](https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/interactions/diagram.png)
 
 ## PerformanceProfile
 
@@ -12,17 +12,17 @@ for applying various performance tunings to cluster nodes.
 The performance profile API is documented in detail in the [Performance Profile](performance_profile.md) doc.
 Follow the [API versions](api-versions.md) doc to check the supported API versions.
 
-# Building and pushing the operator images
+## Building and pushing the operator images
 
 TBD
 
-# Deploying
+## Deploying
 
 If you use your own images, make sure they are made public in your quay.io account!
 
 Deploy a performance profile configuration by running:
 
-```
+```shell
 CLUSTER=manual make cluster-deploy-pao
 ```
 
@@ -37,72 +37,74 @@ The deployment will be retried in a loop until everything is deployed successful
 In CI the `test/e2e/performanceprofile/cluster-setup/ci-cluster/performance/` dir will be used. The difference is that the CI cluster will deploy
 the PerformanceProfile in the test code, while the `manual` cluster includes it in the kustomize based deployment.
 
 Now you need to label the nodes which should be tuned. This can be done with:
 
-```
+```shell
 make cluster-label-worker-cnf
 ```
 
 This will label one worker node with the `worker-cnf` role, and OCP's `Machine Config Operator` will start tuning this node.
 
 In order to wait until MCO is ready, you can watch the `MachineConfigPool` until it is marked as updated with:
 
-```
+```shell
 CLUSTER=manual make cluster-wait-for-pao-mcp
 ```
 
 > Note: Be aware this can take quite a while (many minutes).
 
 > Note: in CI this step is skipped, because the test code will wait for the MCP being up to date.
 
-# Render mode
+## Render mode
 
 The operator can render manifests for all the components it is supposed to create, based on a given `PerformanceProfile`.
 
 You need to provide the following environment variables:
 
-```
-export PERFORMANCE_PROFILE_INPUT_FILES=<your PerformanceProfile directory path>
+```shell
+export PERFORMANCE_PROFILE_INPUT_FILES=<comma separated list of your Performance Profiles>
 export ASSET_OUTPUT_DIR=<output path for the rendered manifests>
 ```
 
 Build and invoke the binary:
 
-```
+```shell
 _output/cluster-node-tuning-operator render
 ```
 
 Or provide the variables via command line arguments:
 
-```
+```shell
 _output/cluster-node-tuning-operator render --performance-profile-input-files <path> --asset-output-dir <path>
 ```
 
-# Troubleshooting
+## Troubleshooting
 
 When the deployment fails, or the performance tuning does not work as expected, follow the [Troubleshooting Guide](troubleshooting.md)
 for debugging the cluster. Please provide as much info from troubleshooting as possible when reporting issues. Thanks!
 
-# Testing
+## Testing
 
-## Unit tests
+### Unit tests
 
 Unit tests can be executed with `make test-unit`.
 
-## Func tests
+### Func tests
 
 The functional tests are located in `/functests`. They can be executed with `make pao-functests-only` on a cluster with a
 deployed Performance Profile Controller and configured MCP and nodes. It will create its own Performance profile!
 
-### Latency test
+#### Latency test
 
 The latency-test container image gives the possibility to run the latency
 test without the need to install go, ginkgo or other go related modules.
 
 The test itself runs the `oslat`, `cyclictest` and `hwlatdetect` binaries and verifies that the maximal latency returned by each of the tools is
 less than the value specified under `MAXIMUM_LATENCY`.
 
 To run the latency test inside the container:
 
-```
+```shell
 docker run --rm -v /kubeconfig:/kubeconfig \
 -e KUBECONFIG=/kubeconfig \
 -e LATENCY_TEST_RUN=true \
@@ -124,4 +126,3 @@ You can run the container with different ENV variables, but the bare minimum is
 - `CYCLICTEST_MAXIMUM_LATENCY` the expected maximum latency for the cyclictest test.
 - `HWLATDETECT_MAXIMUM_LATENCY` the expected maximum latency for the hwlatdetect test.
 - `MAXIMUM_LATENCY` a unified value for the expected maximum latency for all tests (in case both are provided, the specific variables take precedence over the unified one).
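The precedence rule in the last bullet of the diff (a tool-specific maximum overrides the unified `MAXIMUM_LATENCY`) can be sketched with shell default-value expansion; the numeric values are illustrative assumptions, not defaults of the test suite:

```shell
# Sketch of the documented precedence: when both a tool-specific
# maximum and the unified MAXIMUM_LATENCY are set, the specific
# one wins; otherwise the unified value is the fallback.
# The numeric values are hypothetical examples.
MAXIMUM_LATENCY=20
CYCLICTEST_MAXIMUM_LATENCY=10   # overrides the unified value for cyclictest
HWLATDETECT_MAXIMUM_LATENCY=""  # not set for this run -> falls back

effective_cyclictest="${CYCLICTEST_MAXIMUM_LATENCY:-$MAXIMUM_LATENCY}"
effective_hwlatdetect="${HWLATDETECT_MAXIMUM_LATENCY:-$MAXIMUM_LATENCY}"

echo "cyclictest=$effective_cyclictest hwlatdetect=$effective_hwlatdetect"
```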