
Commit ef8490a

add prometheus
1 parent e09f57f commit ef8490a

File tree

1 file changed

+37
-18
lines changed


docs/http/http-installation-guide.md

Lines changed: 37 additions & 18 deletions
@@ -203,7 +203,7 @@ The Nginx Ingress Controller in this Solution is the destination target for traf
203203
The NGINX Ingress Controller in this Solution is the destination target for traffic (north-south) that is being sent to the cluster(s). The installation of the actual Ingress Controller is outside the scope of this guide, but links to the docs are included for your reference. `The NIC installation using Manifests must follow the documents exactly as written`, as this Solution depends on the `nginx-ingress` namespace and Service objects. **Only the very last step is changed.**
204204
205205
206-
**NOTE:** This Solution only works with `nginx-ingress from NGINX`. It will `not` work with the K8s Community version of Ingress, called ingress-nginx.
206+
**NOTE:** This Solution only works with `nginx-ingress from NGINX`. It will not work with the K8s Community version of Ingress, called ingress-nginx.
207207

208208
If you are unsure which Ingress Controller you are running, check out the blog on nginx.com:
209209

@@ -984,7 +984,7 @@ https://www.nginx.com/free-trial-request/
984984
- Plus Dashboard enabled, used for testing, monitoring, and visualization of the Solution working.
985985
- The `http` context is used for MultiCluster Loadbalancing, for HTTP/S processing, Split Clients ratio, and prometheus exporting.
986986
- Plus KeyValue store is configured, to hold the dynamic Split ratio metadata.
987-
- Plus Zone Sync on Port 9001 is configured, to synchronize the dynamic KVstore data between multiple NGINX LB Servers.
987+
- Plus Zone Sync on Port 9001 is configured, to synchronize the dynamic KeyVal data between multiple NGINX LB Servers.
988988
989989
<br/>
990990
@@ -1295,7 +1295,7 @@ server {
12951295
12961296
```
12971297
1298-
- High Availability: If you have 2 or more NGINX Plus LB Servers, you can use Zone Sync to synchronize the Split Key Value Store data between the NGINX Servers automatically. Use the `zonesync.conf` example file provided, change the IP addresses to match your NGINX LB Servers. Place this file in /etc/nginx/stream folder, and reload NGINX. Note: This example does not provide any security for the Zone Sync traffic, secure as necessary with TLS or IP allowlist.
1298+
- High Availability: If you have 2 or more NGINX Plus LB Servers, you can use Zone Sync to synchronize the KeyValue SplitRatio data between the NGINX Servers automatically. Use the `zonesync.conf` example file provided, changing the IP addresses to match your NGINX LB Servers. Place this file in the /etc/nginx/stream folder and reload NGINX. Note: this example does not provide any security for the Zone Sync traffic; secure it as necessary with TLS or an IP allowlist.
12991299
13001300
```bash
13011301
cat zonesync.conf
@@ -1314,15 +1314,15 @@ server {
13141314
13151315
listen 9001;
13161316
1317-
# cluster of 2 nodes
1317+
# Zone Sync with 2 nodes
13181318
zone_sync_server 10.1.1.4:9001;
13191319
zone_sync_server 10.1.1.5:9001;
13201320
13211321
}
13221322
13231323
```
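Following the security note above, one minimal hardening option is an IP allowlist on the Zone Sync listener, using the stream access module. This is a sketch under assumptions, not one of the Solution's provided files:

```nginx
# Hypothetical hardening sketch for zonesync.conf -- illustrative only.
# Restricts the Zone Sync port to the LB peers themselves; everything else is denied.
server {
    listen 9001;

    # allow only the two NGINX LB Servers
    allow 10.1.1.4;
    allow 10.1.1.5;
    deny  all;

    zone_sync;
    zone_sync_server 10.1.1.4:9001;
    zone_sync_server 10.1.1.5:9001;
}
```

For encryption on top of the allowlist, the `zone_sync_ssl` family of directives can be layered in.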
13241324
1325-
Watching the NGINX Plus Dashboard, you will see messages sent/received if Zone Synch is operating correctly:
1325+
Watching the NGINX Plus Dashboard, on the Cluster tab, you will see messages sent and received if Zone Sync is operating correctly:
13261326
13271327
![Zone Sync](../media/nkl-zone-sync.png)
13281328
@@ -1336,7 +1336,7 @@ Watching the NGINX Plus Dashboard, you will see messages sent/received if Zone S
13361336
13371337
<br/>
13381338
1339-
This is the new K8s Controller from NGINX, which is configured to watch the k8s environment, the `nginx-ingress Service` object, and send API updates to the NGINX LB Server when there are changes. It only requires three things.
1339+
### This is the new K8s Controller from NGINX. It is configured to watch the k8s environment and the `nginx-ingress` Service object, and to send API updates to the NGINX LB Server when there are changes. It only requires three things:
13401340
13411341
- New kubernetes namespace and RBAC
13421342
- NKL ConfigMap, to configure the Controller
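As a rough sketch of the ConfigMap item, the Controller is pointed at the Plus API endpoint on the LB Server. The field names below are assumptions for illustration; the ConfigMap manifest shipped with the NKL repo is the source of truth:

```yaml
# Hypothetical sketch only -- use the real manifest from the NKL repo.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nkl-config
  namespace: nkl
data:
  # NGINX Plus API endpoint(s) on the LB Server(s)
  nginx-hosts: "http://10.1.1.4:9000/api"
```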
@@ -1421,7 +1421,7 @@ kubectl get svc nginx-ingress -n nginx-ingress
14211421
![NGINX Ingress NodePort Service](../media/nkl-cluster1-nodeport.png)
14221422
![NGINX Ingress NodePort Service](../media/nkl-cluster1-upstreams.png)
14231423
1424-
### NodePort is 443:30267, K8s Workers are 10.1.1.8 and .10.
1424+
### NodePort mapping is 443:30267, K8s Workers are 10.1.1.8 and .10.
14251425
14261426
<br/>
14271427
<br/>
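For reference, a NodePort mapping like 443:30267 comes from a Service of roughly this shape. This is a minimal sketch; the Solution's actual nodeport-cluster1.yaml may differ, and the nodePort value is normally assigned by K8s from the 30000-32767 range:

```yaml
# Minimal sketch of a NodePort Service for the Ingress Controller (illustrative only)
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
      nodePort: 30267   # the 443:30267 mapping shown above
```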
@@ -1466,6 +1466,7 @@ Cluster2 Worker Node addresses are:
14661466
- 10.1.1.11
14671467
- 10.1.1.12
14681468
1469+
14691470
14701471
Note: K8s Control Nodes are excluded from the list intentionally.
14711472
@@ -1525,6 +1526,9 @@ Note: K8s Control Nodes are excluded from the list intentionally.
15251526
`The NKL Controller detects this change, and modifies the LB Server(s) upstreams to match.` The Dashboard will show you the new Port numbers, matching the new LoadBalancer or NodePort definitions. The NKL logs show these messages, confirming the changes:
15261527
15281532
15291533
<br/>
15301534
@@ -1546,7 +1550,7 @@ Using a Terminal and `./kube Context set for Cluster1`, delete the `nginx-ingres
15461550
kubectl delete -f nodeport-cluster1.yaml
15471551
```
15481552
1549-
Now the `nginx-ingress` Service is gone, and the Cluster1 upstream list will now be empty in the Dashboard.
1553+
Now the `nginx-ingress` Service is gone, and the Cluster1 upstream list will be empty in the Dashboard. The NKL Logs will show that it has `DELETED` the upstream servers!
15501554
15511555
![NGINX No Cluster1 NodePort](../media/nkl-cluster1-delete-nodeport.png)
15521556
Legend:
@@ -1655,27 +1659,38 @@ The only tool you need for this, is an HTTP load generation tool. WRK, running
16551659
## 7. Testing MultiCluster Loadbalancing with HTTP Split Clients
16561660
16571661
1662+
<br/>
1663+
16581664
In this section, you will generate some HTTP load on the NGINX LB Server, and watch as it sends traffic to both Clusters. Then you will `dynamically change the Split ratio`, and watch NGINX send different traffic levels to each cluster.
16591665
16601666
The only tool you need for this is an HTTP load generation tool; WRK, running in a Docker container outside the cluster, is what is shown here.
16611667
16621668
Start WRK on a client outside the cluster. This command runs WRK for 15 minutes, targeting the NGINX LB Server URL https://10.1.1.4/coffee. The Host header `cafe.example.com` is required, as NGINX is configured for this server_name (and so is the NGINX Ingress Controller).
16631669
1670+
In these test examples, the NGINX LB Servers and IPs in the hosts file are:
1671+
1672+
```bash
1673+
cat /etc/hosts
1674+
1675+
10.1.1.4   nginxlb
1676+
10.1.1.5   nginxlb2
1677+
```
1678+
16641679
```bash
16651680
docker run --rm williamyeh/wrk -t2 -c200 -d15m -H 'Host: cafe.example.com' --timeout 2s https://10.1.1.4/coffee
16661681
```
16671682
16681683
![nkl Clusters 50-50](../media/nkl-clusters-50.png)
16691684
1670-
You see the traffic is load balanced between cluster1 and cluster2 at 50/50 ratio.
1685+
You see the traffic is load balanced between cluster1 and cluster2 at a 50:50 ratio.
16711686
1672-
Add a record to the KV store, by sending an API command to NGINX Plus:
1687+
Add a record to the KeyValue store, by sending an API command to NGINX Plus:
16731688
16741689
```bash
16751690
curl -iX POST -d '{"cafe.example.com":50}' http://nginxlb:9000/api/8/http/keyvals/split
16761691
```
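If the POST is accepted, the `-i` flag should show a success status in the response headers, along these lines (hedged: exact headers vary by NGINX Plus version):

```
HTTP/1.1 201 Created
```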
16771692
1678-
Verify the API record is there, on both NGINX LB Servers:
1693+
Verify the `split KeyVal record` is there, on both NGINX LB Servers:
16791694
```bash
16801695
curl http://nginxlb:9000/api/8/http/keyvals/split
16811696
curl http://nginxlb2:9000/api/8/http/keyvals/split
@@ -1685,11 +1700,11 @@ curl http://nginxlb2:9000/api/8/http/keyvals/split
16851700
16861701
If the KV data is missing on one LB Server, Zone Sync is not working and must be fixed.
16871702
1688-
>Notice the difference in HTTP Response Times, Cluster2 is running much faster than Cluster1 ! (The Red and Green highlights on the Dashboard)
1703+
>Notice the difference in HTTP Response Times in the Dashboard, highlighted in Red and Green: Cluster2 is responding much faster than Cluster1!
16891704
1690-
So, you decide to send less traffic to Cluster1, and more to Cluster2. You will set the HTTP Split ratio to 10/90 = 10% to Cluster1, 90% to Cluster2.
1705+
So, you decide to send less traffic to Cluster1, and more to Cluster2. You will set the HTTP Split ratio to 10:90 = 10% to Cluster1, 90% to Cluster2.
16911706
1692-
Remember: This Solution example configures NGINX for Cluster1 to use the Split value, and the remaining percentage of traffic is sent to Cluster2.
1707+
Remember: This Split Clients example configures NGINX for Cluster1 to use the Split KeyValue, and the remaining percentage of traffic is sent to Cluster2.
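Under the hood, a common way to wire up a dynamic ratio like this is a keyval-selected split_clients variable. This is a sketch under assumptions -- the upstream names, zone name, and state file path here are illustrative, and the Solution's own config files are authoritative:

```nginx
# Illustrative sketch, not the Solution's actual file.
# The keyval zone stores the Cluster1 percentage per hostname (set via the API above).
keyval_zone zone=split:1m state=/var/lib/nginx/state/split.json;
keyval $host $split_level zone=split;

# Pre-built splits; the keyval picks which one is active.
split_clients $request_id $split50 {
    50%  cluster1-cafe;
    *    cluster2-cafe;
}
split_clients $request_id $split10 {
    10%  cluster1-cafe;
    *    cluster2-cafe;
}

map $split_level $upstream {
    50       $split50;
    10       $split10;
    default  $split50;
}

# In the server block, proxy_pass resolves $upstream to a named upstream{} group:
# server { ... proxy_pass https://$upstream; ... }
```

Because $upstream evaluates to the name of an upstream{} block defined in the config, proxy_pass needs no resolver here.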
16931708
16941709
Change the KV Split Ratio to 10:
16951710
```bash
@@ -1718,8 +1733,6 @@ This Completes the Testing Section.
17181733
<br/>
17191734
>>>>>>> change to NGINX
17201735
1721-
Prometheus | Grafana
1722-
17231736
![](../media/prometheus-icon.png) |![](../media/grafana-icon.png)
17241737
--- | ---
17251738
@@ -1857,7 +1870,13 @@ scrape_configs:
18571870
sudo docker run --restart always --network="host" -d -p 9090:9090 --name=prometheus -v ~/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
18581871
```
18591872
1860-
Prometheus Web Console access to the data is on <monitor-server-ip:9090>.
1873+
Prometheus Web Console access to the data is at http://<monitor-server-ip>:9090.
1874+
1875+
Explore some of the metrics available. Try a query for `nginxplus_upstream_server_response_time`:
1876+
1877+
![NGINX Prom HTTP Requests](../media/prometheus-upstreams.png)
1878+
1879+
>Wow, look at the variance in performance!
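To compare the clusters directly, an aggregation over that metric can help. This is a sketch: the exact label names depend on the exporter output, so check your metric labels first:

```
avg by (upstream) (nginxplus_upstream_server_response_time)
```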
18611880
18621881
<br/>
18631882
@@ -1875,7 +1894,7 @@ docker volume create grafana-storage
18751894
sudo docker run --restart always -d -p 3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana
18761895
```
18771896
1878-
Web console access to Grafana is on <monitor-server-ip:3000>. Login is admin/admin.
1897+
Web console access to Grafana is at http://<monitor-server-ip>:3000. Login is admin/admin.
18791898
18801899
You can import the provided `grafana-dashboard.json` file to see the NGINX Plus statistics for Cluster1 and Cluster2: HTTP RPS and upstream response times.
18811900
