# NGINX Loadbalancer for Kubernetes - HTTP MultiCluster LB Solution
<br/>
## This is the `HTTP Installation Guide` for the NGINX Loadbalancer for Kubernetes Controller Solution. It contains detailed instructions for implementing the different components of the Solution.
### This Solution from NGINX provides enterprise-class features which address common challenges with networking, traffic management, and High Availability for on-premises Kubernetes Clusters.
1. Provides a `replacement Loadbalancer Service.` The Loadbalancer Service is a key component provided by most Cloud Providers. However, when running a K8s Cluster on premises, the `Loadbalancer Service is not available.`
This Solution provides a replacement, using an NGINX Server, and a new K8s Controller from NGINX. These two components work together to watch the `nginx-ingress Service` in the cluster, and immediately update the NGINX LB Server when changes occur.
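As a sketch of how these two components meet: the NGINX LB Server exposes the NGINX Plus API over a shared-memory upstream zone, and the Controller writes Worker Node entries into that upstream at runtime. The upstream name, port, and file paths below are illustrative assumptions, not the Solution's actual config files:

```nginx
# Sketch only - upstream name, port, and paths are assumptions for illustration.
upstream cluster1-https {
    zone cluster1-https 256k;                         # shared memory zone, required for API updates
    state /var/lib/nginx/state/cluster1-https.state;  # persist dynamic servers across restarts
    # No static servers here: the Controller adds and removes NodeIP:NodePort
    # entries at runtime through the NGINX Plus API.
}

server {
    listen 9000;
    location /api {
        api write=on;   # read/write NGINX Plus API endpoint the Controller calls
    }
}
```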
2. Provides `MultiCluster Load Balancing`, traffic steering, health checks, TLS termination, advanced LB algorithms, and enhanced metrics.
3. Provides dynamic, ratio-based Load Balancing for Multiple Clusters. This allows for advanced traffic steering and operational efficiency, with no Reloads or downtime.
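A minimal sketch of what ratio-based steering looks like with the NGINX HTTP Split Clients module; the variable, ratios, and upstream names here are assumptions for illustration, and the Solution's actual mechanism for changing ratios without a reload (such as the NGINX Plus key-value store) is not shown:

```nginx
# Sketch only - ratios and names are illustrative assumptions.
split_clients $request_id $upstream_choice {
    10%   cluster1-https;   # send 10% of requests to Cluster1
    *     cluster2-https;   # remainder goes to Cluster2
}

server {
    listen 443 ssl;
    location / {
        proxy_pass https://$upstream_choice;   # upstream is chosen per request
    }
}
```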
<br/>

### Kubernetes Clusters

<br/>
A standard K8s cluster is all that is required, or two or more Clusters if you want the `Active/Active MultiCluster Load Balancing Solution` using HTTP Split Clients. There must be enough resources available to run the NGINX Ingress Controller, the NGINX Loadbalancer for Kubernetes Controller, and a test application like the Cafe Demo. You must have administrative access to create the namespace, services, and deployments for this Solution. This Solution was tested on Kubernetes version 1.23.
<br/>
## 1. Install NGINX Ingress Controller
<br/>

<br/>
The NGINX Ingress Controller in this Solution is the destination target for traffic (north-south) that is being sent to the cluster(s). The installation of the actual Ingress Controller is outside the scope of this guide, but the links to the docs are included for your reference. The `NIC installation using Manifests` must follow the documents exactly as written, as this Solution depends on the `nginx-ingress` namespace and service objects. **Only the very last step is changed.**
1. Follow these instructions to deploy the NGINX Ingress Controller into your cluster: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
**NOTE:** This Solution only works with `nginx-ingress` from NGINX. It will not work with the K8s Community version of Ingress, called ingress-nginx.
1. If you are unsure which Ingress Controller you are running, check out the blog on nginx.com for more information:
>Important! Do not complete the very last step in the NIC deployment with Manifests, `do not deploy the loadbalancer.yaml or nodeport.yaml Service file!` You will apply a different loadbalancer or nodeport Service manifest later, after the NLK Controller is up and running. `The nginx-ingress Service file must be changed` - it is not the default file.
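For orientation, a NodePort Service for NGINX Ingress Controller generally looks like the sketch below; the actual changes required come from the `nodeport-clusterX.yaml` file provided with this Solution (which you apply later, after the NLK Controller is running), and all field values here are placeholders:

```yaml
# Sketch only - values are placeholders; use the manifest provided by this Solution.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress        # the Solution watches this well-known Service name
  namespace: nginx-ingress   # ...in this well-known namespace
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
  - name: https              # the modified file uses specific port names for NLK
    port: 443
    targetPort: 443
    protocol: TCP
```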
<br/>
## 2. Install NGINX Cafe Demo Application
<br/>

<br/>
This is not a component of the actual Solution, but it is useful to have a well-known application running in the cluster, as a known-good target for test commands. The example provided here is used by the Solution to demonstrate proper traffic flows.
Note: If you choose a different Application to test with, `the NGINX configurations and health checks provided here may not work,` and will need to be modified to work correctly.
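As an example of why a different app breaks things: the active health checks on the LB Server are tied to the Cafe Demo's URI paths and expected responses. A sketch of the kind of NGINX Plus health check involved, where the URI, interval, and match conditions are assumptions for illustration:

```nginx
# Sketch only - URI, interval, and match conditions are illustrative assumptions.
match cafe_ok {
    status 200;          # the Cafe Demo answers 200 on its app paths
}

server {
    listen 443 ssl;
    location / {
        proxy_pass https://cluster1-https;
        health_check uri=/coffee interval=10 fails=3 passes=2 match=cafe_ok;
    }
}
```

A different test application would need its own `match` block and `health_check uri=` to keep upstreams marked healthy.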
<br/>
1. Use the provided Cafe Demo manifests in the docs/cafe-demo folder:
```bash
kubectl apply -f cafe-secret.yaml
```

<br/>
When you are finished, the NGINX Plus Dashboard on the LB Server(s) should look similar to the following image:

Important items for reference:
>Note: In this example, there is a 3-Node K8s cluster, with one Control Node, and 2 Worker Nodes. The NLK Controller only configures NGINX upstreams with `Worker Node` IP addresses, from Cluster1, which are:
Note: K8s Control Nodes are excluded from the list intentionally.
<br/>
1. Configure DNS, or the local hosts file, for cafe.example.com > the NGINX LB Server IP Address. In this example:
```bash
cat /etc/hosts
```
`The NLK Controller detects this change, and modifies the LB Server(s) upstreams to match.` The Dashboard will show you the new Port numbers, matching the new LoadBalancer or NodePort definitions. The NLK logs show these messages, confirming the changes:
<br/>
### MultiCluster Solution
If you plan to implement and test the MultiCluster Load Balancing Solution, repeat all the steps to configure the second K8s cluster, identical to the first Cluster1 steps.
- There is only one change - you MUST use the appropriate `loadbalancer-clusterX.yaml` or `nodeport-clusterX.yaml` manifest to match the appropriate cluster.
- Don't forget to check and set your ./kube Config Context when you change clusters!
- The NLK Controller in Cluster2 should be updating the `cluster2-https` upstreams.
<br/>
## 9. Prometheus and Grafana Servers
<br/>

Here are the instructions to run 2 Docker containers on a Monitor Server.

### Prometheus
1. Configure your Prometheus server to collect NGINX Plus statistics from the scraper page. Use the prometheus.yml file provided, and edit the IP addresses to match your NGINX LB Server(s).
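A minimal sketch of what that scrape config looks like; the job name, metrics port, and target addresses below are placeholders, not the file shipped with this Solution:

```yaml
# Sketch only - target addresses and port are placeholders.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: nginx-lb-servers
    metrics_path: /metrics
    static_configs:
      - targets:
          - 10.1.1.4:9113    # NGINX LB Server 1 - replace with your address
          - 10.1.1.5:9113    # NGINX LB Server 2 - replace with your address
```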
```bash
cat prometheus.yaml
```