The NGINX K8s Loadbalancer, or NKL, is a Kubernetes controller that provides TCP load balancing external to a Kubernetes cluster running on-premise. To use it you will need:
- A Kubernetes cluster running on-premise.
- One or more NGINX Plus hosts running outside your Kubernetes cluster (the NGINX Plus hosts must be able to route traffic to the cluster).
There is a more detailed Installation Reference available in the `docs/` directory.
NKL provides a simple, easy-to-manage way to automate load balancing for your Kubernetes applications by leveraging NGINX Plus hosts running outside your cluster.
NKL installs easily, has a small footprint, and is easy to configure and manage.
NKL does not require learning a custom object model; you only need to understand NGINX configuration to get the most out of this solution.
There is thorough documentation with the specifics available in the `docs/` directory.
tl;dr:
NKL is a Kubernetes controller that monitors Services and Nodes in your cluster, and then sends API calls to an external NGINX Plus server to manage NGINX Plus Upstream servers automatically.
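To make that concrete, here is a hedged sketch of the kind of NGINX Plus API call involved; the host name, API port, API version, upstream name, and server address below are all illustrative placeholders, not values NKL prescribes.

```bash
# List the servers currently registered in a stream (TCP) upstream on an NGINX Plus host.
# Host, port, API version (8), and upstream name ("demo-upstream") are assumptions for illustration.
curl -s http://nginx-plus.example.com:9000/api/8/stream/upstreams/demo-upstream/servers

# Register a worker node's NodePort as an upstream server -- the kind of change NKL automates
# whenever Services or Nodes change in the cluster.
curl -s -X POST \
     -H "Content-Type: application/json" \
     -d '{"server": "10.1.1.101:30080"}' \
     http://nginx-plus.example.com:9000/api/8/stream/upstreams/demo-upstream/servers
```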
That's all well and good, but what does it mean? Kubernetes clusters require some tooling to handle routing traffic from the outside world (e.g., the Internet, a corporate network) to the cluster. This is typically done with a load balancer, which is responsible for routing traffic to the appropriate worker node; the node then forwards the traffic to the appropriate Service / Pod.
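For on-premise clusters, the external load balancer in that picture is an NGINX Plus host. As a rough, illustrative sketch only (upstream name, ports, and file paths are assumptions; the Installation Reference has the real configuration), the NGINX Plus side might carry a `stream` upstream that NKL keeps populated with worker node addresses:

```bash
# Illustrative only: a stream upstream on the external NGINX Plus host that NKL would manage.
# The stream{} block lives at the top level of the NGINX configuration (alongside http{}),
# so this file is assumed to be included from the main context of nginx.conf.
cat <<'EOF' | sudo tee /etc/nginx/stream-conf.d/nkl-demo.conf
stream {
    upstream demo-upstream {
        zone demo-upstream 64k;                          # shared memory zone, required for API-driven changes
        state /var/lib/nginx/state/demo-upstream.state;  # persist API-driven changes across restarts
    }

    server {
        listen 443;
        proxy_pass demo-upstream;
    }
}
EOF
```

You would also need the NGINX Plus API enabled on that host so NKL can reach it.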
If you are using a hosted Kubernetes solution -- DigitalOcean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. That service creates a load balancer for you, which you can manage through the cloud provider's API or web console.
If you are running Kubernetes on-premise and need to manage your own load balancer, NKL can help.
NKL itself does not perform load balancing. Rather, NKL lets you update your load balancers by managing Service resources within your cluster, with tooling you are most likely already using (see the sketch below).
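For example, a plain NodePort Service is the kind of resource NKL watches. The manifest below is a hypothetical sketch: the names, namespace, ports, and especially the port-name convention used to tie a Service port to an NGINX Plus upstream are assumptions here; the Installation Reference documents the exact rules.

```bash
# Hypothetical Service that NKL could watch. Names, ports, and the port-naming convention
# are assumptions; see the Installation Reference for the exact mapping rules.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-service          # hypothetical
  namespace: default
spec:
  type: NodePort
  selector:
    app: demo                 # hypothetical workload
  ports:
  - name: nkl-demo-upstream   # assumed convention: the port name identifies the NGINX Plus upstream
    port: 443
    targetPort: 443
    protocol: TCP
EOF
```

With a Service like this in place, NKL's job is to keep the corresponding upstream on each configured NGINX Plus host pointed at the worker nodes for that Service.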
There are a few bits of administrivia to get out of the way before you can start leveraging NKL for your load balancing needs.
As noted above, NKL is intended for when you have one or more Kubernetes clusters running on-premise. In addition to this, you need to have at least one NGINX Plus host running outside your cluster (Please refer to the Roadmap for information about other load balancer servers).
As with everything Kubernetes, NKL requires RBAC permissions to function properly. The necessary resources are defined in the various YAML files in `deployments/rbac/`.
For convenience, two scripts are included, `apply.sh` and `unapply.sh`. These scripts will apply or remove the RBAC resources, respectively.
The permissions required by NKL are modest: it only needs to read resources via shared informers, specifically Services, Nodes, and ConfigMaps. The Services and ConfigMaps access is restricted to a specific namespace (default: "nkl"); the Nodes access is cluster-wide.
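As a quick way to see those grants in action after applying the RBAC resources, you can impersonate NKL's ServiceAccount with `kubectl auth can-i`; the ServiceAccount name used below is an assumption, so substitute the one defined in `deployments/rbac/`.

```bash
# Spot-check the informer permissions after running ./deployments/rbac/apply.sh.
# "system:serviceaccount:nkl:nkl" assumes a ServiceAccount named "nkl" in the "nkl" namespace;
# use the name actually defined in deployments/rbac/.
kubectl auth can-i watch services   -n nkl --as=system:serviceaccount:nkl:nkl
kubectl auth can-i watch configmaps -n nkl --as=system:serviceaccount:nkl:nkl
kubectl auth can-i watch nodes      --as=system:serviceaccount:nkl:nkl
```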
NKL is configured via a ConfigMap; the default settings are found in `deployments/configmap.yaml`. Presently there is a single configuration value exposed in the ConfigMap, `nginx-hosts`.
This contains a comma-separated list of NGINX Plus hosts that NKL will maintain.
You will need to update this ConfigMap to reflect the NGINX Plus hosts you wish to manage.
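For illustration, a populated ConfigMap might look like the sketch below. The ConfigMap name and the exact host/URL format are assumptions (prefer editing `deployments/configmap.yaml` itself rather than copying this); `nginx-hosts` is the key NKL reads.

```bash
# Illustrative only -- prefer editing deployments/configmap.yaml and applying that file.
# The ConfigMap name and the host/URL format are assumptions; nginx-hosts is the key NKL reads.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: nkl-config            # assumed name; match deployments/configmap.yaml
  namespace: nkl
data:
  nginx-hosts: "http://10.1.1.4:9000/api,http://10.1.1.5:9000/api"
EOF
```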
If you were to deploy the ConfigMap and start NKL without updating the `nginx-hosts` value, don't fear; the ConfigMap resource is monitored for changes, and NKL will update its NGINX Plus hosts accordingly when the resource changes, no restart required.
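For example, a one-line patch like the following (the ConfigMap name and host URL are again illustrative assumptions) is enough for NKL to pick up a new host list on the fly:

```bash
# Update nginx-hosts in place; NKL watches the ConfigMap and applies the change without a restart.
# The ConfigMap name ("nkl-config") and the URL are illustrative assumptions.
kubectl -n nkl patch configmap nkl-config --type merge \
  -p '{"data":{"nginx-hosts":"http://10.1.1.4:9000/api"}}'
```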
There is an extensive Installation Reference available in the `docs/` directory.
Please refer to that for detailed instructions on how to deploy NKL and run a demo application.
Versioning is a work in progress. The CI/CD pipeline is being developed and will be used to build and publish NKL images to the Container Registry. Once in place, semantic versioning will be used for published images.
To get NKL up and running in ten steps or fewer, follow these instructions (NOTE: all the aforementioned prerequisites must be met for this to work).
There is a much more detailed Installation Reference available in the `docs/` directory.
- Clone this repo (optional, you can simply copy the `deployments/` directory)

  `git clone git@github.com:nginxinc/nginx-k8s-loadbalancer.git`

- Apply the Namespace

  `kubectl apply -f deployments/namespace.yaml`

- Apply the RBAC resources

  `./deployments/rbac/apply.sh`

- Update / Apply the ConfigMap (for best results, update the `nginx-hosts` value first)

  `kubectl apply -f deployments/configmap.yaml`

- Apply the Deployment

  `kubectl apply -f deployments/deployment.yaml`
- Check the logs

  `kubectl -n nkl get pods | grep deployment | cut -f1 -d" " | xargs kubectl logs -n nkl --follow`
At this point NKL should be up and running. Now would be a great time to go over to the Installation Reference and follow the instructions to deploy a demo application.
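Once a demo application and its Service are in place, one hedged way to verify the result end to end is to check that the NKL pod is running and then ask one of your NGINX Plus hosts what its upstream now contains; the host, port, API version, and upstream name below are placeholders.

```bash
# Confirm the controller pod is running.
kubectl -n nkl get pods

# Ask an NGINX Plus host which servers NKL has registered in the upstream.
# Host, port, API version, and upstream name are placeholders for illustration.
curl -s http://nginx-plus.example.com:9000/api/8/stream/upstreams/demo-upstream/servers
```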
Presently NKL includes a fair amount of logging. This is intended to be used for debugging purposes. There are plans to add more robust monitoring and alerting in the future.
As a rule, we support the use of OpenTelemetry for observability, and we will be adding support in the near future.
Presently we are not accepting pull requests. However, we welcome your feedback and suggestions. Please open an issue to let us know what you think!
One way to contribute is to help us test NKL. We are looking for people to test NKL in a variety of environments.
If you are curious about the implementation, you should certainly browse the code, but first you might wish to refer to the design document. Some of the design decisions are explained there.
While NKL was initially written specifically for NGINX Plus, we recognize there are other load-balancers that can be supported.
To this end, NKL has been architected to be extensible to support other "Border Servers" -- the term NKL uses for load balancers, reverse proxies, and similar servers that run outside the cluster and handle routing outside traffic to your cluster.
While we have identified a few potential targets, we are open to suggestions. Please open an issue to share your thoughts on potential implementations.
We look forward to building a community around NKL and value all feedback and suggestions. Varying perspectives and embracing diverse ideas will be key to NKL becoming a solution that is useful to the community. We will consider it a success when we are able to accept pull requests from the community.
- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc.
- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc.
© F5, Inc. 2023
(but don't let that scare you, we're really nice people...)