
The GCE ingress controller was moved to github.com/kubernetes/ingress-gce.


NGINX Ingress Controller


Description

This repository contains the NGINX controller built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.

Learn more about using Ingress on k8s.io

What is an Ingress Controller?

Configuring a webserver or loadbalancer is harder than it should be. Most webserver configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part you can apply the same logic to them and achieve a desired result.

The Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress.

An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.


Conventions

Anytime we reference a TLS secret, we mean a PEM-encoded, X.509 certificate (e.g., RSA 2048). You can generate such a certificate with:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"

and create the secret via:

kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}

Requirements

The default backend is a service that handles all URL paths and hosts the NGINX controller doesn't understand, i.e., all requests that are not mapped to an Ingress. Basically, a default backend exposes two URLs:

  • /healthz that returns 200
  • / that returns 404

The 404-server directory contains the image of the default backend, and custom-error-pages is an example that shows how it can be customized.
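
For reference, a minimal default backend deployment might look like the following sketch; the image name and tag are assumptions and should be replaced with the image built from 404-server:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        # image built from 404-server; name and tag are assumptions
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend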

Annotation ingress.class

If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the ingress.class annotation, e.g. creating an Ingress with an annotation like

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"

will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"

will target the nginx controller, forcing the GCE controller to ignore it.

Note: Deploying multiple ingress controllers and not specifying the annotation will result in both controllers fighting to satisfy the Ingress.

Customizing NGINX

There are three ways to customize NGINX:

  1. ConfigMap: using a ConfigMap to set global configurations in NGINX (see the sketch after this list).
  2. Annotations: use this if you want a specific configuration for a particular Ingress rule.
  3. Custom template: when more specific settings are required, like open_file_cache, adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
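
As a minimal sketch of the ConfigMap approach, the ConfigMap name below is an assumption and must match the one the controller is started with; the keys shown are examples of global settings:

apiVersion: v1
kind: ConfigMap
metadata:
  # must match the ConfigMap the controller is configured to watch
  name: nginx-load-balancer-conf
data:
  # global settings applied to the generated nginx.conf
  proxy-connect-timeout: "15"
  proxy-body-size: "8m"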

Source IP address

By default NGINX uses the content of the header X-Forwarded-For as the source of truth for the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the IP/network address of the trusted external load balancer.

If the ingress controller is running in AWS, we need to use the VPC IPv4 CIDR.

Another option is to enable proxy protocol using use-proxy-protocol: "true".

In this mode NGINX does not use the content of the header to get the source IP address of the connection.
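
A hedged sketch of both options in the configuration ConfigMap; the ConfigMap name and the CIDR are placeholders for your own setup:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # trust X-Forwarded-For only from this network (placeholder CIDR)
  proxy-real-ip-cidr: "10.0.0.0/16"
  # alternatively, read the client address from the proxy protocol header
  use-proxy-protocol: "true"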

Proxy Protocol

If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the Proxy Protocol for forwarding traffic; this sends the connection details before forwarding the actual TCP connection itself.

Amongst others, ELBs in AWS and HAProxy support the Proxy Protocol.
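
On AWS, for example, the ELB in front of the controller can be asked to speak the Proxy Protocol via a Service annotation. This is a sketch only: the Service name and the app: nginx-ingress-controller selector are assumptions, and it must be combined with use-proxy-protocol: "true" on the NGINX side:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    # ask the AWS ELB to send the Proxy Protocol header on all ports
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress-controller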

Running multiple ingress controllers

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you need to specify the annotation kubernetes.io/ingress.class: "nginx" in all Ingresses that you would like this controller to claim. Not specifying the annotation will lead to multiple ingress controllers claiming the same Ingress. Specifying the wrong value will result in all ingress controllers ignoring the Ingress. Running multiple ingress controllers in the same cluster was not supported in Kubernetes versions < 1.3.

Websockets

Support for websockets is provided by NGINX out of the box. No special configuration required.

The only requirement to avoid connections being closed is to increase the values of proxy-read-timeout and proxy-send-timeout.

The default value of these settings is 60 seconds. A more adequate value to support websockets is one hour or more (3600 seconds).
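
A minimal sketch, assuming a ConfigMap named nginx-load-balancer-conf is the one the controller watches:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # keep long-lived websocket connections open for at least an hour
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"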

Optimizing TLS Time To First Byte (TTTFB)

NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.

This improves the Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k).
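
A sketch of tuning this through the configuration ConfigMap; the ConfigMap name, and the assumption that the setting is exposed as the ssl-buffer-size key, are placeholders to check against the configuration docs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # smaller TLS records reduce the time to first byte; 4k is the controller default
  ssl-buffer-size: "4k"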

Retries in non-idempotent methods

Since version 1.9.13, NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored by setting retry-non-idempotent to "true" in the configuration ConfigMap.
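
For example (the ConfigMap name is an assumption and must match the one the controller is started with):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # restore the pre-1.9.13 behavior of retrying POST, LOCK and PATCH requests
  retry-non-idempotent: "true"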

Disabling NGINX ingress controller

Setting the annotation kubernetes.io/ingress.class to any value other than "nginx" or the empty string will force the NGINX Ingress controller to ignore your Ingress.

Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.

Limitations

  • Ingress rules for TLS require the definition of the field host
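
A minimal sketch of a TLS Ingress that satisfies this requirement; the host name, secret and backend service are assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-tls
spec:
  tls:
  - hosts:
    - foo.bar.com
    # secret created as described in Conventions
    secretName: foo-secret
  rules:
  # the host field is required for TLS rules
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-svc
          servicePort: 80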

Why endpoints and not services

The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy, allowing NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.