The aim of the NGINX Gateway Fabric is to become a fundamental infrastructure component within a Kubernetes cluster, serving as both an ingress and egress point for traffic directed to the services (applications) running within or outside the cluster. By using NGINX as its data plane, it builds on the well-established reputation of NGINX, an open-source project widely recognized as a web server, proxy, load balancer, and content cache. NGINX is renowned for its stability, high performance, security, and rich feature set, positioning it as a critical infrastructure tool. Notably, once properly configured and operational, NGINX requires minimal attention, making it reliable and steady software.
The NGINX Gateway Fabric aims to embody the same qualities as NGINX and become familiar, trustworthy, and reliable software. The principles outlined below serve as a guide for engineering the NGINX Gateway Fabric with the intention of achieving this goal.
We are security first. We prioritize security from the outset, thoroughly evaluating each design and feature with a focus on security. We proactively identify and safeguard assets at the early stages of our processes, ensuring their protection throughout the development lifecycle. We adhere to best practices for secure design, including proper authentication, authorization, and encryption mechanisms.
As a critical infrastructure component, we must be highly available. We design and review features with redundancy and fault tolerance in mind. We regularly test the NGINX Gateway Fabric's availability by simulating failure scenarios and conducting load testing. We work to identify potential weaknesses and bottlenecks, and address them to ensure high availability under various conditions.
We must be highly performant and lightweight. We fine-tune the NGINX configuration to maximize performance out of the box, without requiring custom configuration from users. We strive to minimize our memory and CPU footprint, enabling efficient resource allocation and reducing unnecessary processing overhead. We use profiling tools on our code to identify bottlenecks and improve performance.
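To illustrate the profiling approach, the following is a minimal Go sketch, not code from the NGINX Gateway Fabric itself, that exposes the standard `net/http/pprof` endpoints so CPU and memory profiles can be collected from a running process. The loopback address and port are illustrative assumptions.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on a loopback-only port so they are
	// not reachable from outside the Pod.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application runs here ...
	select {}
}
```

With the endpoints exposed, a CPU profile can be captured with `go tool pprof http://localhost:6060/debug/pprof/profile` and inspected for hot spots.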
We design with resilience in mind. This includes gracefully handling failures, such as Pod restarts or network interruptions, as well as leveraging Kubernetes features like health checks, readiness probes, and container restart policies.
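To make the health check and readiness probe idea concrete, here is a minimal Go sketch of liveness and readiness handlers that Kubernetes probes could target. The endpoint paths, port, and readiness condition are illustrative assumptions, not the project's actual wiring.

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

// ready flips to true once the control plane has finished its initialization.
var ready atomic.Bool

func main() {
	mux := http.NewServeMux()

	// Liveness: the process is up and able to serve HTTP.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: only report ready once initialization has finished, so
	// Kubernetes does not route traffic to a Pod that cannot yet serve it.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, _ *http.Request) {
		if ready.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	// Elsewhere in the program, initialization completes and marks readiness.
	ready.Store(true)

	log.Fatal(http.ListenAndServe(":8081", mux))
}
```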
We provide comprehensive logging, metrics, and tracing capabilities to gain insights into our behavior and performance. We prioritize Kubernetes-native observability tools like Prometheus, Grafana, and distributed tracing systems to help users monitor the health of NGINX Gateway Fabric and to assist in diagnosing issues.
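As an example of the metrics side of this principle, the sketch below uses the `prometheus/client_golang` library to register a counter and expose it on a `/metrics` endpoint that Prometheus can scrape. The metric name and port are hypothetical, chosen only for illustration.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// reloads counts configuration reloads; the metric name is illustrative only.
var reloads = promauto.NewCounter(prometheus.CounterOpts{
	Name: "example_config_reloads_total",
	Help: "Total number of configuration reloads.",
})

func main() {
	// Record a reload somewhere in the control loop.
	reloads.Inc()

	// Expose all registered metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9113", nil))
}
```

Metrics exposed this way can be scraped by Prometheus and visualized in Grafana, complementing logs and traces when diagnosing issues.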
NGINX Gateway Fabric must be easy and intuitive to use. This means that it should be easy to install, easy to configure, and easy to monitor. Its defaults should be sane and should lead to out-of-the-box success. The documentation should be clear and provide meaningful examples that customers can use to inform their deployments and configurations.