Using a router and routes is the most common way to access the cluster. A router is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP/HTTPS(SNI)/TLS(SNI), which covers web applications.
An administrator can create a wildcard DNS entry, and then set up a router. Afterward, the users can self-service the edge router without having to contact the administrators. The router has controls to allow the administrator to specify whether the users can self-provision host names, or whether the host names must fit a pattern the administrator defines. The other solutions require the administrator to do the provisioning, or they require that the administrator delegates a lot of privilege.
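For example, assuming a service named frontend already exists in the user's project (the service and host names here are illustrative), the user could self-provision a route with a custom host name:

$ oc expose service frontend --hostname=www.example.com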
When a set of routes is created in various projects, the overall set of routes is available to the set of routers. Each router admits (or selects) routes from the set of routes. By default, all routers admit all routes.
Routers that have permission to view all of the labels in all namespaces can select routes to admit based on the labels. This is called router sharding. This is useful when balancing incoming traffic load among a set of routers and when isolating traffic to a specific router. For example, company A goes to one router and company B to another.
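As a sketch of one way to configure sharding (the router names and label values are illustrative), each HAProxy router deployment can be given a ROUTE_LABELS environment variable describing the routes it admits:

$ oc set env dc/router-company-a ROUTE_LABELS="customer=companyA"
$ oc set env dc/router-company-b ROUTE_LABELS="customer=companyB"

With this in place, routes labeled customer=companyA are admitted only by the first router, and routes labeled customer=companyB only by the second.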
Because a router runs on a specific node, when the router or the node fails, traffic ingress stops. The impact of this can be reduced by creating redundant routers on different nodes and using high availability to switch the router IP address when a node fails.
Administrators can assign a list of externalIPs, for which nodes in the cluster will also accept traffic for the service. These IPs are not managed by {product-title}, and administrators are responsible for ensuring that traffic arrives at a node with such an IP. A common example of this is manually configuring an external IP on a service, as in the example below.
The supplied list of IP addresses is used for load balancing incoming requests. The service port is opened on the externalIPs on all nodes running kube-proxy.
Note
ExternalIPs require elevated permissions to assign, and manual tracking of the IP:ports that are in use.
Traffic from hosts external to the cluster that is destined for the external IP address ends up on a node in the cluster. When it arrives, the service that set up the external IP redirects it to a service endpoint that it is managing. The service load balances among its endpoints.
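To inspect the endpoints a service is balancing across, assuming a service named postgresql:

$ oc get endpoints postgresql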
An externally visible IP for the service can be configured in several ways:
- Manually configure the ExternalIP with a known external IP address.
- Configure the ExternalIP to a VIP that is generated by IP failover (VRRP).
- In a cloud (AWS or GCE), use type=LoadBalancer.
- In a non-cloud environment, configure an ingressIP range (ingressIPNetworkCIDR), service.type=LoadBalancer, and service.port.ingressIP.
The administrator must ensure the external IPs are routed to the nodes and local firewall rules on all nodes allow access to the open port.
$ oc create -f - <<INGRESS
apiVersion: v1
kind: Service
metadata:
  name: postgresql-ingress
spec:
  externalIPs:
  - 10.9.54.100 (1)
  ports:
  - port: 5432
    protocol: TCP
  selector:
    name: postgresql
  type: LoadBalancer
INGRESS
(1) The IP address assigned as the external IP.
The external IP address is not managed by the underlying Kubernetes infrastructure and must be maintained and provided by a cluster administrator. While external IPs provide a solution for accessing services on the {product-title} cluster, there are several shortcomings:
- The cluster administrator must ensure the external IP is not in any range that the cluster is configured to use. The user needs to work with the administrator who controls the network external to the cluster to assign the external IP address and to ensure traffic to the IP can get to the nodes in the cluster. The cluster administrator must ensure firewall rules permit the packets to get into the nodes for the ports they want to expose.
- There are no protections in place to restrict the usage of a particular external address configured within the cluster, so a single address could be used by multiple services targeting the same port. When this situation occurs, the service that requested the port first is given use of the port.
Ingress IP Self-Service provides a solution.
Ingress IP Self-Service streamlines the allocation of external IPs for accessing services in the cluster. Cluster administrators can designate a range of addresses using CIDR notation, which allows an application user to make a request against the cluster for an external IP address. This range should not conflict with the range reserved for ExternalIP addresses. When a service is configured with the type LoadBalancer, an external IP address is automatically assigned from the designated range.
Note
Ingress IP Self-Service is only applicable in non-cloud based environments.
A common use case for Ingress IP Self-Service is the ability to provide database services, such as PostgreSQL, to clients outside of the {product-title} cluster.
The ability for cluster administrators to automatically allocate external IP addresses is enabled by default within {product-title} and configured to use the 172.46.0.0/16 range. An alternate range can be specified by configuring the ingressIPNetworkCIDR parameter in the /etc/origin/master-config.yaml file:
networkConfig:
  ingressIPNetworkCIDR: 10.9.54.0/25
Restart the {product-title} master service to apply the changes.
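The exact service unit name depends on the installation type; on a single-master origin installation, for example, the restart might look like:

$ systemctl restart origin-master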
To expose PostgreSQL as a service for external consumption, the application must first be deployed into the cluster. Create a new project or use an existing project, and instantiate one of the PostgreSQL templates (see the example after the caution below).
Caution
The postgresql-ephemeral template does not make use of persistent storage. Once the application is scaled down or destroyed, any existing data will be lost. To use persistent storage, specify the postgresql-persistent template instead.
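For example, a persistent PostgreSQL instance could be created in a new project as follows (the project name and credential values are illustrative; the parameter names assume the standard PostgreSQL template):

$ oc new-project my-database-project
$ oc new-app postgresql-persistent \
    -p POSTGRESQL_USER=user -p POSTGRESQL_PASSWORD=pass \
    -p POSTGRESQL_DATABASE=testdb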
After instantiating the template, a ClusterIP-based service and DeploymentConfig are created, and a new pod containing PostgreSQL is started.
To allow the cluster to automatically assign an IP address for a service, create a service definition similar to the following that will create a new Ingress service:
$ oc create -f - <<INGRESS
apiVersion: v1
kind: Service
metadata:
  name: postgresql-ingress
spec:
  ports:
  - name: postgresql
    port: 5432
  type: LoadBalancer (1)
  selector:
    name: postgresql
INGRESS
(1) The LoadBalancer type of service makes the request for an external IP address on behalf of the application user.
Alternatively, the oc expose command can be used to create the service:
$ oc expose dc postgresql --type=LoadBalancer --name=postgresql-ingress
Once the service is created, the external IP address is automatically allocated by the cluster and can be confirmed by running:
$ oc get svc postgresql-ingress
NAME                 CLUSTER-IP      EXTERNAL-IP               PORT(S)    AGE
postgresql-ingress   172.30.74.106   10.9.54.100,10.9.54.100   5432/TCP   30s
Specifying the type LoadBalancer also configures the service with a nodePort value. nodePort exposes the service port on all nodes in the cluster. Any packet that arrives on any node in the cluster targeting the nodePort ends up in the service, and is then load balanced to the service's endpoints.
To discover the node port automatically assigned, run:
$ oc export svc postgresql-ingress
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: postgresql-persistent
    template: postgresql-persistent-template
  name: postgresql-ingress
spec:
  ports:
  - nodePort: 32439 (1)
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    name: postgresql
  sessionAffinity: None
  type: LoadBalancer
(1) Automatically assigned port.
A PostgreSQL client can now be configured to connect directly to any node using the value of the assigned nodePort. A nodePort works with any IP address that allows traffic to terminate at any node in the cluster.
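For example, using the psql client with the nodePort value from the output above (the node IP address is illustrative):

$ psql -h 192.168.133.3 -p 32439 -U user testdb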
Instead of connecting directly to individual nodes, you can deploy the IP failover router, one of {product-title}'s high availability strategies, to provide access to services configured with external IP addresses. This gives cluster administrators the flexibility of defining the edge router points within a cluster, and makes the service highly available.
Note
Nodes that have IP failover routers deployed to them must be in the same Layer 2 switching domain, so that ARP broadcasts can tell the switches which port traffic for the destination VIP should flow to.
Caution
IP failover is limited to a maximum of 255 VIPs. This is a limitation of the Virtual Router Redundancy Protocol (VRRP). The VIPs do not have to be sequential.
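A minimal sketch of deploying the IP failover router with oc adm ipfailover (the deployment name, node selector, VIP range, and watched port are illustrative):

$ oc adm ipfailover ipf-ha-postgresql \
    --replicas=2 --selector="ha-router=primary" \
    --virtual-ips="10.9.54.100-110" --watch-port=5432 --create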
Use NodePorts to expose the service nodePort on all nodes in the cluster. NodePorts allow load balancing across an entire cluster and are used by the LoadBalancer service type when running on a cloud provider. You can specify a specific nodePort (if not already in use) or allow the system to assign one for you. When the nodePort is assigned, all nodes in the cluster allow incoming traffic to that port, and then direct the traffic from that port to a local or remote endpoint under the service. By default, {product-title} allocates ports between 30000 and 32767 as NodePorts. For very high density environments, you may need to increase the range of ports. {product-title} automatically opens the nodePort range during installation for firewalls and cloud security groups for new clusters.
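If a larger range is required, one approach is to set the servicesNodePortRange parameter in the /etc/origin/master-config.yaml file (the range shown is illustrative) and restart the master service:

kubernetesMasterConfig:
  servicesNodePortRange: "30000-39999"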
The administrator must ensure the external IPs are routed to the nodes and local firewall rules on all nodes allow access to the open port.
NodePorts and externalIP are independent and both can be used concurrently.
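As a sketch, a single service definition can carry both a nodePort and an externalIP (the address and port values are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  externalIPs:
  - 192.168.133.200
  ports:
  - port: 3306
    nodePort: 30036
  selector:
    db: mysql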
Note
When enabled, a service's NodePort listens on all nodes in the cluster. The service usually has multiple endpoints if more than one replica is running (managed by a deployment configuration or replication controller). If traffic can get to any node in the cluster, it can connect to the nodePort, and the traffic goes to one of the endpoints.
- Create a new database pod with MySQL:

$ oc new-app mysql --env=MYSQL_USER=user \
    --env=MYSQL_PASSWORD=pass --env=MYSQL_DATABASE=testdb -l db=mysql
- Edit the service object definition to use the NodePort type:

kind: "Service"
apiVersion: "v1"
metadata:
  name: "mysql"
  labels:
    name: "mysql"
spec:
  selector:
    db: "mysql"
    deploymentconfig: "mysql"
  type: "NodePort"
  sessionAffinity: "None"
status:
...
- Get the automatically assigned nodePort number:

$ oc get svc mysql -o yaml | grep nodePort
    nodePort: 30234
- Verify service access on all nodes:

$ mysql -u user -ppass testdb -h 192.168.133.3 --port=30234
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.45 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [testdb]> Bye

$ mysql -u user -ppass testdb -h 192.168.133.4 --port=30234
Welcome to the MariaDB monitor.  Commands end with ; or \g.
...
IP failover improves the chances that an IP address will remain active by assigning a virtual IP address to a host in a configured pool of hosts. If the host goes down, the virtual IP address is automatically transferred to another host in the pool.