diff --git a/Documentation/concepts/security/proxy/envoy.rst b/Documentation/concepts/security/proxy/envoy.rst index aac3797e72349..9a79420372a8b 100644 --- a/Documentation/concepts/security/proxy/envoy.rst +++ b/Documentation/concepts/security/proxy/envoy.rst @@ -529,7 +529,7 @@ and adding the ''--debug-verbose=flow'' flag. $ sudo service cilium stop - $ sudo /usr/bin/cilium-agent --debug --ipv4-range 10.11.0.0/16 --kvstore-opt consul.address=192.168.33.11:8500 --kvstore consul -t vxlan --fixed-identity-mapping=128=kv-store --fixed-identity-mapping=129=kube-dns --debug-verbose=flow + $ sudo /usr/bin/cilium-agent --debug --ipv4-range 10.11.0.0/16 --kvstore-opt consul.address=192.168.60.11:8500 --kvstore consul -t vxlan --fixed-identity-mapping=128=kv-store --fixed-identity-mapping=129=kube-dns --debug-verbose=flow Step 13: Add Runtime Tests diff --git a/Documentation/gettingstarted/egress-gateway.rst b/Documentation/gettingstarted/egress-gateway.rst index ca1e88eea0e35..e4ad18753a4e9 100644 --- a/Documentation/gettingstarted/egress-gateway.rst +++ b/Documentation/gettingstarted/egress-gateway.rst @@ -88,7 +88,7 @@ cluster, and use it as the destination of the egress traffic. Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled) Active: active (running) since Sun 2021-04-04 21:58:57 UTC; 1min 3s ago [...] - $ curl http://192.168.33.13:80 # Assume 192.168.33.13 is the external IP of the node + $ curl http://192.168.60.13:80 # Assume 192.168.60.13 is the external IP of the node [...] Welcome to nginx! [...] @@ -106,7 +106,7 @@ the configurations specified in the CiliumEgressNATPolicy. NAME READY STATUS RESTARTS AGE pod/mediabot 1/1 Running 0 14s - $ kubectl exec mediabot -- curl http://192.168.33.13:80 + $ kubectl exec mediabot -- curl http://192.168.60.13:80 [...] @@ -118,9 +118,9 @@ will contain something like the following: $ tail /var/log/nginx/access.log [...] - 192.168.33.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" + 192.168.60.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" -In the previous example, the client pod is running on the node ``192.168.33.11``, so the result makes sense. +In the previous example, the client pod is running on the node ``192.168.60.11``, so the result makes sense. This is the default Kubernetes behavior without egress NAT. Configure Egress IPs @@ -128,7 +128,7 @@ Deploy the following deployment to assign additional egress IP to the gateway node. The node that runs the pod will have additional IP addresses configured on the external interface (``enp0s8`` as in the example), -and become the egress gateway. In the following example, ``192.168.33.100`` and ``192.168.33.101`` becomes +and become the egress gateway. In the following example, ``192.168.60.100`` and ``192.168.60.101`` become the egress IP which can be consumed by Egress NAT Policy. Please make sure these IP addresses are routable on the interface they are assigned to, otherwise the return traffic won't be able to route back. @@ -139,8 +139,8 @@ Create Egress NAT Policy Apply the following Egress NAT Policy, which basically means: when the pod is running in the namespace ``default`` and the pod itself has label ``org: empire`` and ``class: mediabot``, if it's trying to talk to -IP CIDR ``192.168.33.13/32``, then use egress IP ``192.168.33.100``.
In this example, it tells Cilium to -forward the packet from client pod to the gateway node with egress IP ``192.168.33.100``, and masquerade +IP CIDR ``192.168.60.13/32``, then use egress IP ``192.168.60.100``. In this example, it tells Cilium to +forward the packet from the client pod to the gateway node with egress IP ``192.168.60.100``, and masquerade with that IP address. .. literalinclude:: ../../examples/kubernetes-egress-gateway/egress-nat-policy.yaml @@ -149,17 +149,17 @@ Let's switch back to the client pod and verify it works. .. code-block:: shell-session - $ kubectl exec mediabot -- curl http://192.168.33.13:80 + $ kubectl exec mediabot -- curl http://192.168.60.13:80 [...] Verify access log from nginx node or service of your chose that the request is coming from egress IP now instead of one of the nodes in Kubernetes cluster. In the nginx's case, you will see logs like the -following shows that the request is coming from ``192.168.33.100`` now, instead of ``192.168.33.11``. +following, showing that the request is now coming from ``192.168.60.100`` instead of ``192.168.60.11``. .. code-block:: shell-session $ tail /var/log/nginx/access.log [...] - 192.168.33.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" + 192.168.60.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" diff --git a/Documentation/gettingstarted/encryption-wireguard.rst b/Documentation/gettingstarted/encryption-wireguard.rst index 44382aa793778..0c7778c70efe9 100644 --- a/Documentation/gettingstarted/encryption-wireguard.rst +++ b/Documentation/gettingstarted/encryption-wireguard.rst @@ -154,7 +154,7 @@ commands can be helpful: "10.154.1.107/32", "10.154.1.195/32" ], - "endpoint": "192.168.34.12:51871", + "endpoint": "192.168.61.12:51871", "last-handshake-time": "2021-05-05T12:31:24.418Z", "public-key": "RcYfs/GEkcnnv6moK5A1pKnd+YYUue21jO9I08Bv0zo=" } @@ -179,7 +179,7 @@ commands can be helpful: "10.154.2.103/32", "10.154.2.142/32" ], - "endpoint": "192.168.34.11:51871", + "endpoint": "192.168.61.11:51871", "last-handshake-time": "2021-05-05T12:31:24.631Z", "public-key": "DrAc2EloK45yqAcjhxerQKwoYUbLDjyrWgt9UXImbEY=" } @@ -228,4 +228,4 @@ The current status of these limitations is tracked in :gh-issue:`15462`. Legal ===== -"WireGuard" is a registered trademark of Jason A. Donenfeld. \ No newline at end of file +"WireGuard" is a registered trademark of Jason A. Donenfeld. diff --git a/Documentation/gettingstarted/host-firewall.rst b/Documentation/gettingstarted/host-firewall.rst index 67b8bc1b6c251..124b419f35285 100644 --- a/Documentation/gettingstarted/host-firewall.rst +++ b/Documentation/gettingstarted/host-firewall.rst @@ -122,8 +122,8 @@ breakages. ..
code-block:: shell-session $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium monitor -t policy-verdict --related-to $HOST_EP_ID - Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.33.12 -> 192.168.33.11 EchoRequest - Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.33.12:37278 -> 192.168.33.11:2379 tcp SYN + Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.60.12 -> 192.168.60.11 EchoRequest + Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.60.12:37278 -> 192.168.60.11:2379 tcp SYN Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action audit, match none, 10.0.2.2:47500 -> 10.0.2.15:6443 tcp SYN For details on how to derive the network policies from the output of ``cilium diff --git a/Documentation/gettingstarted/ipam-cluster-pool.rst b/Documentation/gettingstarted/ipam-cluster-pool.rst index 1584f1e0606dc..459457070dbf9 100644 --- a/Documentation/gettingstarted/ipam-cluster-pool.rst +++ b/Documentation/gettingstarted/ipam-cluster-pool.rst @@ -45,7 +45,7 @@ Validate installation .. code-block:: shell-session $ cilium status --all-addresses - KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.33.11:2379 - 3.3.12 (Leader) + KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.60.11:2379 - 3.3.12 (Leader) [...] IPAM: IPv4: 2/256 allocated, Allocated addresses: diff --git a/Documentation/gettingstarted/ipam-crd.rst b/Documentation/gettingstarted/ipam-crd.rst index be06ebfdc5b29..f720098bdcc1f 100644 --- a/Documentation/gettingstarted/ipam-crd.rst +++ b/Documentation/gettingstarted/ipam-crd.rst @@ -61,7 +61,7 @@ Create a CiliumNode CR .. code-block:: shell-session $ cilium status --all-addresses - KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.33.11:2379 - 3.3.12 (Leader) + KVStore: Ok etcd: 1/1 connected, has-quorum=true: https://192.168.60.11:2379 - 3.3.12 (Leader) [...] IPAM: IPv4: 2/4 allocated, Allocated addresses: diff --git a/Documentation/gettingstarted/local-redirect-policy.rst b/Documentation/gettingstarted/local-redirect-policy.rst index bdc17f8b560af..7e13a5fb0f2d3 100644 --- a/Documentation/gettingstarted/local-redirect-policy.rst +++ b/Documentation/gettingstarted/local-redirect-policy.rst @@ -449,7 +449,7 @@ security credentials for pods. You can verify this by running a curl command to the AWS metadata server from one of the application pods, and tcpdump command on the same EKS cluster node as the pod. Following is an example output, where ``192.169.98.118`` is the ip - address of an application pod, and ``192.168.33.99`` is the ip address of the + address of an application pod, and ``192.168.60.99`` is the ip address of the kiam agent running on the same node as the application pod. .. code-block:: shell-session @@ -467,11 +467,11 @@ security credentials for pods. .. 
code-block:: shell-session - $ sudo tcpdump -i any -enn "(port 8181) and (host 192.168.33.99 and 192.168.98.118)" + $ sudo tcpdump -i any -enn "(port 8181) and (host 192.168.60.99 and 192.168.98.118)" tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes - 05:16:05.229597 In de:e4:e9:94:b5:9f ethertype IPv4 (0x0800), length 76: 192.168.98.118.47934 > 192.168.33.99.8181: Flags [S], seq 669026791, win 62727, options [mss 8961,sackOK,TS val 2539579886 ecr 0,nop,wscale 7], length 0 - 05:16:05.229657 Out 56:8f:62:18:6f:85 ethertype IPv4 (0x0800), length 76: 192.168.33.99.8181 > 192.168.98.118.47934: Flags [S.], seq 2355192249, ack 669026792, win 62643, options [mss 8961,sackOK,TS val 4263010641 ecr 2539579886,nop,wscale 7], length 0 + 05:16:05.229597 In de:e4:e9:94:b5:9f ethertype IPv4 (0x0800), length 76: 192.168.98.118.47934 > 192.168.60.99.8181: Flags [S], seq 669026791, win 62727, options [mss 8961,sackOK,TS val 2539579886 ecr 0,nop,wscale 7], length 0 + 05:16:05.229657 Out 56:8f:62:18:6f:85 ethertype IPv4 (0x0800), length 76: 192.168.60.99.8181 > 192.168.98.118.47934: Flags [S.], seq 2355192249, ack 669026792, win 62643, options [mss 8961,sackOK,TS val 4263010641 ecr 2539579886,nop,wscale 7], length 0 Miscellaneous ============= diff --git a/Documentation/operations/troubleshooting.rst b/Documentation/operations/troubleshooting.rst index d98b5673dac46..35b0a7cd07a0c 100644 --- a/Documentation/operations/troubleshooting.rst +++ b/Documentation/operations/troubleshooting.rst @@ -126,7 +126,7 @@ e.g.: .. code-block:: shell-session $ cilium status - KVStore: Ok etcd: 1/1 connected: https://192.168.33.11:2379 - 3.2.7 (Leader) + KVStore: Ok etcd: 1/1 connected: https://192.168.60.11:2379 - 3.2.7 (Leader) ContainerRuntime: Ok Kubernetes: Ok OK Kubernetes APIs: ["core/v1::Endpoint", "extensions/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service"] @@ -586,7 +586,7 @@ Understanding etcd status The etcd status is reported when running ``cilium status``. The following line represents the status of etcd:: - KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.33.11:2379 - 3.4.9 (Leader) + KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.60.11:2379 - 3.4.9 (Leader) OK: The overall status. Either ``OK`` or ``Failure``. @@ -606,7 +606,7 @@ has-quorum: consecutive-errors: Number of consecutive quorum errors. Only printed if errors are present. -https://192.168.33.11:2379 - 3.4.9 (Leader): +https://192.168.60.11:2379 - 3.4.9 (Leader): List of all etcd endpoints stating the etcd version and whether the particular endpoint is currently the elected leader. If an etcd endpoint cannot be reached, the error is shown. @@ -644,7 +644,7 @@ cluster size. 
The larger the cluster, the longer the `interval Example of a status with a quorum failure which has not yet reached the threshold:: - KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.33.11:2379 - 3.4.9 (Leader) + KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.60.11:2379 - 3.4.9 (Leader) Example of a status with the number of quorum failures exceeding the threshold:: @@ -842,7 +842,7 @@ State Propagation }, endpoints: (map[k8s.ServiceID]*k8s.Endpoints) (len=2) { (k8s.ServiceID) kube-system/kube-dns: (*k8s.Endpoints)(0xc0000103c0)(10.16.127.105:53/TCP,10.16.127.105:53/UDP,10.16.127.105:9153/TCP), - (k8s.ServiceID) default/kubernetes: (*k8s.Endpoints)(0xc0000103f8)(192.168.33.11:6443/TCP) + (k8s.ServiceID) default/kubernetes: (*k8s.Endpoints)(0xc0000103f8)(192.168.60.11:6443/TCP) }, externalEndpoints: (map[k8s.ServiceID]k8s.externalEndpoints) { } diff --git a/Documentation/operations/upgrade.rst b/Documentation/operations/upgrade.rst index 5ee2e84f39191..410e57f7c37cc 100644 --- a/Documentation/operations/upgrade.rst +++ b/Documentation/operations/upgrade.rst @@ -1379,7 +1379,7 @@ Export the current ConfigMap etcd-config: |- --- endpoints: - - https://192.168.33.11:2379 + - https://192.168.60.11:2379 # # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line # and create a kubernetes secret by following the tutorial in @@ -1440,7 +1440,7 @@ new options while keeping the configuration that we wanted: etcd-config: |- --- endpoints: - - https://192.168.33.11:2379 + - https://192.168.60.11:2379 # # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line # and create a kubernetes secret by following the tutorial in @@ -1609,13 +1609,13 @@ Example migration $ kubectl exec -n kube-system cilium-preflight-1234 -- cilium preflight migrate-identity INFO[0000] Setting up kvstore client - INFO[0000] Connecting to etcd server... config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.33.11:2379]" subsys=kvstore + INFO[0000] Connecting to etcd server... config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" subsys=kvstore INFO[0000] Setting up kubernetes client - INFO[0000] Establishing connection to apiserver host="https://192.168.33.11:6443" subsys=k8s + INFO[0000] Establishing connection to apiserver host="https://192.168.60.11:6443" subsys=k8s INFO[0000] Connected to apiserver subsys=k8s INFO[0000] Got lease ID 29c66c67db8870c8 subsys=kvstore INFO[0000] Got lock lease ID 29c66c67db8870ca subsys=kvstore - INFO[0000] Successfully verified version of etcd endpoint config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.33.11:2379]" etcdEndpoint="https://192.168.33.11:2379" subsys=kvstore version=3.3.13 + INFO[0000] Successfully verified version of etcd endpoint config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" etcdEndpoint="https://192.168.60.11:2379" subsys=kvstore version=3.3.13 INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date name=CiliumNetworkPolicy/v2 subsys=k8s INFO[0000] Updating CRD (CustomResourceDefinition)... 
name=v2.CiliumEndpoint subsys=k8s INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumEndpoint subsys=k8s diff --git a/Vagrantfile b/Vagrantfile index af5d3ee3adbf0..20bbdb24485c6 100644 --- a/Vagrantfile +++ b/Vagrantfile @@ -283,9 +283,9 @@ Vagrant.configure(2) do |config| config.vm.synced_folder cilium_dir, cilium_path, type: "nfs", nfs_udp: false # Don't forget to enable this ports on your host before starting the VM # in order to have nfs working - # iptables -I INPUT -p tcp -s 192.168.34.0/24 --dport 111 -j ACCEPT - # iptables -I INPUT -p tcp -s 192.168.34.0/24 --dport 2049 -j ACCEPT - # iptables -I INPUT -p tcp -s 192.168.34.0/24 --dport 20048 -j ACCEPT + # iptables -I INPUT -p tcp -s 192.168.61.0/24 --dport 111 -j ACCEPT + # iptables -I INPUT -p tcp -s 192.168.61.0/24 --dport 2049 -j ACCEPT + # iptables -I INPUT -p tcp -s 192.168.61.0/24 --dport 20048 -j ACCEPT # if using nftables, in Fedora (with firewalld), use: # nft -f ./contrib/vagrant/nftables.rules diff --git a/clustermesh-apiserver/tls.rst b/clustermesh-apiserver/tls.rst index 8fb278cda43d5..ff58de39f5c5a 100644 --- a/clustermesh-apiserver/tls.rst +++ b/clustermesh-apiserver/tls.rst @@ -40,7 +40,7 @@ using an externally accessible service IP from your cluster: :: - 192.168.36.11 clustermesh-apiserver.cilium.io + 192.168.56.11 clustermesh-apiserver.cilium.io Manual instructions using openssl ================================= @@ -217,7 +217,7 @@ externally accessible service IP from your cluster: :: - 192.168.36.11 clustermesh-apiserver.ciliumn.io + 192.168.56.11 clustermesh-apiserver.cilium.io Starting Cilium in a Container in a VM ====================================== @@ -228,10 +228,10 @@ $ docker run -d --name cilium --restart always --privileged --cap-add ALL --log- --volume /home/vagrant/cilium/etcd:/var/lib/cilium/etcd -/usr/bin/cilium-agent --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd/config.yaml --ipv4-node 192.168.36.10 --join-cluster +/usr/bin/cilium-agent --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd/config.yaml --ipv4-node 192.168.56.10 --join-cluster sudo mount bpffs -t bpf /sys/fs/bpf ---add-host clustermesh-apiserver.cilium.io:192.168.36.11 +--add-host clustermesh-apiserver.cilium.io:192.168.56.11 --network host --privileged --cap-add ALL diff --git a/contrib/vagrant/nftables.rules b/contrib/vagrant/nftables.rules index a11c64ea13b5d..e959b0b9fb8eb 100644 --- a/contrib/vagrant/nftables.rules +++ b/contrib/vagrant/nftables.rules @@ -1,3 +1,3 @@ -insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.34.0/24 tcp dport 20048 ct state { 0x8, 0x40 } accept -insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.34.0/24 tcp dport 2049 ct state { 0x8, 0x40 } accept -insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.34.0/24 tcp dport 111 ct state { 0x8, 0x40 } accept +insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.61.0/24 tcp dport 20048 ct state { 0x8, 0x40 } accept +insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.61.0/24 tcp dport 2049 ct state { 0x8, 0x40 } accept +insert rule inet firewalld filter_IN_public_allow ip saddr 192.168.61.0/24 tcp dport 111 ct state { 0x8, 0x40 } accept diff --git a/contrib/vagrant/scripts/helpers.bash b/contrib/vagrant/scripts/helpers.bash index 835885ea8e042..56b3be0b9e7ee 100644 --- a/contrib/vagrant/scripts/helpers.bash +++ b/contrib/vagrant/scripts/helpers.bash @@ -31,7 +31,7 @@ if [[ -n "${IPV6_EXT}" ]]; then # controllers_ips[1]
contains the IP without brackets controllers_ips=( "[${master_ip}]" "${master_ip}" ) else - master_ip=${MASTER_IPV4:-"192.168.33.11"} + master_ip=${MASTER_IPV4:-"192.168.60.11"} controllers_ips=( "${master_ip}" "${master_ip}" ) fi diff --git a/contrib/vagrant/start.sh b/contrib/vagrant/start.sh index 2c2258076553b..377f7930ca898 100755 --- a/contrib/vagrant/start.sh +++ b/contrib/vagrant/start.sh @@ -8,30 +8,30 @@ chmod a+x "$dir/restart.sh" # Master's IPv4 address. Workers' IPv4 address will have their IP incremented by # 1. The netmask used will be /24 -export 'MASTER_IPV4'=${MASTER_IPV4:-"192.168.33.11"} +export 'MASTER_IPV4'=${MASTER_IPV4:-"192.168.60.11"} # NFS address is only set if NFS option is active. This will create a new # network interface for each VM with starting on this IP. This IP will be # available to reach from the host. -export 'MASTER_IPV4_NFS'=${MASTER_IPV4_NFS:-"192.168.34.11"} +export 'MASTER_IPV4_NFS'=${MASTER_IPV4_NFS:-"192.168.61.11"} # Enable IPv4 mode. It's enabled by default since it's required for several # runtime tests. export 'IPV4'=${IPV4:-1} # Exposed IPv6 node CIDR, only set if IPV4 is disabled. Each node will be setup # with a IPv6 network available from the host with $IPV6_PUBLIC_CIDR + -# 6to4($MASTER_IPV4). For IPv4 "192.168.33.11" we will have for example: +# 6to4($MASTER_IPV4). For IPv4 "192.168.60.11" we will have for example: # master : FD00::B/16 # worker 1: FD00::C/16 # The netmask used will be /16 export 'IPV6_PUBLIC_CIDR'=${IPV4+"FD00::"} # Internal IPv6 node CIDR, always set up by default. Each node will be setup # with a IPv6 network available from the host with IPV6_INTERNAL_CIDR + -# 6to4($MASTER_IPV4). For IPv4 "192.168.33.11" we will have for example: +# 6to4($MASTER_IPV4). For IPv4 "192.168.60.11" we will have for example: # master : FD01::B/16 # worker 1: FD01::C/16 # The netmask used will be /16 export 'IPV6_INTERNAL_CIDR'=${IPV4+"FD01::"} # Cilium IPv6 node CIDR. Each node will be setup with IPv6 network of -# $CILIUM_IPV6_NODE_CIDR + 6to4($MASTER_IPV4). For IPv4 "192.168.33.11" we will +# $CILIUM_IPV6_NODE_CIDR + 6to4($MASTER_IPV4). 
For IPv4 "192.168.60.11" we will # have for example: # master : FD02::0:0:0/96 # worker 1: FD02::1:0:0/96 diff --git a/examples/kubernetes-egress-gateway/egress-ip-deployment.yaml b/examples/kubernetes-egress-gateway/egress-ip-deployment.yaml index 0343a4f67ce6e..59fc7465ec2c1 100644 --- a/examples/kubernetes-egress-gateway/egress-ip-deployment.yaml +++ b/examples/kubernetes-egress-gateway/egress-ip-deployment.yaml @@ -39,7 +39,7 @@ spec: privileged: true env: - name: EGRESS_IPS - value: "192.168.33.100/24 192.168.33.101/24" + value: "192.168.60.100/24 192.168.60.101/24" args: - "for i in $EGRESS_IPS; do ip address add $i dev enp0s8; done; sleep 10000000" lifecycle: diff --git a/examples/kubernetes-egress-gateway/egress-nat-policy.yaml b/examples/kubernetes-egress-gateway/egress-nat-policy.yaml index 4a505c4087bca..1abb4c4d50625 100644 --- a/examples/kubernetes-egress-gateway/egress-nat-policy.yaml +++ b/examples/kubernetes-egress-gateway/egress-nat-policy.yaml @@ -15,5 +15,5 @@ spec: # matchLabels: # ns: default destinationCIDRs: - - 192.168.33.13/32 - egressSourceIP: "192.168.33.100" + - 192.168.60.13/32 + egressSourceIP: "192.168.60.100" diff --git a/examples/policies/host/lock-down-dev-vms-cidr-node.yaml b/examples/policies/host/lock-down-dev-vms-cidr-node.yaml index 62a5d1b888d50..afb3fe4b0c5ee 100644 --- a/examples/policies/host/lock-down-dev-vms-cidr-node.yaml +++ b/examples/policies/host/lock-down-dev-vms-cidr-node.yaml @@ -12,7 +12,7 @@ spec: - fromEntities: - health - fromCIDR: - - 192.168.33.0/24 + - 192.168.60.0/24 # SSH access to the VMs - fromEntities: @@ -23,7 +23,7 @@ spec: protocol: TCP - fromCIDR: - - 192.168.33.0/24 + - 192.168.60.0/24 toPorts: - ports: # VXLAN tunnels between nodes @@ -56,7 +56,7 @@ spec: - port: "4240" protocol: TCP - fromCIDR: - - 192.168.33.0/24 + - 192.168.60.0/24 toPorts: - ports: - port: "4240" @@ -86,7 +86,7 @@ spec: - toEntities: - health - toCIDR: - - 192.168.33.0/24 + - 192.168.60.0/24 # DNS traffic to kube-dns - toEndpoints: @@ -103,7 +103,7 @@ spec: protocol: UDP - toCIDR: - - 192.168.33.0/24 + - 192.168.60.0/24 toPorts: - ports: # VXLAN tunnels between nodes @@ -126,7 +126,7 @@ spec: - port: "4240" protocol: TCP - toCIDR: - - 192.168.33.0/24 + - 192.168.60.0/24 toPorts: - ports: - port: "4240" diff --git a/pkg/hubble/parser/threefour/parser_test.go b/pkg/hubble/parser/threefour/parser_test.go index bb966a501fd67..0b395639e2a2f 100644 --- a/pkg/hubble/parser/threefour/parser_test.go +++ b/pkg/hubble/parser/threefour/parser_test.go @@ -59,7 +59,7 @@ func TestL34DecodeEmpty(t *testing.T) { func TestL34Decode(t *testing.T) { //SOURCE DESTINATION TYPE SUMMARY - //192.168.33.11:6443(sun-sr-https) 10.16.236.178:54222 L3/4 TCP Flags: ACK + //192.168.60.11:6443(sun-sr-https) 10.16.236.178:54222 L3/4 TCP Flags: ACK d := []byte{ 4, 7, 0, 0, 7, 124, 26, 57, 66, 0, 0, 0, 66, 0, 0, 0, // NOTIFY_CAPTURE_HDR 1, 0, 0, 0, // source labels @@ -70,7 +70,7 @@ func TestL34Decode(t *testing.T) { 0, 0, 0, 0, // ifindex 246, 141, 178, 45, 33, 217, 246, 141, 178, 45, 33, 217, 8, 0, 69, 0, 0, 52, 234, 28, 64, 0, 64, 6, 120, 49, 192, - 168, 33, 11, 10, 16, 236, 178, 25, 43, 211, 206, 42, 239, 210, 28, 180, + 168, 60, 11, 10, 16, 236, 178, 25, 43, 211, 206, 42, 239, 210, 28, 180, 152, 129, 103, 128, 16, 1, 152, 216, 156, 0, 0, 1, 1, 8, 10, 0, 90, 176, 98, 0, 90, 176, 97, 0, 0} @@ -101,8 +101,8 @@ func TestL34Decode(t *testing.T) { OnGetNamesOf: func(epID uint32, ip net.IP) (names []string) { if epID == 1234 { switch { - case ip.Equal(net.ParseIP("192.168.33.11")): - 
return []string{"host-192.168.33.11"} + case ip.Equal(net.ParseIP("192.168.60.11")): + return []string{"host-192.168.60.11"} } } return nil @@ -110,17 +110,17 @@ func TestL34Decode(t *testing.T) { } ipGetter := &testutils.FakeIPGetter{ OnGetK8sMetadata: func(ip net.IP) *ipcache.K8sMetadata { - if ip.String() == "192.168.33.11" { + if ip.String() == "192.168.60.11" { return &ipcache.K8sMetadata{ Namespace: "remote", - PodName: "pod-192.168.33.11", + PodName: "pod-192.168.60.11", } } return nil }, OnLookupSecIDByIP: func(ip net.IP) (ipcache.Identity, bool) { // pretend IP belongs to a pod on a remote node - if ip.String() == "192.168.33.11" { + if ip.String() == "192.168.60.11" { // This numeric identity will be ignored because the above // TraceNotify event already contains the source identity return ipcache.Identity{ @@ -133,7 +133,7 @@ func TestL34Decode(t *testing.T) { } serviceGetter := &testutils.FakeServiceGetter{ OnGetServiceByAddr: func(ip net.IP, port uint16) *flowpb.Service { - if ip.Equal(net.ParseIP("192.168.33.11")) && (port == 6443) { + if ip.Equal(net.ParseIP("192.168.60.11")) && (port == 6443) { return &flowpb.Service{ Name: "service-1234", Namespace: "remote", @@ -156,11 +156,11 @@ func TestL34Decode(t *testing.T) { err = parser.Decode(d, f) require.NoError(t, err) - assert.Equal(t, []string{"host-192.168.33.11"}, f.GetSourceNames()) - assert.Equal(t, "192.168.33.11", f.GetIP().GetSource()) + assert.Equal(t, []string{"host-192.168.60.11"}, f.GetSourceNames()) + assert.Equal(t, "192.168.60.11", f.GetIP().GetSource()) assert.True(t, f.GetIP().GetEncrypted()) assert.Equal(t, uint32(6443), f.L4.GetTCP().GetSourcePort()) - assert.Equal(t, "pod-192.168.33.11", f.GetSource().GetPodName()) + assert.Equal(t, "pod-192.168.60.11", f.GetSource().GetPodName()) assert.Equal(t, "remote", f.GetSource().GetNamespace()) assert.Equal(t, "service-1234", f.GetSourceService().GetName()) assert.Equal(t, "remote", f.GetSourceService().GetNamespace()) @@ -248,7 +248,7 @@ func BenchmarkL34Decode(b *testing.B) { d := []byte{4, 7, 0, 0, 7, 124, 26, 57, 66, 0, 0, 0, 66, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 246, 141, 178, 45, 33, 217, 246, 141, 178, 45, 33, 217, 8, 0, 69, 0, 0, 52, 234, 28, 64, 0, 64, 6, 120, 49, 192, - 168, 33, 11, 10, 16, 236, 178, 25, 43, 211, 206, 42, 239, 210, 28, 180, 152, + 168, 60, 11, 10, 16, 236, 178, 25, 43, 211, 206, 42, 239, 210, 28, 180, 152, 129, 103, 128, 16, 1, 152, 216, 156, 0, 0, 1, 1, 8, 10, 0, 90, 176, 98, 0, 90, 176, 97, 0, 0} diff --git a/pkg/maps/ctmap/ctmap_privileged_test.go b/pkg/maps/ctmap/ctmap_privileged_test.go index d37161c0acc6a..52372bcd1d4a9 100644 --- a/pkg/maps/ctmap/ctmap_privileged_test.go +++ b/pkg/maps/ctmap/ctmap_privileged_test.go @@ -116,15 +116,15 @@ func (k *CTMapTestSuite) TestCtGcIcmp(c *C) { defer ctMap.Map.Unpin() // Create the following entries and check that they get GC-ed: - // - CT: ICMP OUT 192.168.34.11:38193 -> 192.168.34.12:0 <..> - // - NAT: ICMP IN 192.168.34.12:0 -> 192.168.34.11:38193 XLATE_DST <..> - // ICMP OUT 192.168.34.11:38193 -> 192.168.34.12:0 XLATE_SRC <..> + // - CT: ICMP OUT 192.168.61.11:38193 -> 192.168.61.12:0 <..> + // - NAT: ICMP IN 192.168.61.12:0 -> 192.168.61.11:38193 XLATE_DST <..> + // ICMP OUT 192.168.61.11:38193 -> 192.168.61.12:0 XLATE_SRC <..> ctKey := &CtKey4Global{ tuple.TupleKey4Global{ tuple.TupleKey4{ - SourceAddr: types.IPv4{192, 168, 34, 12}, - DestAddr: types.IPv4{192, 168, 34, 11}, + SourceAddr: types.IPv4{192, 168, 61, 12}, + DestAddr: types.IPv4{192, 168, 61, 11}, 
SourcePort: 0x3195, DestPort: 0, NextHeader: u8proto.ICMP, @@ -144,8 +144,8 @@ func (k *CTMapTestSuite) TestCtGcIcmp(c *C) { natKey := &nat.NatKey4{ tuple.TupleKey4Global{ tuple.TupleKey4{ - DestAddr: types.IPv4{192, 168, 34, 12}, - SourceAddr: types.IPv4{192, 168, 34, 11}, + DestAddr: types.IPv4{192, 168, 61, 12}, + SourceAddr: types.IPv4{192, 168, 61, 11}, DestPort: 0, SourcePort: 0x3195, NextHeader: u8proto.ICMP, @@ -156,7 +156,7 @@ func (k *CTMapTestSuite) TestCtGcIcmp(c *C) { natVal := &nat.NatEntry4{ Created: 37400, HostLocal: 1, - Addr: types.IPv4{192, 168, 34, 11}, + Addr: types.IPv4{192, 168, 61, 11}, Port: 0x3195, } err = bpf.UpdateElement(natMap.Map.GetFd(), natMap.Map.Name(), unsafe.Pointer(natKey), @@ -165,8 +165,8 @@ func (k *CTMapTestSuite) TestCtGcIcmp(c *C) { natKey = &nat.NatKey4{ tuple.TupleKey4Global{ tuple.TupleKey4{ - SourceAddr: types.IPv4{192, 168, 34, 12}, - DestAddr: types.IPv4{192, 168, 34, 11}, + SourceAddr: types.IPv4{192, 168, 61, 12}, + DestAddr: types.IPv4{192, 168, 61, 11}, SourcePort: 0, DestPort: 0x3195, NextHeader: u8proto.ICMP, @@ -177,7 +177,7 @@ func (k *CTMapTestSuite) TestCtGcIcmp(c *C) { natVal = &nat.NatEntry4{ Created: 37400, HostLocal: 1, - Addr: types.IPv4{192, 168, 34, 11}, + Addr: types.IPv4{192, 168, 61, 11}, Port: 0x3195, } err = bpf.UpdateElement(natMap.Map.GetFd(), natMap.Map.Name(), unsafe.Pointer(natKey), @@ -247,9 +247,9 @@ func (k *CTMapTestSuite) TestOrphanNatGC(c *C) { // to show for completion): // // - NodePort request from outside (subject to NodePort SNAT): - // CT: TCP OUT 192.168.34.1:63000 -> 10.0.1.99:80 - // NAT: TCP IN 10.0.1.99:80 -> 10.0.0.134:63000 XLATE_DST 192.168.34.1:63000 - // NAT: TCP OUT 192.168.34.1:63000 -> 10.0.1.99:80 XLATE_SRC 10.0.0.134:63000 + // CT: TCP OUT 192.168.61.1:63000 -> 10.0.1.99:80 + // NAT: TCP IN 10.0.1.99:80 -> 10.0.0.134:63000 XLATE_DST 192.168.61.1:63000 + // NAT: TCP OUT 192.168.61.1:63000 -> 10.0.1.99:80 XLATE_SRC 10.0.0.134:63000 // // - Local endpoint request to outside (subject to BPF-masq): // CT: TCP OUT 10.0.1.99:34520 -> 1.1.1.1:80 diff --git a/pkg/node/address_test.go b/pkg/node/address_test.go index 76561964ee407..f368ad436502d 100644 --- a/pkg/node/address_test.go +++ b/pkg/node/address_test.go @@ -155,7 +155,7 @@ func (s *NodeSuite) Test_getCiliumHostIPsFromFile(c *C) { cilium.v6.internal.str f00d::a00:0:0:a4ad cilium.v6.nodeport.str [] - cilium.v4.external.str 192.168.33.11 + cilium.v4.external.str 192.168.60.11 cilium.v4.internal.str 10.0.0.2 cilium.v4.nodeport.str [] diff --git a/pkg/wireguard/agent/agent_test.go b/pkg/wireguard/agent/agent_test.go index 412d416200230..456ba5c276644 100644 --- a/pkg/wireguard/agent/agent_test.go +++ b/pkg/wireguard/agent/agent_test.go @@ -50,12 +50,12 @@ func (f *fakeWgClient) ConfigureDevice(name string, cfg wgtypes.Config) error { var ( k8s1NodeName = "k8s1" k8s1PubKey = "YKQF5gwcQrsZWzxGd4ive+IeCOXjPN4aS9jiMSpAlCg=" - k8s1NodeIPv4 = net.ParseIP("192.168.33.11") + k8s1NodeIPv4 = net.ParseIP("192.168.60.11") k8s1NodeIPv6 = net.ParseIP("fd01::b") k8s2NodeName = "k8s2" k8s2PubKey = "lH+Xsa0JClu1syeBVbXN0LZNQVB6rTPBzbzWOHwQLW4=" - k8s2NodeIPv4 = net.ParseIP("192.168.33.12") + k8s2NodeIPv4 = net.ParseIP("192.168.60.12") k8s2NodeIPv6 = net.ParseIP("fd01::c") pod1IPv4Str = "10.0.0.1" diff --git a/test/Vagrantfile b/test/Vagrantfile index 372176d689643..88c113371c5af 100644 --- a/test/Vagrantfile +++ b/test/Vagrantfile @@ -100,10 +100,10 @@ Vagrant.configure("2") do |config| server.vm.hostname = "runtime" server.vm.network "private_network", - ip: 
"192.168.36.10", + ip: "192.168.56.10", virtualbox__intnet: "cilium-k8s#{$BUILD_NUMBER}-#{$JOB_NAME}-#{$K8S_VERSION}" server.vm.network "private_network", - ip: "192.168.37.10", + ip: "192.168.57.10", virtualbox__intnet: "cilium-k8s-2#{$BUILD_NUMBER}-#{$JOB_NAME}-#{$K8S_VERSION}" # @TODO: Clean this one when https://github.com/hashicorp/vagrant/issues/9822 is fixed. @@ -119,7 +119,7 @@ Vagrant.configure("2") do |config| # This network is only used by NFS if $NFS # This network is only used by NFS - server.vm.network "private_network", ip: "192.168.38.10" + server.vm.network "private_network", ip: "192.168.58.10" server.vm.synced_folder cilium_dir, cilium_path, type: "nfs", nfs_udp: false, mount_options: $NFS_OPTS else server.vm.synced_folder cilium_dir, cilium_path @@ -172,10 +172,10 @@ Vagrant.configure("2") do |config| auto_correct: true end server.vm.network "private_network", - ip: "192.168.36.1#{i}", + ip: "192.168.56.1#{i}", virtualbox__intnet: "cilium-k8s#{$BUILD_NUMBER}-#{$JOB_NAME}-#{$K8S_VERSION}" server.vm.network "private_network", - ip: "192.168.37.1#{i}", + ip: "192.168.57.1#{i}", virtualbox__intnet: "cilium-k8s-2#{$BUILD_NUMBER}-#{$JOB_NAME}-#{$K8S_VERSION}" # @TODO: Clean this one when https://github.com/hashicorp/vagrant/issues/9822 is fixed. @@ -190,7 +190,7 @@ Vagrant.configure("2") do |config| if $NFS # This network is only used by NFS - server.vm.network "private_network", ip: "192.168.38.1#{i}" + server.vm.network "private_network", ip: "192.168.58.1#{i}" server.vm.synced_folder cilium_dir, cilium_path, type: "nfs", nfs_udp: false, mount_options: $NFS_OPTS else server.vm.synced_folder cilium_dir, cilium_path @@ -203,7 +203,7 @@ Vagrant.configure("2") do |config| server.vm.provision "shell" do |sh| sh.path = "./provision/k8s_install.sh" sh.args = [ - "k8s#{i}", "192.168.36.1#{i}", "#{$K8S_VERSION}", + "k8s#{i}", "192.168.56.1#{i}", "#{$K8S_VERSION}", "#{$IPv6}", "#{$CONTAINER_RUNTIME}", "#{$CNI_INTEGRATION}"] sh.env = {"CILIUM_IMAGE" => "#{$CILIUM_IMAGE}", "CILIUM_TAG" => "#{$CILIUM_TAG}", diff --git a/test/k8sT/manifests/ccnp-host-ingress-from-cidr-to-ports.yaml b/test/k8sT/manifests/ccnp-host-ingress-from-cidr-to-ports.yaml index 2d34a0c7c3edc..728833a17aacd 100644 --- a/test/k8sT/manifests/ccnp-host-ingress-from-cidr-to-ports.yaml +++ b/test/k8sT/manifests/ccnp-host-ingress-from-cidr-to-ports.yaml @@ -11,4 +11,4 @@ spec: - port: "80" protocol: TCP fromCIDR: - - 192.168.36.13/32 + - 192.168.56.13/32 diff --git a/test/k8sT/manifests/ccnp-host-policy-nodeport-tests.yaml b/test/k8sT/manifests/ccnp-host-policy-nodeport-tests.yaml index a1b04a575079e..0374ddcbae9cc 100644 --- a/test/k8sT/manifests/ccnp-host-policy-nodeport-tests.yaml +++ b/test/k8sT/manifests/ccnp-host-policy-nodeport-tests.yaml @@ -24,7 +24,7 @@ spec: - remote-node # Kubelet to node without Cilium - toCIDR: - - 192.168.36.13/32 + - 192.168.56.13/32 toPorts: - ports: - port: "10250" diff --git a/test/k8sT/manifests/cnp-ingress-from-cidr-to-ports.yaml b/test/k8sT/manifests/cnp-ingress-from-cidr-to-ports.yaml index 2bd417f03be86..ee750e7bc10c1 100644 --- a/test/k8sT/manifests/cnp-ingress-from-cidr-to-ports.yaml +++ b/test/k8sT/manifests/cnp-ingress-from-cidr-to-ports.yaml @@ -13,4 +13,4 @@ spec: - port: "80" protocol: TCP fromCIDR: - - 192.168.36.13/32 + - 192.168.56.13/32 diff --git a/test/k8sT/manifests/externalIPs/README.md b/test/k8sT/manifests/externalIPs/README.md index 1856946d1c449..ce3932348f8a5 100644 --- a/test/k8sT/manifests/externalIPs/README.md +++ b/test/k8sT/manifests/externalIPs/README.md 
@@ -29,7 +29,7 @@ kubectl apply -f test/k8sT/manifests/externalIPs ``` We now have a 2 externalIPs services exposed in both nodes. We have 3 externalIPs -configured on each service, 2 of those IPs (`192.168.33.11` and `192.168.34.11`) +configured on each service, 2 of those IPs (`192.168.60.11` and `192.168.61.11`) should belong to k8s1, the 3rd (`192.0.2.233`) represent a externalIP that is routable in the cluster. @@ -46,7 +46,7 @@ to the cluster running those 2 nodes: TODO: provide a way to run the script automatically for a 3rd host. Execute the same command **without** the `-c` flag in the host that is hosting -both VMs. **Do not forget** to run `sudo ip route add 192.0.2.0/24 via 192.168.34.11` so +both VMs. **Do not forget** to run `sudo ip route add 192.0.2.0/24 via 192.168.61.11` so you can actually make requests to `k8s1` with the destination IP `192.0.2.233` Also, **DO NOT FORGET** the remove the route after being done with the test in diff --git a/test/k8sT/manifests/externalIPs/matrix.bash b/test/k8sT/manifests/externalIPs/matrix.bash index 75db250587c3d..509b4c30a3154 100644 --- a/test/k8sT/manifests/externalIPs/matrix.bash +++ b/test/k8sT/manifests/externalIPs/matrix.bash @@ -108,8 +108,8 @@ if [[ -n "${namespace}" && -z "${ips}" ]]; then fi done svcs=$(kubectl get svc -n ${namespace} -o jsonpath="{range .items[*]}{.metadata.name}{'-cluster-ip='}{.spec.clusterIP}{'\n'}{end}") - svcs_str="svc-a-external-ips-k8s1-public=192.0.2.233 svc-a-external-ips-k8s1-host-public=192.168.34.11 svc-a-external-ips-k8s1-host-private=192.168.33.11 \ -svc-b-external-ips-k8s1-public=192.0.2.233 svc-b-external-ips-k8s1-host-public=192.168.34.11 svc-b-external-ips-k8s1-host-private=192.168.33.11 \ + svcs_str="svc-a-external-ips-k8s1-public=192.0.2.233 svc-a-external-ips-k8s1-host-public=192.168.61.11 svc-a-external-ips-k8s1-host-private=192.168.60.11 \ +svc-b-external-ips-k8s1-public=192.0.2.233 svc-b-external-ips-k8s1-host-public=192.168.61.11 svc-b-external-ips-k8s1-host-private=192.168.60.11 \ localhost=127.0.0.1 " for svc in ${svcs}; do diff --git a/test/k8sT/manifests/externalIPs/node_to_node.go b/test/k8sT/manifests/externalIPs/node_to_node.go index 6a5f8c7feae5c..3144601fd6cd0 100644 --- a/test/k8sT/manifests/externalIPs/node_to_node.go +++ b/test/k8sT/manifests/externalIPs/node_to_node.go @@ -58,49 +58,49 @@ var ( "svc-a-external-ips-k8s1-host-public": { "svc-a-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-a-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "82", Expected: "app1", }, "svc-b-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-b-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30002", Expected: "app1", }, "svc-c-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30003", 
Expected: "app2", }, "svc-d-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30004", Expected: "app4", }, "svc-e-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30005", Expected: "app6", }, @@ -108,49 +108,49 @@ var ( "svc-a-external-ips-k8s1-host-private": { "svc-a-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-a-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "82", Expected: "app1", }, "svc-b-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-b-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30002", Expected: "app1", }, "svc-c-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30003", Expected: "app2", }, "svc-d-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30004", Expected: "app4", }, "svc-e-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30005", Expected: "app6", }, @@ -208,49 +208,49 @@ var ( "svc-b-external-ips-k8s1-host-public": { "svc-a-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-a-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "82", Expected: "app1", }, "svc-b-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-b-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30002", Expected: "app1", }, "svc-c-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30003", Expected: "app2", }, "svc-d-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30004", Expected: "app4", }, "svc-e-node-port-node-port": { Description: 
"svc-b-external-ips-k8s1-host-public:svc-e-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30005", Expected: "app6", }, @@ -258,49 +258,49 @@ var ( "svc-b-external-ips-k8s1-host-private": { "svc-a-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-a-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "82", Expected: "app1", }, "svc-b-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-b-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30002", Expected: "app1", }, "svc-c-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30003", Expected: "app2", }, "svc-d-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30004", Expected: "app4", }, "svc-e-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30005", Expected: "app6", }, diff --git a/test/k8sT/manifests/externalIPs/other_node_to_node.go b/test/k8sT/manifests/externalIPs/other_node_to_node.go index bad29e3c8e51f..e150846eb6e58 100644 --- a/test/k8sT/manifests/externalIPs/other_node_to_node.go +++ b/test/k8sT/manifests/externalIPs/other_node_to_node.go @@ -58,49 +58,49 @@ var ( "svc-a-external-ips-k8s1-host-public": { "svc-a-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-a-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "82", Expected: "app1", }, "svc-b-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-b-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30002", Expected: "app1", }, "svc-c-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30003", Expected: "app2", }, "svc-d-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30004", Expected: "app4", }, "svc-e-node-port-node-port": { Description: 
"svc-a-external-ips-k8s1-host-public:svc-e-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30005", Expected: "app6", SkipReason: "Because we SNAT the request. @dborkmann will fix it", @@ -109,65 +109,65 @@ var ( "svc-a-external-ips-k8s1-host-private": { "svc-a-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-a-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "82", Expected: "app1", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-b-external-ips-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-b-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30002", Expected: "app1", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-c-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30003", Expected: "app2", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-d-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30004", Expected: "app4", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-e-node-port-node-port": { Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30005", Expected: "app6", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, }, @@ -224,49 +224,49 @@ var ( "svc-b-external-ips-k8s1-host-public": { "svc-a-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-a-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "82", Expected: "app1", }, "svc-b-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-b-external-ips-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30002", 
Expected: "app1", }, "svc-c-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-svc-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30003", Expected: "app2", }, "svc-d-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30004", Expected: "app4", }, "svc-e-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-node-port", - IP: "192.168.34.11", + IP: "192.168.61.11", Port: "30005", Expected: "app6", SkipReason: "Because we SNAT the request. @dborkmann will fix it", @@ -275,65 +275,65 @@ var ( "svc-b-external-ips-k8s1-host-private": { "svc-a-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-a-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "82", Expected: "app1", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-b-external-ips-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-b-external-ips-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30002", Expected: "app1", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-c-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "83", Expected: "connection refused", }, "svc-d-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "84", Expected: "connection refused", }, "svc-e-node-port-svc-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-svc-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "85", Expected: "connection refused", }, "svc-c-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30003", Expected: "app2", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with the IP 192.168.34.11 so we can't translate " + + "on the interface with the IP 192.168.61.11 so we can't translate " + "traffic incoming into this interface", }, "svc-d-node-port-node-port": { Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-node-port", - IP: "192.168.33.11", + IP: "192.168.60.11", Port: "30004", Expected: "app4", SkipReason: "on the receiving node we only install a BPF program " + - "on the interface with 
the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30005",
 				Expected:    "app6",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 		},
diff --git a/test/k8sT/manifests/externalIPs/pod_other_node_to_node.go b/test/k8sT/manifests/externalIPs/pod_other_node_to_node.go
index 7919dc483b72e..711f78ecc7f77 100644
--- a/test/k8sT/manifests/externalIPs/pod_other_node_to_node.go
+++ b/test/k8sT/manifests/externalIPs/pod_other_node_to_node.go
@@ -58,49 +58,49 @@ var (
 		"svc-a-external-ips-k8s1-host-public": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-a-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-b-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30003",
 				Expected:    "app2",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30004",
 				Expected:    "app4",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30005",
 				Expected:    "app6",
 				SkipReason: "Because we SNAT the request. @dborkmann will fix it",
@@ -109,59 +109,59 @@ var (
 		"svc-a-external-ips-k8s1-host-private": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-a-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-b-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30003",
 				Expected:    "app2",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30004",
 				Expected:    "app4",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30005",
 				Expected:    "app6",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 		},
@@ -218,49 +218,49 @@ var (
 		"svc-b-external-ips-k8s1-host-public": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-a-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-b-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30003",
 				Expected:    "app2",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30004",
 				Expected:    "app4",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30005",
 				Expected:    "app6",
 				SkipReason: "Because we SNAT the request. @dborkmann will fix it",
@@ -269,59 +269,59 @@ var (
 		"svc-b-external-ips-k8s1-host-private": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-a-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-b-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30003",
 				Expected:    "app2",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30004",
 				Expected:    "app4",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30005",
 				Expected:    "app6",
 				SkipReason: "on the receiving node we only install a BPF program " +
-					"on the interface with the IP 192.168.34.11 so we can't translate " +
+					"on the interface with the IP 192.168.61.11 so we can't translate " +
 					"traffic incoming into this interface",
 			},
 		},
diff --git a/test/k8sT/manifests/externalIPs/pod_to_node.go b/test/k8sT/manifests/externalIPs/pod_to_node.go
index aa1971386fe97..3f050794857ae 100644
--- a/test/k8sT/manifests/externalIPs/pod_to_node.go
+++ b/test/k8sT/manifests/externalIPs/pod_to_node.go
@@ -58,49 +58,49 @@ var (
 		"svc-a-external-ips-k8s1-host-public": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-a-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-b-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-c-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30003",
 				Expected:    "app2",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-d-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30004",
 				Expected:    "app4",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-public:svc-e-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30005",
 				Expected:    "app6",
 			},
@@ -108,49 +108,49 @@ var (
 		"svc-a-external-ips-k8s1-host-private": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-a-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-b-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-c-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30003",
 				Expected:    "app2",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-d-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30004",
 				Expected:    "app4",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-a-external-ips-k8s1-host-private:svc-e-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30005",
 				Expected:    "app6",
 			},
@@ -208,49 +208,49 @@ var (
 		"svc-b-external-ips-k8s1-host-public": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-a-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-b-external-ips-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-svc-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-c-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30003",
 				Expected:    "app2",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-d-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30004",
 				Expected:    "app4",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-public:svc-e-node-port-node-port",
-				IP:          "192.168.34.11",
+				IP:          "192.168.61.11",
 				Port:        "30005",
 				Expected:    "app6",
 			},
@@ -258,49 +258,49 @@ var (
 		"svc-b-external-ips-k8s1-host-private": {
 			"svc-a-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-a-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "82",
 				Expected:    "app1",
 			},
 			"svc-b-external-ips-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-b-external-ips-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30002",
 				Expected:    "app1",
 			},
 			"svc-c-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "83",
 				Expected:    "connection refused",
 			},
 			"svc-d-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "84",
 				Expected:    "connection refused",
 			},
 			"svc-e-node-port-svc-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-svc-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "85",
 				Expected:    "connection refused",
 			},
 			"svc-c-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-c-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30003",
 				Expected:    "app2",
 			},
 			"svc-d-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-d-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30004",
 				Expected:    "app4",
 			},
 			"svc-e-node-port-node-port": {
 				Description: "svc-b-external-ips-k8s1-host-private:svc-e-node-port-node-port",
-				IP:          "192.168.33.11",
+				IP:          "192.168.60.11",
 				Port:        "30005",
 				Expected:    "app6",
 			},
diff --git a/test/k8sT/manifests/externalIPs/svcs/svc-a-external-ips.yaml b/test/k8sT/manifests/externalIPs/svcs/svc-a-external-ips.yaml
index 3b18690177181..8eaa8a587e5b2 100644
--- a/test/k8sT/manifests/externalIPs/svcs/svc-a-external-ips.yaml
+++ b/test/k8sT/manifests/externalIPs/svcs/svc-a-external-ips.yaml
@@ -10,8 +10,8 @@ spec:
     id: app1
   externalIPs:
   - 192.0.2.233
-  - 192.168.34.11
-  - 192.168.33.11
+  - 192.168.61.11
+  - 192.168.60.11
   ports:
   - protocol: TCP
     port: 82
diff --git a/test/k8sT/manifests/externalIPs/svcs/svc-b-external-ips.yaml b/test/k8sT/manifests/externalIPs/svcs/svc-b-external-ips.yaml
index 1f70f4ffbff03..d940d5f0c20f3 100644
--- a/test/k8sT/manifests/externalIPs/svcs/svc-b-external-ips.yaml
+++ b/test/k8sT/manifests/externalIPs/svcs/svc-b-external-ips.yaml
@@ -10,8 +10,8 @@ spec:
     id: app1
   externalIPs:
   - 192.0.2.233
-  - 192.168.34.11
-  - 192.168.33.11
+  - 192.168.61.11
+  - 192.168.60.11
   ports:
   - protocol: TCP
     port: 30002
diff --git a/test/kubernetes-test.sh b/test/kubernetes-test.sh
index 4f8ae0069b88f..40bd1bd916635 100755
--- a/test/kubernetes-test.sh
+++ b/test/kubernetes-test.sh
@@ -78,9 +78,9 @@ GO111MODULE=off make ginkgo
 GO111MODULE=off make WHAT='test/e2e/e2e.test'
 
 export KUBECTL_PATH=/usr/bin/kubectl
-export KUBE_MASTER=192.168.36.11
-export KUBE_MASTER_IP=192.168.36.11
-export KUBE_MASTER_URL="https://192.168.36.11:6443"
+export KUBE_MASTER=192.168.56.11
+export KUBE_MASTER_IP=192.168.56.11
+export KUBE_MASTER_URL="https://192.168.56.11:6443"
 
 echo "Running upstream services conformance tests"
 ${HOME}/go/bin/kubetest --provider=local --test \
diff --git a/test/provision/externalworkload_install.sh b/test/provision/externalworkload_install.sh
index 75008e139fadf..8b1eeb7452ee4 100755
--- a/test/provision/externalworkload_install.sh
+++ b/test/provision/externalworkload_install.sh
@@ -6,4 +6,4 @@ cd /home/vagrant/go/src/github.com/cilium/cilium
 # Build docker image
 make docker-cilium-image
 
-CLUSTER_ADDR=192.168.36.11:32379 HOST_IP=192.168.36.10 CILIUM_IMAGE=cilium/cilium:latest contrib/k8s/install-external-workload.sh
+CLUSTER_ADDR=192.168.56.11:32379 HOST_IP=192.168.56.10 CILIUM_IMAGE=cilium/cilium:latest contrib/k8s/install-external-workload.sh
diff --git a/test/provision/k8s_install.sh b/test/provision/k8s_install.sh
index 783095f9889ab..8bedda70beec7 100755
--- a/test/provision/k8s_install.sh
+++ b/test/provision/k8s_install.sh
@@ -22,7 +22,7 @@ IPv6=$4
 CONTAINER_RUNTIME=$5
 
 # Kubeadm default parameters
-export KUBEADM_ADDR='192.168.36.11'
+export KUBEADM_ADDR='192.168.56.11'
 export KUBEADM_POD_CIDR='10.10.0.0/16'
 export KUBEADM_V1BETA2_POD_CIDR='10.10.0.0/16,fd02::/112'
 export KUBEADM_SVC_CIDR='10.96.0.0/12'
@@ -82,12 +82,12 @@ cat <<EOF >> /etc/hosts
 ::1 localhost ip6-localhost ip6-loopback
 ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters
-192.168.36.11 k8s1
-192.168.36.12 k8s2
-192.168.36.13 k8s3
-192.168.36.14 k8s4
-192.168.36.15 k8s5
-192.168.36.16 k8s6
+192.168.56.11 k8s1
+192.168.56.12 k8s2
+192.168.56.13 k8s3
+192.168.56.14 k8s4
+192.168.56.15 k8s5
+192.168.56.16 k8s6
 EOF
 
 # Configure default IPv6 route without this connectivity from host to
diff --git a/tools/dev-doctor/rootcmd.go b/tools/dev-doctor/rootcmd.go
index 67089d99ff8e8..399efff50d548 100644
--- a/tools/dev-doctor/rootcmd.go
+++ b/tools/dev-doctor/rootcmd.go
@@ -189,13 +189,13 @@ func rootCmdRun(cmd *cobra.Command, args []string) {
 		checks = append(checks,
 			etcNFSConfCheck{},
 			&iptablesRuleCheck{
-				rule: []string{"INPUT", "-p", "tcp", "-s", "192.168.34.0/24", "--dport", "111", "-j", "ACCEPT"},
+				rule: []string{"INPUT", "-p", "tcp", "-s", "192.168.61.0/24", "--dport", "111", "-j", "ACCEPT"},
 			},
 			&iptablesRuleCheck{
-				rule: []string{"INPUT", "-p", "tcp", "-s", "192.168.34.0/24", "--dport", "2049", "-j", "ACCEPT"},
+				rule: []string{"INPUT", "-p", "tcp", "-s", "192.168.61.0/24", "--dport", "2049", "-j", "ACCEPT"},
 			},
 			&iptablesRuleCheck{
-				rule: []string{"INPUT", "-p", "tcp", "-s", "192.168.34.0/24", "--dport", "20048", "-j", "ACCEPT"},
+				rule: []string{"INPUT", "-p", "tcp", "-s", "192.168.61.0/24", "--dport", "20048", "-j", "ACCEPT"},
 			},
 		)
 	}