netassert: network security testing for DevSecOps workflows
NOTE: this framework is in beta state as we move towards our first 1.0 release. Please file any issues you find and note the version used.
This is a security testing framework for fast, safe iteration on firewall, routing, and NACL rules for Kubernetes (Network Policies, services) and non-containerised hosts (cloud provider instances, VMs, bare metal). It aggressively parallelises nmap to test outbound network connections and ports from any accessible host, container, or Kubernetes pod by joining the same network namespace as the instance under test.
The alternative is to exec into a container and curl, or to spin up new pods with the same selectors and curl from there. Both approaches have problems: extra tools in the container image, tool installation blocked by immutable root filesystems, or egress prevention stopping downloads. netassert aims to fix this:
- does not rely on a dedicated tool speaking the correct target protocol (e.g. doesn't need curl, a gRPC client, etc.)
- does not bloat the pod under test or increase the pod's attack surface with non-production tooling
- works with `FROM scratch` containers
- is parallelised to run in near-constant time for large or small test suites
- does not appear to the Kubernetes API server to be changing the system under test
- uses TCP/IP (layers 3 and 4) so does not show up in HTTP logs (e.g. `nginx` access logs)
- produces TAP output for humans and build servers
More information and background in this presentation from Configuration Management Camp 2018.
```
$ ./netassert --help

  Usage: netassert [options] [filename]

  Options:
    --offline    Assume image is already on target nodes
    --image      Name of test image (for private/offline registries)
    --debug      More debug
    -h --help    Display this message
```

netassert takes a single YAML file as input. This file lists the hosts to test from, and describes the hosts and ports that they should be able to reach.
It can test from any reachable host, and from Kubernetes pods.
A simple example:
```
host:                 # used for ssh-accessible hosts
  localhost:          # host to run test from, can be anything accessible via SSH
    8.8.8.8: UDP:53   # host and ports to test for access
```
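To run it, save the YAML to a file and pass the filename to netassert (the name `simple-test.yaml` is just an example):

```
# netassert takes the test file as its only positional argument
./netassert simple-test.yaml
```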
A full example:

```
host:                        # used for ssh-accessible hosts
  localhost:                 # host to run test from, can be a remote host
    8.8.8.8: UDP:53          # host and ports to test from localhost
    google.co.uk: 443        # if no protocol is specified then TCP is implied
    control-plane.io: 80, 81, 443, 22   # ports can be comma or space delimited
    kubernetes.io:           # this can be anything SSH can access
      - 443                  # ports can be provided as a list
      - 80
    localhost:               # this tests ports on the local machine
      - 22
      - -999                 # ports can be negated with `-`, this checks that 999 TCP is not open
      - -TCP:30731           # TCP is implied, but can be specified
      - -UDP:1234            # UDP must be explicitly stated, otherwise TCP is assumed
      - -UDP:555

  control-plane.io:          # this must be accessible via ssh (perhaps via ssh-agent), or `localhost`
    8.8.8.8: UDP:53          # this tests 8.8.8.8:53 is accessible from control-plane.io
    8.8.4.4: UDP:53          # this tests 8.8.4.4:53 is accessible from control-plane.io
    google.com: 443          # this tests google.com:443 is accessible from control-plane.io

k8s:                         # used for Kubernetes pods
  deployment:                # only deployments are currently supported
    test-frontend:           # pod name, defaults to the `default` namespace
      test-microservice: 80  # `test-microservice` is the DNS name of the target service
      test-database: -80     # test-frontend should not be able to access test-database port 80

    new-namespace:test-microservice:    # `new-namespace` is the namespace name
      test-database.new-namespace: 80   # longer DNS names can be used for other namespaces
      test-frontend.default: 80

    default:test-database:
      test-frontend.default.svc.cluster.local: 80   # full DNS names can be used
      test-microservice.default.svc.cluster.local: -80
```

To test that localhost can reach 8.8.8.8 and 8.8.4.4 on port 53 UDP:
```
host:
  localhost:
    8.8.8.8: UDP:53
    8.8.4.4: UDP:53
```

What this test does:
- Starts on the test runner host
- Pulls the test container
- Checks that port `UDP:53` is open on `8.8.8.8` and `8.8.4.4` (a manual equivalent is sketched below)
- Shows TAP results
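The UDP check is roughly what you would assert by hand with nmap. This is only an illustration of the intent, not necessarily the exact invocation netassert uses internally:

```
# Manual equivalent of the UDP:53 checks above (illustrative only).
# UDP scans generally require root privileges; expect 53/udp to be
# reported as open (or open|filtered) on both targets.
sudo nmap -sU -p 53 8.8.8.8 8.8.4.4
```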
Test that control-plane.io can reach github.com:
```
host:
  control-plane.io:
    github.com:
      - 22
      - 443
```

What this test does:
- Starts on the test runner host
- SSHes to `control-plane.io`
- Pulls the test container
- Checks that ports `22` and `443` are open (a rough manual equivalent is sketched below)
- Returns TAP results to the test runner host
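Conceptually, the remote case is similar to running the probe yourself over SSH. The sketch below illustrates the idea only; `some-nmap-image` is a placeholder for any image that ships nmap, and this is not netassert's actual command:

```
# Illustration only: probe github.com from control-plane.io over SSH.
ssh control-plane.io \
  docker run --rm some-nmap-image nmap -Pn -p 22,443 github.com
```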
These can be combined, for example to test that localhost can reach control-plane.io on port 22, and that control-plane.io can reach github.com on port 22:

```
host:
  localhost:
    control-plane.io:
      - 22
  control-plane.io:
    github.com:
      - 22
```

Test that a pod can reach 8.8.8.8:
```
k8s:
  deployment:
    some-namespace:my-pod:
      8.8.8.8: UDP:53
```

Test that my-pod in namespace default can reach other-pod in other-namespace, and that other-pod cannot reach my-pod:
```
k8s:
  deployment:
    default:my-pod:
      other-namespace:other-pod: 80
    other-namespace:other-pod:
      default:my-pod: -80
```

How netassert runs these tests:

- from the test host: `netassert test/test-k8s.yaml`
- look up deployments, pods, and namespaces to test in the Kube API
- for each pod, SSH to a worker node running an instance
- connect a test container to the container's network namespace (see the sketch after this list)
- run that pod's test suite from inside the network namespace
- report results via TAP
- the test host gathers TAP results and reports
- the same process applies to non-Kubernetes instances accessible via ssh
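The namespace-joining step can be pictured with plain Docker commands. This is a sketch only, assuming the worker node runs Docker, with a placeholder pod name (`my-pod`) and nmap image; it is not netassert's exact implementation:

```
# Sketch: on the worker node, find a container belonging to the target pod...
TARGET_ID="$(docker ps --filter 'label=io.kubernetes.pod.name=my-pod' -q | head -n 1)"

# ...then run a throwaway container inside that pod's network namespace
# and probe from there. "some-nmap-image" is a placeholder.
docker run --rm --net="container:${TARGET_ID}" \
  some-nmap-image nmap -Pn -p 80 test-microservice
```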
To try this out, create some test deployments and services to run the example tests against:

```
for DEPLOYMENT_TYPE in \
  frontend \
  microservice \
  database \
  ; do

  DEPLOYMENT="test-${DEPLOYMENT_TYPE}"
  kubectl run "${DEPLOYMENT}" \
    --image=busybox \
    --labels=app=web,role="${DEPLOYMENT_TYPE}" \
    --requests='cpu=10m,memory=32Mi' \
    --expose \
    --port 80 \
    -- sh -c "while true; do { printf 'HTTP/1.1 200 OK\r\n\n I am a ${DEPLOYMENT_TYPE}\n'; } | nc -l -p 80; done"

  kubectl scale deployment "${DEPLOYMENT}" --replicas=3
done
```

As we haven't applied any network policies yet, this should FAIL:
```
./netassert test/test-k8s.yaml
```

Apply the network policies that the tests expect:

```
kubectl apply -f resource/net-pol/web-deny-all.yaml
kubectl apply -f resource/net-pol/test-services-allow.yaml
```
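The policy files themselves live in the repository; as a rough illustration of their shape (the real `resource/net-pol/web-deny-all.yaml` may differ), a deny-all ingress policy for the `app=web` pods created above looks like this:

```
# Sketch only: deny all ingress traffic to pods labelled app=web.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
```

The `test-services-allow.yaml` policy then, presumably, re-allows only the specific flows that the test file expects to succeed.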
Now that we've applied the policies that these tests reflect, this should pass:
```
./netassert test/test-k8s.yaml
```

For manual verification of the test results we can exec into the pods under test and probe them directly (see the list above for reasons why this is a bad idea):
```
kubectl exec -it test-frontend-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-microservice
kubectl exec -it test-microservice-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-database
kubectl exec -it test-database-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-frontend
```
These should all pass as they have equivalent network policies.
The network policies do not allow the frontend pods to communicate with the database pods.
Let's verify that manually - this should FAIL:
```
kubectl exec -it test-frontend-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-database
```
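Optionally, clean up the example resources afterwards; these are standard kubectl commands, not netassert functionality:

```
# Remove the example deployments and their services...
kubectl delete deployment,service test-frontend test-microservice test-database

# ...and the network policies that were applied earlier.
kubectl delete -f resource/net-pol/test-services-allow.yaml
kubectl delete -f resource/net-pol/web-deny-all.yaml
```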