This assumes you already read the development guide to install Go and configure your git client. All command examples are relative to the kubernetes root directory.
Before sending pull requests you should at least make sure your changes have passed both unit and integration tests.
Kubernetes only merges pull requests when unit, integration, and e2e tests are passing, so it is often a good idea to make sure the e2e tests work as well.
- Unit tests should be fully hermetic
  - Only access resources in the test binary.
- All packages and any significant files require unit tests.
- The preferred method of testing multiple scenarios or inputs is table driven testing (see the sketch after this list).
  - Example: TestNamespaceAuthorization
- Unit tests must pass on macOS and Windows platforms.
  - Tests using Linux-specific features must be skipped or compiled out.
  - Skipped is better; compiled out is required when it won't compile.
- Concurrent unit test runs must pass.
  - See coding conventions.
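As a quick illustration of table driven testing, here is a minimal sketch; the package, the Sum function, and the test cases are invented for this example and are not taken from the Kubernetes tree:

package sum

import "testing"

// Sum is a hypothetical function under test; in a real package it would
// live in its own file, with the test in a corresponding _test.go file.
func Sum(nums []int) int {
    total := 0
    for _, n := range nums {
        total += n
    }
    return total
}

// TestSum drives one assertion body over a table of cases, so adding a
// new scenario only means adding a row to the table.
func TestSum(t *testing.T) {
    cases := []struct {
        name  string
        input []int
        want  int
    }{
        {name: "empty", input: nil, want: 0},
        {name: "single element", input: []int{3}, want: 3},
        {name: "multiple elements", input: []int{1, 2, 3}, want: 6},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            if got := Sum(tc.input); got != tc.want {
                t.Errorf("Sum(%v) = %d, want %d", tc.input, got, tc.want)
            }
        })
    }
}

Each case runs as its own subtest via t.Run, so a failure reports the case name and individual cases can be selected with -run.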
make test is the entrypoint for running the unit tests; it ensures that GOPATH is set up correctly.
cd kubernetes
make test # Run all unit tests.
If any unit test fails with a timeout panic (see #1594) on the testing package, you can increase the KUBE_TIMEOUT value as shown below.
make test KUBE_TIMEOUT="-timeout=300s"
You can set go flags by setting the GOFLAGS environment variable.
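For example, to run the unit tests with verbose output:
make test GOFLAGS="-v"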
make test accepts packages as arguments; the k8s.io/kubernetes prefix is added automatically to these:
make test WHAT=./pkg/kubelet # run tests for pkg/kubelet
To run tests for a package and all of its subpackages, you need to append ... to the package path:
make test WHAT=./pkg/api/... # run tests for pkg/api and all its subpackages
To run multiple targets you need quotes:
make test WHAT="./pkg/kubelet ./pkg/scheduler" # run tests for pkg/kubelet and pkg/scheduler
In a shell, it's often handy to use brace expansion:
make test WHAT=./pkg/{kubelet,scheduler} # run tests for pkg/kubelet and pkg/scheduler
You can set the test args using the KUBE_TEST_ARGS environment variable. You can use this to pass the -run argument to go test, which accepts a regular expression for the name of the test that should be run.
# Runs TestValidatePod in pkg/apis/core/validation with the verbose flag set
make test WHAT=./pkg/apis/core/validation GOFLAGS="-v" KUBE_TEST_ARGS='-run ^TestValidatePod$'
# Runs tests that match the regex ValidatePod|ValidateConfigMap in pkg/apis/core/validation
make test WHAT=./pkg/apis/core/validation GOFLAGS="-v" KUBE_TEST_ARGS="-run ValidatePod\|ValidateConfigMap$"
For other supported test flags, see the golang documentation.
Running the same tests repeatedly is one way to root out flakes. You can do this efficiently.
# Have 2 workers run all tests 5 times each (10 total iterations).
make test PARALLEL=2 ITERATION=5
For more advanced ideas please see flaky-tests.md.
Currently, collecting coverage is only supported for the Go unit tests.
To run all unit tests and generate an HTML coverage report, run the following:
make test KUBE_COVER=y
At the end of the run, an HTML report will be generated with the path printed to stdout.
To run tests and collect coverage in only one package, pass its relative path under the kubernetes directory as an argument, for example:
make test WHAT=./pkg/kubectl KUBE_COVER=y
Multiple arguments can be passed, in which case the coverage results will be combined for all tests run.
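For example, a combined coverage run over two packages might look like this (the package paths here are just illustrative):
make test WHAT="./pkg/kubectl ./pkg/kubelet" KUBE_COVER=y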
To run benchmark tests, you'll typically use something like:
make test WHAT=./pkg/scheduler/internal/cache KUBE_TEST_ARGS='-benchmem -run=XXX -bench=BenchmarkExpirePods'
This will do the following:
- -run=XXX is a regular expression filter on the name of test cases to run. Go will execute both the tests matching the -bench regex and the -run regex. Since we only want to execute benchmark tests, we set the -run regex to XXX, which will not match any tests.
- -bench=Benchmark will run test methods with Benchmark in the name
  - See grep -nr Benchmark . for examples
- -benchmem enables memory allocation stats
See go help test and go help testflag for additional info.
You can optionally use go test to run unit tests. For example:
cd kubernetes
# Run unit tests in the kubelet package
go test ./pkg/kubelet
# Run all unit tests found within ./pkg/api and its subdirectories
go test ./pkg/api/...
# Run a specific unit test within a package
go test ./pkg/apis/core/validation -v -run ^TestValidatePods$
# Run benchmark tests
go test ./pkg/scheduler/internal/cache -benchmem -run=XXX -bench=Benchmark
When running tests contained within a staging module, you first need to change to the staging module's subdirectory and then run the tests, like this:
cd kubernetes/staging/src/k8s.io/kubectl
# Run all unit tests within the kubectl staging module
go test ./...
Please refer to Integration Testing in Kubernetes.
Please refer to End-to-End Testing in Kubernetes.
Once you open a PR, prow runs pre-submit tests in CI. You can find more about prow in kubernetes/test-infra and in this blog post on automation involved in testing PRs to Kubernetes.
If you are not a Kubernetes org member, another org member will need to run /ok-to-test on your PR.
Find out more about other commands you can use to interact with prow through GitHub comments.
Click on Details to look at artifacts produced by the test and the cluster under test, to help you debug the failure. These artifacts include:
- test results
- metadata on the test run (including versions of binaries used, test duration)
- output from tests that have failed
- build log showing the full test run
- logs from the cluster under test (k8s components such as kubelet and apiserver, possibly other logs such as etcd and kernel)
- junit xml files
- test coverage files
If the failure seems unrelated to the change you're submitting:
- Is it a flake?
  - Check if a GitHub issue is already open for that flake
    - If not, open a new one (like this example) and label it kind/flake
    - If yes, any help troubleshooting and resolving it is very appreciated. Look at Helping with known flakes for how to do it.
  - Run /retest on your PR to re-trigger the tests
- Is it a failure that shouldn't be happening (in other words, is the test expectation now wrong)?
  - Get in touch with the SIG that your PR is labeled after
    - preferably as a comment on your PR, by tagging the GitHub team (for example a reviewers team for the SIG)
    - write your reasoning as to why you think the test is now outdated and should be changed
    - if you don't get a response in 24 hours, engage with the SIG on their channel on the Kubernetes Slack and/or attend one of the SIG meetings to ask for input.
For known flakes (i.e. with open GitHub issues against them), the community deeply values help in troubleshooting and resolving them. Starting points could be:
- add logs from the failed run you experienced, and any other context to the existing discussion
- if you spot a pattern or identify a root cause, notify or collaborate with the SIG that owns that area to resolve them
  - Figure out corresponding SIG from test name/description
  - Mention the SIG's GitHub handle on the issue, optionally cc the SIG's chair(s) (locate them under kubernetes/community/sig-<name>)
  - Optionally (or if you haven't heard back on the issue after 24h) reach out to the SIG on Slack
testgrid is a visualization of the Kubernetes CI status. It is useful as a way to:
- see the run history of a test you are debugging (access it starting from a gubernator report for that test)
- get an overview of the project's general health
- You can learn more about Testgrid from the Kubecon NA San Diego Contributor Summit
testgrid is organised in:
- tests
  - collection of assertions in a test file
  - each test is typically owned by a single SIG
  - each test is represented as a row on the grid
- jobs
  - collection of tests
  - each job is typically owned by a single SIG
  - each job is represented as a tab
- dashboards
  - collection of jobs
  - each dashboard is represented as a button
  - some dashboards collect jobs/tests in the domain of a specific SIG (named after and owned by those SIGs), while others monitor project-wide health (owned by SIG-release)
All new PRs for tests should attempt to follow these steps in order to help enable a smooth review process:
- The problem statement should clearly describe the intended purpose of the test and why it is needed.
- Get some agreement on how to design your test from the relevant SIG.
- Create the PR.
- Raise awareness of your PR to the respective communities (e.g. via mailing lists, Slack channels, GitHub mentions).