How are we supposed to verify that the expected values used by the unit tests are correct?
As an example: kubeflow/kubeflow#3986 appears to be due to a bug in the kustomize manifest, where environment variables were not substituted with the variable value and instead remained set to the variable name, e.g. `$(userid-prefix)`.
I would expect this to be something we catch in the unit tests.
So my expectation would be that I can look at a unit test and verify by inspection that the expected value is what it should be.
But when I look at the unit tests, I see lines like `value: $(userid-header)`.
It looks like we are specifying the kustomize files (i.e. the input) and not the expected output.
For example, in manifests/tests/jupyter-web-app-base_test.go (line 242 at commit 76bd875) we are setting:

```yaml
image: gcr.io/kubeflow-images-public/jupyter-web-app:v0.5.0
imagePullPolicy: $(policy)
```
The expected value for `imagePullPolicy` should not be `$(policy)`. So either that is the input value and not the expected value, or else our tests are wrong.
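As an aside, the specific failure mode in kubeflow/kubeflow#3986 (vars left unsubstituted in the rendered output) could also be caught by a direct check, independent of how expected values are stored. A minimal sketch, assuming the test can shell out to kustomize; the overlay path here is hypothetical, not a description of the current test setup:

```go
package tests

import (
	"os/exec"
	"regexp"
	"testing"
)

// Matches unsubstituted kustomize vars such as $(policy) or $(userid-header).
var unsubstituted = regexp.MustCompile(`\$\([A-Za-z0-9_-]+\)`)

func TestNoUnsubstitutedVars(t *testing.T) {
	// Render the manifests; the path is an assumption about the repo layout.
	out, err := exec.Command("kustomize", "build", "../jupyter/jupyter-web-app/base").CombinedOutput()
	if err != nil {
		t.Fatalf("kustomize build failed: %v\n%s", err, out)
	}
	if matches := unsubstituted.FindAll(out, -1); matches != nil {
		t.Errorf("rendered manifests contain unsubstituted vars: %s", matches)
	}
}
```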
Can someone familiar with the unit test infrastructure help educate me?
My expectation is that the unit tests would work as follows (see the sketch after this list):
- A golden set of manifests for different scenarios would be checked in somewhere (either as Go code or as separate files in a "golden" directory)
- Thus I can look at those golden files to see if the expected value is what it should be
- The tests would compare the generated files to the golden files.
- There might be some scripts or utilities to regenerate the golden files (e.g. by running kustomize)
- During code review it should be easy to inspect the diff to the golden files to allow manual verification of the new golden data
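To make this concrete, here is a minimal sketch of what such a golden-file test could look like in Go. Everything here is illustrative: the overlay path, the `testdata` layout, and the `-update` flag are assumptions, not a description of the existing test infrastructure.

```go
package tests

import (
	"flag"
	"os"
	"os/exec"
	"path/filepath"
	"testing"
)

// -update regenerates the golden files instead of comparing against them;
// the resulting diff is then reviewed in the PR. (Hypothetical flag.)
var update = flag.Bool("update", false, "regenerate golden files")

func TestJupyterWebAppBaseGolden(t *testing.T) {
	// Render the manifests (the input) by actually running kustomize.
	// The overlay path is an assumption about the repo layout.
	out, err := exec.Command("kustomize", "build", "../jupyter/jupyter-web-app/base").CombinedOutput()
	if err != nil {
		t.Fatalf("kustomize build failed: %v\n%s", err, out)
	}

	golden := filepath.Join("testdata", "jupyter-web-app-base.golden.yaml")
	if *update {
		// Regenerate the golden file so the change shows up as a reviewable diff.
		if err := os.WriteFile(golden, out, 0o644); err != nil {
			t.Fatal(err)
		}
		return
	}

	// Compare the generated output against the checked-in expected output.
	want, err := os.ReadFile(golden)
	if err != nil {
		t.Fatal(err)
	}
	if string(out) != string(want) {
		t.Errorf("generated manifests differ from %s; rerun with -update and review the diff", golden)
	}
}
```

This layout also follows the common Go convention of keeping fixtures under `testdata/`, which the go tool ignores when building packages.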