Description
This is an ongoing issue we need to work on, and we would gladly accept contributions from anyone looking to get some experience with testing open-source software.
We have some deliberate randomness in the configurations we test, which causes the code coverage to fluctuate. The result is that even a simple docstring change such as DOC: rename OSX -> macOS as it is the new name #1349 causes our code coverage check to fail, due to a -0.03% change in coverage. We would still like some non-determinism in the configurations we try while testing, so we also need deterministic tests that ensure all parts of our pipeline, with all configurations, are tested properly.
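
One way to get both is to keep the randomized search as-is and add a small, fully seeded test module that walks an explicit list of configurations. Below is a minimal sketch only; `build_pipeline` and the example `CONFIGS` are hypothetical stand-ins, not our actual API:

```python
import random

import numpy as np
import pytest


def build_pipeline(preprocessor, random_state):
    """Hypothetical stand-in for the project's real pipeline constructor."""
    rng = np.random.RandomState(random_state)
    return {"preprocessor": preprocessor, "weights": rng.rand(3)}


@pytest.fixture
def fixed_seed():
    # Pin every source of randomness so each run exercises identical code paths.
    random.seed(0)
    np.random.seed(0)
    return 0


# An explicit, fixed list of configurations that is exercised on every CI run,
# independent of whatever randomized configurations are also tried.
CONFIGS = [
    {"preprocessor": "none"},
    {"preprocessor": "pca"},
]


@pytest.mark.parametrize("config", CONFIGS)
def test_every_listed_config_runs(config, fixed_seed):
    pipeline = build_pipeline(**config, random_state=fixed_seed)
    assert pipeline["weights"].shape == (3,)
```

With something along these lines, the randomized tests can keep fluctuating while the parametrized list guarantees a stable coverage floor for the configurations we care about.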
In general, we are hovering around 88% coverage. These fluctuations are relatively minor and in no way account for the other 12%. I would roughly estimate that we could quite easily gain another 5% by ensuring more components of the system are tested. I estimate the remaining 5% is testing the various branches of `if/else` and error validation, plus maybe ~1-2% of untestable code we do not need to be concerned with (abstract classes etc...).
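
For the error-validation branches specifically, parametrized tests that feed deliberately invalid inputs tend to be the cheapest wins, and pytest-cov's branch coverage (`--cov-branch`) shows which `if/else` paths remain unexercised. A rough sketch; `validate_memory_limit` is a made-up helper used only to illustrate the pattern:

```python
import pytest


def validate_memory_limit(memory_limit_mb):
    """Hypothetical validation helper, used here only to illustrate the pattern."""
    if not isinstance(memory_limit_mb, int):
        raise TypeError("memory_limit_mb must be an int")
    if memory_limit_mb <= 0:
        raise ValueError("memory_limit_mb must be positive")
    return memory_limit_mb


@pytest.mark.parametrize(
    "bad_value, expected_error",
    [
        ("4096", TypeError),  # wrong-type branch
        (-1, ValueError),     # out-of-range branch
    ],
)
def test_each_validation_branch(bad_value, expected_error):
    # Each parametrized case drives a different error branch, so a branch-aware
    # coverage run (e.g. pytest --cov=<package> --cov-branch) reports both paths.
    with pytest.raises(expected_error):
        validate_memory_limit(bad_value)
```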
Please check out our contribution guide, pytest-coverage, and the code coverage statistics reported from our automatic unit tests if you'd like to get started!