Test Refactor #8185

Open
@ericspod

Description

The tests directory has grown large over time as we've added new components with their test cases. It's absolutely a good thing to have thorough testing, but as an almost-flat directory it's hard to find things and cumbersome to view so many files in IDEs. Further, there are likely many areas where we can refactor to reduce duplicate code and introduce more helper routines for common tasks. I would suggest a few things to improve our tests:

  • Break the contents of tests into subdirectories mirroring those in the monai directory: tests for transforms would go under transforms, those for networks under networks, etc. More directory structure may be needed under these as well, but this doesn't need to be overcomplicated.
  • Common patterns and ways of generating test data can be factored out into utils.
  • A common pattern is to generate test cases for use with parameterized in deeply nested for loops, e.g.:
TEST_CASES = []
for arg1 in [2, 4]:
    for arg2 in [8, 16]:
        TEST_CASES.append([{"arg1": arg1, "arg2": arg2}, arg1, arg2])

A simple routine for taking the Cartesian product over a dictionary of value lists can reduce this code significantly (a usage sketch with parameterized follows this list):

from itertools import product

def dict_product(**items):  # should be put in core utils somewhere
    # yield one dict per combination of the given value lists
    keys = items.keys()
    values = items.values()
    for pvalues in product(*values):
        yield dict(zip(keys, pvalues))
...
TEST_CASES = [[d, d["arg1"], d["arg2"]] for d in dict_product(arg1=[2, 4], arg2=[8, 16])]
  • A number of tests use large data items or perform quite a few iterations of training. These are slow to run, so it would be good to go through the slower tests and see whether they can be sped up without losing testing value.
  • Similarly, how synthetic data is created should be re-evaluated to see what commonality can be factored out and standardised (a possible helper is sketched at the end of this issue).
  • Many tests are silent while others produce output whether they pass or not. My feeling has always been that tests should be silent unless they fail, and then they should be loud about why. This keeps clutter down so it's clear when tests are passing, and when they don't their output is easier to spot and understand. This is maybe personal taste, but should we rethink this behaviour?
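
For the dict_product pattern above, here is a minimal sketch of how the generated cases could feed parameterized.expand in a unittest-based test; the test class, method, and argument names are made up for illustration:

from itertools import product
import unittest

from parameterized import parameterized

def dict_product(**items):  # as above; would live in a shared utils module
    for pvalues in product(*items.values()):
        yield dict(zip(items.keys(), pvalues))

# each case is [kwargs, expected_arg1, expected_arg2]
TEST_CASES = [[d, d["arg1"], d["arg2"]] for d in dict_product(arg1=[2, 4], arg2=[8, 16])]

class TestDictProduct(unittest.TestCase):
    @parameterized.expand(TEST_CASES)
    def test_args(self, kwargs, expected_arg1, expected_arg2):
        # the case dict carries the same values as the expected arguments
        self.assertEqual(kwargs["arg1"], expected_arg1)
        self.assertEqual(kwargs["arg2"], expected_arg2)

if __name__ == "__main__":
    unittest.main()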

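For the synthetic data point, a rough sketch of a shared helper that tests could import instead of building images ad hoc; create_test_image_2d is MONAI's existing generator in monai.data, while the helper name, defaults, and channel handling are assumptions for illustration:

import numpy as np
import torch

from monai.data import create_test_image_2d

def make_test_image_and_seg(size=(64, 64), num_seg_classes=2, seed=0):
    # generate a reproducible (image, segmentation) pair and add a channel dimension
    rs = np.random.RandomState(seed)
    im, seg = create_test_image_2d(*size, num_seg_classes=num_seg_classes, random_state=rs)
    return torch.as_tensor(im[None]), torch.as_tensor(seg[None])

Keeping seeds and sizes in one place like this would also help the slow-test review, since data sizes could then be tuned centrally.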