
Current state of Partialtesting #8

@MGough

I've been using partialtesting and wanted to raise a few issues and open some discussion here, as it's rather quiet. I think there are some points people should be aware of and consider before using this tool. It would be interesting to know if anyone is using it (or similar alternatives) on larger projects.

Here are the things I've noticed:

Non-python files triggering full test run
This is documented: changes to non-project files result in a full test run, and there's no way to ignore certain files. Should a config file change trigger a full test run? Probably. Should a README change trigger one? Probably not. It would be useful to let the user specify which files should cause a full run.
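To make the suggestion concrete, here is a minimal sketch of the kind of check being proposed: a user-supplied set of "safe" patterns whose changes should not force a full run. The patterns below are hypothetical examples for illustration, not anything partialtesting supports today.

```shell
# Decide whether a changed non-Python file should force a full test run.
# The pattern list (*.md, *.rst, docs/*) is a hypothetical user config.
needs_full_run() {
  case "$1" in
    *.md|*.rst|docs/*) return 1 ;;  # docs-only change: partial run is fine
    *)                 return 0 ;;  # e.g. setup.cfg: fall back to a full run
  esac
}

needs_full_run "README.md" && echo "full run" || echo "partial run"   # partial run
needs_full_run "setup.cfg" && echo "full run" || echo "partial run"   # full run
```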

Can't ignore specific tests
Similar to the above: if I have some tests which aren't part of my regular test suite (perhaps integration tests that run against a specific environment), there's no easy way to instruct partialtesting to ignore them.

CI Pipeline integration
This is always going to be tricky. If two PRs with some overlap are opened and both pipelines pass, then when one is merged into master the coverage file there changes, and the partial test result in the unmerged PR is no longer valid.

Partial/Full/No Test
Whether to run a partial test, a full test, or no test is signalled by one of the following:

  • No file created
  • Empty file created
  • File with contents created

It would be much clearer and more consistent to handle if a file were always created, containing a known, consistent format such as JSON that could be parsed with jq.
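A minimal sketch of what that could look like in a CI script. The schema below (a `"mode"` key plus a `"tests"` list) is entirely hypothetical; the point is that one file with one format covers all three cases:

```shell
# Pretend partialtesting always wrote a JSON result file like this:
cat > result.json <<'EOF'
{"mode": "partial", "tests": ["tests/test_foo.py", "tests/test_bar.py"]}
EOF

# CI then branches on a single, well-defined field instead of probing
# for a missing or empty file.
mode=$(jq -r '.mode' result.json)
case "$mode" in
  full)    echo "run entire suite" ;;
  partial) jq -r '.tests[]' result.json ;;  # print the selected tests
  none)    echo "nothing to run" ;;
esac
```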
