Support permanently skipping tests on a specific system #331
Proposal
Problem statement
Sometimes it is not possible to run a specific test on a system for environmental reasons. Such tests can be skipped with `cargo test -- --skip some_test_name`, but this must be specified every time tests are run.
Motivating examples or use cases
`cargo` has a test, `cargo_metadata_non_utf8`, that tests it can be used while in a non-UTF-8 directory. That test is already `cfg`-gated because:
> Creating non-utf8 path is an OS-specific pain, so let's run this only on linux, where arbitrary bytes work.
But even this claim of working on Linux only holds for some Linux filesystems; attempting to run this test when the target-dir is on ZFS with `utf8only=on` fails:
failed to mkdir_p /home/nemo157/sources/cargo/target/tmp/cit/t1946/foo/�/./src: Invalid or incomplete multibyte or wide character (os error 84)
This is a fundamental restriction of the system, which means this test will never work here and should always be skipped.
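For context, here is a minimal sketch (not the actual cargo test) of why this setup is so environment-dependent: building a path from non-UTF-8 bytes requires Unix-only APIs, and even then the filesystem itself may refuse the name, as ZFS with `utf8only=on` does.

```rust
use std::ffi::OsStr;
use std::io;
use std::path::{Path, PathBuf};

// Illustration only: create a directory whose name cannot be represented as
// UTF-8. This needs the Unix-only OsStrExt, and can still fail at the
// filesystem level, e.g. EILSEQ (os error 84) on ZFS with utf8only=on.
#[cfg(unix)]
fn make_non_utf8_dir(parent: &Path) -> io::Result<PathBuf> {
    use std::os::unix::ffi::OsStrExt;
    // 0xFF is never valid UTF-8, so this name has no UTF-8 representation.
    let dir = parent.join(OsStr::from_bytes(&[0xFF]));
    std::fs::create_dir(&dir)?;
    Ok(dir)
}
```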
Solution sketch
Extend `libtest`'s `--skip` CLI arg with a `RUST_TEST_SKIP` environment variable: a comma-separated list of filters that is appended to those parsed from the args, allowing the skip list to be set permanently for a working tree via utilities like direnv.
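A rough sketch of what the env-var handling could look like; the names here are illustrative rather than the real libtest internals (the linked branch has the actual change):

```rust
// Sketch: merge the proposed RUST_TEST_SKIP variable into the skip filters
// collected from repeated `--skip` CLI args.
fn collect_skip_filters(mut skip: Vec<String>) -> Vec<String> {
    if let Ok(var) = std::env::var("RUST_TEST_SKIP") {
        // Comma-separated list, e.g. RUST_TEST_SKIP=cargo_metadata_non_utf8,other_test
        skip.extend(
            var.split(',')
                .map(str::trim)
                .filter(|s| !s.is_empty())
                .map(String::from),
        );
    }
    skip
}
```

With direnv, a tree-local `.envrc` setting `RUST_TEST_SKIP=cargo_metadata_non_utf8` would then make every `cargo test` run in that tree skip the test without extra CLI args.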
Alternatives
Extend `libtest`'s in-process API to allow this test to mark itself as unsupported when it hits this error code during setup. I believe that would be a good solution to have as well, and more useful in this specific case, but both approaches have their uses, and adding the environment variable is easier to get done first.
Links and related work
rust-lang/rust@master...Nemo157:rust:rust-test-skip-env-var
What happens now?
This issue contains an API change proposal (or ACP) and is part of the libs-api team feature lifecycle. Once this issue is filed, the libs-api team will review open proposals as capability becomes available. Current response times do not have a clear estimate, but may be up to several months.
Possible responses
The libs team may respond in various different ways. First, the team will consider the problem (this doesn't require any concrete solution or alternatives to have been proposed):
- We think this problem seems worth solving, and the standard library might be the right place to solve it.
- We think that this probably doesn't belong in the standard library.
Second, if there's a concrete solution:
- We think this specific solution looks roughly right, approved, you or someone else should implement this. (Further review will still happen on the subsequent implementation PR.)
- We're not sure this is the right solution, and the alternatives or other materials don't give us enough information to be sure about that. Here are some questions we have that aren't answered, or rough ideas about alternatives we'd want to see discussed.