
src/goTestExplorer: improve handling of tests in files with build tags #1673

Open
firelizzard18 opened this issue Aug 9, 2021 · 10 comments
Labels
go-test issues related to go test support (test output, test explorer, ...)

Comments

@firelizzard18
Contributor

This is related to a feature added by #1590, which has not been merged as of the time of writing.

Is your feature request related to a problem? Please describe.
Tests may be defined in an arch- or OS-specific file, or in a file with build tags; such tests are only compiled and run when the corresponding build constraints are satisfied. The test explorer identifies tests semantically: any func TestXxx(*testing.T) within an xxx_test.go file in the workspace is detected as a test. However, the test explorer has no knowledge of build tags, so tests within a conditionally built file may be detected but never run.
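For example, a file like the following (a minimal sketch; the "integration" tag and package name are illustrative) is detected by the semantic scan, but go test only compiles it when invoked with -tags integration:

    //go:build integration
    // +build integration

    package pkg

    import "testing"

    // Detected by the test explorer, but excluded from the build
    // unless the "integration" build tag is set.
    func TestIntegration(t *testing.T) {
        t.Log("runs only with: go test -tags integration")
    }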

Describe the solution you'd like
I am opening this issue to discuss options. At the moment, I do not see an ideal solution.

@Psykar also suggested setting these tests to 'Skipped'. Go tests can be skipped via (*testing.T).Skip, so IMO overloading that state is not a good default UX, though it would be reasonable to make that configurable. However, determining what tests should be 'skipped' in this way is non-trivial (see additional context).

microsoft/vscode#129456 discusses adding tags. This may provide a better mechanism for handling conditionally built tests.

Additional context
@Psykar mentioned using a distinct icon/state to indicate "this test was not run because of build tags". This is not an option, as test states are constrained by VSCode.

Tests are executed with go test -json -run ^TestXxx|TestYyy|...$ ./pkg, which emits events such as { "Action": "run", "Test": "TestXxx", ... }. go test provides no indication that a test does not exist; the only signal is the absence of events. Thus, go test cannot be used to differentiate between "there is no TestXxx" and "TestXxx requires additional build tags".
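For illustration, if TestYyy is behind an unsatisfied build tag, a run of both tests might produce a stream like the following (timestamps and the package path are placeholders); TestYyy simply never appears:

    {"Time":"2021-08-09T12:00:00Z","Action":"run","Package":"example.com/pkg","Test":"TestXxx"}
    {"Time":"2021-08-09T12:00:00Z","Action":"output","Package":"example.com/pkg","Test":"TestXxx","Output":"=== RUN   TestXxx\n"}
    {"Time":"2021-08-09T12:00:01Z","Action":"pass","Package":"example.com/pkg","Test":"TestXxx","Elapsed":0.01}
    {"Time":"2021-08-09T12:00:01Z","Action":"pass","Package":"example.com/pkg","Elapsed":0.05}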

@Psykar

Psykar commented Aug 9, 2021

I guess another option is to raise the possibility of a custom status upstream, but it doesn't look like that would fit very well with the current architecture.
Another option would be to hide them (although it sounds like that would require attempting to run them first, which isn't ideal).

In terms of detection, is it possible to compare expected test runs against reported test runs and then do something with the diff? I'm guessing the difficulty there is that incoming test events aren't matched up to a specific 'test request' but arrive asynchronously?

@connor4312

FYI, I expect tags to land, at least in the proposed API, this week.

@firelizzard18
Contributor Author

I'm not sure the extension can control whether a given test is hidden (as opposed to just completely omitting it).

Once the go test invocation is complete, I can detect tests that didn't run by comparing the list of tests that generated events against the list of tests I expected. So I could add some logic for, "If a test was requested but did not generate any events, mark it skipped" (or some other state). There's also an error field on the test item, which shows up under the item in the UI.

I expect tags will be helpful for this case, but run profiles don't map well to Go build tags. I can use tags to 'disable' OS-specific tests when on another OS, etc. But creating a separate run profile for every possible permutation of custom tags would be unreasonable. What I'll probably do is add configuration so the user can define profiles and their associated build tags. Something like:

{
  "go.testProfiles": {
    "foo_bar": { "label": "Foo Bar", "buildTags": ["foo", "bar"] }
  }
}

Then, if I can find a way to determine whether a file will be compiled for a given set of tags, I can tag tests appropriately. Go 1.17 is introducing go:build directives, which are plain boolean expressions; it wouldn't be too hard to evaluate those directly. I do not want to write a parser and evaluator for +build directives, but it may be feasible to write a small Go program to convert one to the other. Or, it may be feasible to write a program that checks, "Given this set of tags, will this file be compiled?".
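As a minimal sketch of that last idea (the tagcheck name and CLI shape are hypothetical; it assumes the standard go/build package is an acceptable dependency), Context.MatchFile already evaluates a file's build constraints, including //go:build and // +build lines and GOOS/GOARCH filename suffixes, against a set of tags:

    // tagcheck answers "given this set of tags, will this file be compiled?"
    package main

    import (
        "fmt"
        "go/build"
        "os"
        "path/filepath"
    )

    func main() {
        // Usage: tagcheck <file> [tag ...]
        file := os.Args[1]
        ctxt := build.Default
        ctxt.BuildTags = os.Args[2:] // tags to evaluate against

        // MatchFile reports whether the file's build constraints
        // (//go:build, // +build, and GOOS/GOARCH filename suffixes)
        // are satisfied by the context.
        match, err := ctxt.MatchFile(filepath.Dir(file), filepath.Base(file))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("%s compiled with tags %v: %v\n", file, ctxt.BuildTags, match)
    }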

If a test is tagged and no run profile matches that tag, the test should not be runnable, meaning the run icon will not show up next to the test. You could also make a (VSCode) feature request for some other UI indication, such as graying out the test.

@connor4312

connor4312 commented Aug 9, 2021

You could also make a (VSCode) feature request for some other UI indication, such as graying out the test.

I'm hesitant to do this since publish-only controllers can exist and should not have their tests greyed out. However, if you're interested, please do open an issue for further discussion; I won't hijack this one too much 😛

@firelizzard18
Contributor Author

@connor4312 I appreciate the feedback 🙏 More context means better solutions, generally.

@Psykar

Psykar commented Aug 10, 2021

But creating a separate run profile for every possible permutation of custom tags would be unreasonable.

Completely agree. We have some random build tags for integration tests that we don't want to run on every go test ./..., for example.

Probably what I'll do is add configuration so the user can define profiles and tags.

Seems a good solution, although there already is go.buildTags.
If you have a way of detecting what will or won't compile given go.buildTags, that would solve my use case at least, without the need for separate profiles (as I wouldn't be running integration tests from VS Code anyway).

meaning the run icon will not show up next to the test

Works for me; it makes it obvious enough what's going on :)

@firelizzard18
Contributor Author

A comment from @hyangah on the test explorer CL:

[Question:] We plan to replace GoDocumentSymbolProvider with gopls' document symbol provider. As you know, gopls doesn't currently provide info about a document if it needs a different GOOS/GOARCH or build tags. Could this potentially be an issue when we just replace GoDocumentSymbolProvider?

@Psykar, how would it affect your experience if tests 'hidden' behind unsatisfied build tags simply did not appear in the test explorer?

@Psykar

Psykar commented Aug 11, 2021

how would it affect your experience if tests 'hidden' behind unsatisfied build tags simply did not appear in the test explorer?

Fine by me, and it matches the experience with other parts of VS Code (e.g. if you don't have the build tags set for a file, a lot of the formatting / syntax-error checking doesn't work either).

stamblerre modified the milestones: Untriaged, Backlog Aug 12, 2021
firelizzard18 mentioned this issue Aug 18, 2021
@firelizzard18
Contributor Author

Test tags (microsoft/vscode#129456) are available in Insiders. However, I'm not sure it's worth the effort to build features around test tags, since that work will become useless once symbol processing is switched to gopls. Theoretically gopls will support build tags in the future, but that may be a while.

hyangah added the go-test label Dec 5, 2023
@firelizzard18
Contributor Author

#3523 will migrate the test explorer to use gopls. It's possible I will include build-tag support in that work, but probably not. It should be easier to do with test discovery handled by gopls, but it might require a gopls change, such as including build tags in the gopls.packages response.
