
Publish test report #5030

Merged
yurishkuro merged 14 commits into jaegertracing:main from albertteoh:test-report on Dec 24, 2023

Conversation

albertteoh
Contributor

Which problem is this PR solving?

Description of the changes

How was this change tested?

Checklist

Albert Teoh added 8 commits December 24, 2023 16:30
@albertteoh albertteoh added the changelog:ci Change related to continuous integration / testing label Dec 24, 2023

codecov bot commented Dec 24, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (9bd8317) 95.59% compared to head (e5827a6) 95.60%.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5030   +/-   ##
=======================================
  Coverage   95.59%   95.60%           
=======================================
  Files         319      319           
  Lines       18794    18794           
=======================================
+ Hits        17967    17968    +1     
  Misses        663      663           
+ Partials      164      163    -1     
| Flag | Coverage Δ |
| --- | --- |
| cassandra-3.x | 25.61% <ø> (ø) |
| cassandra-4.x | 25.61% <ø> (ø) |
| elasticsearch-5.x | 19.88% <ø> (ø) |
| elasticsearch-6.x | 19.88% <ø> (+0.01%) ⬆️ |
| elasticsearch-7.x | 20.02% <ø> (+0.01%) ⬆️ |
| elasticsearch-8.x | 20.09% <ø> (-0.02%) ⬇️ |
| grpc-badger | 19.50% <ø> (ø) |
| kafka | 14.10% <ø> (ø) |
| opensearch-1.x | 20.00% <ø> (-0.02%) ⬇️ |
| opensearch-2.x | 20.02% <ø> (+0.01%) ⬆️ |
| unittests | 93.34% <ø> (+<0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@albertteoh
Contributor Author

albertteoh commented Dec 24, 2023

I'd like to try publishing test reports for forks by following these instructions: https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.12.0/README.md#support-fork-repositories-and-dependabot-branches.

@albertteoh albertteoh marked this pull request as ready for review December 24, 2023 16:53
@albertteoh albertteoh requested a review from a team as a code owner December 24, 2023 16:53
.github/workflows/ci-unit-tests.yml (review thread outdated, resolved)
Makefile (review thread outdated, resolved)
.github/workflows/ci-unit-tests-go-tip.yml (review thread outdated, resolved)
@albertteoh
Contributor Author

I'd like to try publishing test reports for forks by following these instructions: https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.12.0/README.md#support-fork-repositories-and-dependabot-branches.

I'll investigate/address this in a separate PR.

@yurishkuro
Member

I'll investigate/address this in a separate PR.

I am not sure I follow what that's about. This PR is from a fork and the test report was uploaded (albeit quite minimalistic; I am curious to see the failures) -- https://github.com/jaegertracing/jaeger/actions/runs/7315855796?pr=5030#summary-19929936699

@@ -45,3 +61,13 @@ jobs:
 flags: unittests
 fail_ci_if_error: true
 token: ${{ env.CODECOV_TOKEN }}
+
+  event_file:
Member

what does this do?

Contributor Author

Oops, good catch. That was me experimenting with https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.12.0/README.md#support-fork-repositories-and-dependabot-branches; it was an accidental, half-baked change. Reverted in 396fb72 and aa357c1.

That may have been why we could see the test report in https://github.com/jaegertracing/jaeger/actions/runs/7315855796?pr=5030#summary-19929936699, but I'll check once the test CI job is done.
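
For context, the "event_file" piece of that README's fork-support recipe is a job that simply uploads the workflow's event payload as an artifact, so that a later, privileged workflow can identify the originating PR. Below is a minimal sketch of such a job; the job name, action version, and placement under jobs: are illustrative assumptions, not what this PR actually contained.

```yaml
# Fragment that would sit under `jobs:` in the pull_request-triggered
# test workflow. Names and versions here are illustrative assumptions.
  event_file:
    name: Event File
    runs-on: ubuntu-latest
    steps:
      - name: Upload event file
        uses: actions/upload-artifact@v4
        with:
          name: Event File
          path: ${{ github.event_path }}
```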

Albert Teoh added 2 commits December 25, 2023 04:17
@albertteoh
Contributor Author

albertteoh commented Dec 24, 2023

I am not sure I follow what that's about.

The reason I wanted to explore that was this message in the publish test report job logs:

This action is running on a pull_request event for a fork repository. Pull request comments and check runs cannot be created, so disabling these features. To fully run the action on fork repository pull requests, see https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.12.0/README.md#support-fork-repositories-and-dependabot-branches

It seems this relates to an additional summary of the test report in the PR itself: https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.12.0/README.md#pull-request-comment

as well as a test-report summary in the build checks: https://github.com/EnricoMi/publish-unit-test-result-action/blob/v2.12.0/README.md#commit-and-pull-request-checks.

Is that something you think we should continue exploring?

It just feels like the test report (in its current state) is a little hidden because one has to click into the CI job details to see the summary.
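
For reference, the second half of the pattern in that README is a separate workflow, triggered by workflow_run, that runs in the context of the base repository and therefore has permission to create check runs and PR comments even for fork PRs. The sketch below illustrates that shape only; the workflow names, artifact paths, result-file pattern, and action versions are assumptions, not this repository's actual configuration.

```yaml
# Hedged sketch of a workflow_run-triggered "publish test report" workflow.
# All names, paths, and versions below are illustrative assumptions.
name: Publish test report

on:
  workflow_run:
    workflows: ["Unit Tests"]   # assumed name of the pull_request-triggered test workflow
    types: [completed]

permissions:
  checks: write
  pull-requests: write

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Download artifacts from the triggering run
        uses: actions/download-artifact@v4
        with:
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          path: artifacts

      - name: Publish test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        with:
          commit: ${{ github.event.workflow_run.head_sha }}
          event_file: "artifacts/Event File/event.json"
          event_name: ${{ github.event.workflow_run.event }}
          files: "artifacts/**/*.xml"   # placeholder; actual result format/paths would differ
```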

Albert Teoh and others added 2 commits December 25, 2023 04:43
@yurishkuro
Member

Is that something you think we should continue exploring?

So the alternative (improvement) is that it would post test results as a comment in the PR? I think it's useful if it can be conditional on test failures: when there are no failures, it's fine to only have the report in the workflow summary, to reduce noise. But it's not a strong opinion.
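
If the comment route were pursued, conditioning it on failures looks expressible through the action's documented comment_mode input. A small, hedged fragment follows; the file pattern is a placeholder and this is not verified against this repository's workflows.

```yaml
# Hedged fragment: comment_mode is a documented input of the action;
# "failures" should limit PR comments to runs with failing tests.
      - name: Publish test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        with:
          files: "artifacts/**/*.xml"   # placeholder pattern
          comment_mode: failures        # only comment when tests fail
```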

@yurishkuro yurishkuro enabled auto-merge (squash) December 24, 2023 17:52
@yurishkuro yurishkuro merged commit cf917dd into jaegertracing:main Dec 24, 2023
37 checks passed
@albertteoh albertteoh deleted the test-report branch December 24, 2023 17:56
@albertteoh
Contributor Author

So the alternative (improvement) is that it would post test results as a comment in the PR?

Yeah, that's my understanding of the feature. I think it may even be useful on test success because it also reports how many tests were added or removed in the current PR.

@albertteoh
Contributor Author

Thanks for the review!

@albertteoh albertteoh mentioned this pull request Dec 24, 2023
@@ -154,7 +154,7 @@ index-rollover-integration-test: docker-images-elastic

 .PHONY: cover
 cover: nocover
-	$(GOTEST) -tags=memory_storage_integration -timeout 5m -coverprofile cover.out ./...
+	$(GOTEST) -tags=memory_storage_integration -timeout 5m -coverprofile cover.out ./... > test-results.json
Member

@albertteoh what makes it a JSON file? If I change it to | tee test-results.json then I see the standard text output:

=== RUN   TestRegisterStaticHandler/basePath=/
=== RUN   TestRegisterStaticHandler/basePath=/jaeger
--- PASS: TestRegisterStaticHandler (0.03s)
    --- PASS: TestRegisterStaticHandler/basePath= (0.01s)

If it is meant to be standard output, then I would suggest changing to | tee because it makes the CI logs show the test progress.

Contributor Author

The test-ci make target defines GOTEST as GOTEST := $(GOTEST_QUIET) -json, so it's the extra -json flag that makes the output JSON: https://github.com/jaegertracing/jaeger/blob/main/Makefile#L460.

So the full command would resemble (with | tee):

go test -race -json -tags=memory_storage_integration -timeout 5m -coverprofile cover.out ./... | tee test-results.json

Adding | jq . to the end of that, the output looks like:

{
  "Time": "2023-12-27T21:40:21.085452+11:00",
  "Action": "pass",
  "Package": "github.com/jaegertracing/jaeger/storage/spanstore/metrics",
  "Elapsed": 3.848
}

It's a good callout though, because I hadn't considered the impact of this change on the stdout display of test results, which is useful for debugging unit test runs. Maybe test summaries are not worth the trade-off of harder-to-read test output?

What do you think?

Member

I see. I think jq would be overkill, but changing to tee is good. The test-ci target is not really meant to be run locally, but with the -json flag the output is still readable:

{"Time":"2023-12-27T10:16:06.335612-05:00","Action":"start","Package":"github.com/jaegertracing/jaeger"}
{"Time":"2023-12-27T10:16:06.72618-05:00","Action":"run","Package":"github.com/jaegertracing/jaeger","Test":"TestDummy"}

(btw, jq wouldn't work at all because ^ is not well-formed JSON; it has no top-level container).

Contributor Author

Cool, I've created a PR to use | tee: #5050.

(btw, jq wouldn't work at all because ^ is not well-formed JSON; it has no top-level container)

Running it locally, jq seemed to be okay with it; it just outputs separate JSON blobs:

go test -race -json -tags=memory_storage_integration -timeout 5m -coverprofile cover.out ./... | tee test-results.json | jq .
{
  "Time": "2023-12-28T07:02:35.584992+11:00",
  "Action": "output",
  "Package": "github.com/jaegertracing/jaeger",
  "Test": "TestDummy",
  "Output": "=== RUN   TestDummy\n"
}
{
  "Time": "2023-12-28T07:02:35.585097+11:00",
  "Action": "output",
  "Package": "github.com/jaegertracing/jaeger",
  "Test": "TestDummy",
  "Output": "--- PASS: TestDummy (0.00s)\n"
}

Member

There's also the question of pipe buffering; I think it gets worse, so we'd see the output less frequently (but maybe I was just impatient). Anyway, I think one line is fine for now. I am more concerned with not failing the rest of the workflow when test results cannot be uploaded for whatever reason.

Contributor Author

I am more concerned with not failing the rest of the workflow when test results cannot be uploaded for whatever reason.

There are no upload/download steps in our test summary reporting workflows at the moment. You're right though; I vaguely recall codecov upload failures may have failed the rest of the workflow in the past.

If we do want test summary reports on our PRs as comments and in the check runs, then artifact uploads and downloads would be required. Is this feature desirable enough to warrant introducing uploads? If not, I can hit the pause button on my WIP.

Member

I don't feel like it's worth the hassle. The fact that test results are already showing in the summary is good, although if all it does is show the count (not the details of the failures) then it's pretty useless. Adding comments and checks is nice to have, but not at the expense of CI stability, which has suffered in the past few days due to these changes.

@yurishkuro
Member

@albertteoh I just had a test run fail and the summary only shows this:
[screenshot of the workflow summary for the failed run]
This is pretty useless, because JSON formatting of the output makes it impossible to inspect the test failures and this summary doesn't provide any details. I resorted to commenting out the JSON format so that I can at least see readable failures.

Do we still need to do something extra? I would've thought the permissions issue is solved since the action does post to the Summary.

@albertteoh
Contributor Author

This is pretty useless, because JSON formatting of the output makes it impossible to inspect the test failures and this summary doesn't provide any details. I resorted to commenting out the JSON format so that I can at least see readable failures.

I tried forcing a test failure on my WIP and here's the output PR comment: albertteoh#135 (comment)

Following that link takes the user to: https://github.com/albertteoh/jaeger/runs/20037918781

[screenshot of the linked check run's test report]

@yurishkuro
Member

+1. Confusing workflow, but ok.

@albertteoh
Contributor Author

+1. Confusing workflow, but ok.

PTAL #5035. Note this involves upload and download actions as discussed earlier.

yurishkuro pushed a commit that referenced this pull request Jan 2, 2024
## Which problem is this PR solving?
- Relates to #2859
- Continues from #5030

## Description of the changes
- Enables a test report summary in the PR comment and check summary.

## How was this change tested?
- Confirm upload job is running successfully in my fork:
https://github.com/albertteoh/jaeger/actions/runs/7316648725.
- Can only know if it's working once the PR is created.

## Checklist
- [x] I have read
https://github.com/jaegertracing/jaeger/blob/master/CONTRIBUTING_GUIDELINES.md
- [x] I have signed all commits

---------

Signed-off-by: Albert Teoh <albert@packsmith.io>
Co-authored-by: Albert Teoh <albert@packsmith.io>