EPIC: Provide AQA Test metrics per release #5121
Thanks @jiekang! Adding some comments and linking some relevant issues:
Noting an important distinction: some of the metrics under discussion will track the health of the underlying infrastructure and the availability of machine resources, rather than specifically the "health" of the test suites being run. We will aim to differentiate such information, in order to know where best to course-correct and where to apply improvement efforts. Related to this is the enhancement issue for RSR:
Additional metrics worth tracking (not necessarily to be considered under this issue, just jotting some down so as not to lose them):
Yes, excellent point. There is some cross-boundary overlap, as test execution success is sometimes closely linked to a stable and consistent infrastructure configuration. I've updated the original comment to note this distinction, as I think we should understand the health of the overall system.
Thanks for noting all these; I can see value in all of them!
Related to test execution stats gathering: https://github.com/smlambert/aqastats
Related to differentiating between an infra issue and a TBD issue (one that needs more triage to figure out whether it is a product, test, or infra issue):
Additional notes:
Related issue:
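Purely as an illustration of that product|test|infra|TBD distinction, a failure record could carry a triage bucket like the sketch below. This is a hypothetical Python model; the enum values and field names are my assumptions, not taken from aqastats or TRSS.

```python
from dataclasses import dataclass
from enum import Enum


class TriageCategory(Enum):
    """Hypothetical triage buckets; TBD means more triage is needed."""
    PRODUCT = "product"
    TEST = "test"
    INFRA = "infra"
    TBD = "tbd"


@dataclass
class FailureRecord:
    """One failed test run, tagged with where the fault appears to lie."""
    test_name: str
    platform: str
    machine: str
    category: TriageCategory = TriageCategory.TBD  # untriaged by default
```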
As discussed in the PMC call today, I will create a new repo to encompass moving over the scorecards scripts (from smlambert/scorecard, plus an adapted version of the scripts from smlambert/aqastats) and the new metrics we will design and intend to add for all Adoptium sub-projects, as shown in the Adoptium project hierarchy below:
A first draft of the data to be collected per release:
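Not the draft itself, but one hypothetical shape such per-release data could take, keyed by platform; all field names here are illustrative assumptions rather than an agreed schema:

```python
from dataclasses import dataclass, field


@dataclass
class PlatformStats:
    """Hypothetical per-platform counts for one release."""
    executed: int = 0
    passed: int = 0
    failed: int = 0
    disabled: int = 0
    machines_available: int = 0


@dataclass
class ReleaseStats:
    """All per-platform stats for a single release."""
    release: str                                   # e.g. a release tag string
    platforms: dict[str, PlatformStats] = field(default_factory=dict)
```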
Clarifying: I think there is another set of data that has been discussed for gathering that doesn't fit into the same bucket, but is definitely still under consideration, e.g. test effectiveness, related-issue reporting (age, etc.), repository activity, contribution statistics, etc.
Also, immediately after posting: I think the Platform and Version levels of the hierarchy should be swapped for the Machines Available data to make sense.
:) Appreciate your initial care and thoughts on this feature, @jiekang! Thank you!
So with the hierarchy flipped it is:
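As an illustrative sketch only (the names are assumptions), a Platform-above-Version layout lets Machines Available sit at the platform level, where it applies to every version tested on that platform:

```python
# Hypothetical nested layout: platform -> version -> counts, with
# machines_available attached at the platform level, where it applies.
stats = {
    "x86-64_linux": {
        "machines_available": 12,
        "versions": {
            "jdk8":  {"passed": 940, "failed": 3},
            "jdk21": {"passed": 980, "failed": 1},
        },
    },
}
```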
Just noting the code is in development here: https://github.com/jiekang/scorecard/tree/trss-statistics. It's now fully functional, with a diff command to compare between two releases (see the sketch below). Remaining items:
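I don't know the diff command's exact interface, so as a non-authoritative sketch under assumed input shapes, comparing two releases' per-platform counts might reduce to something like:

```python
def diff_releases(old: dict[str, dict], new: dict[str, dict]) -> dict[str, dict]:
    """Per-platform deltas between two releases' stats.

    Both inputs map platform -> {"passed": int, "failed": int, ...};
    a platform missing from one side is treated as all zeroes.
    """
    deltas: dict[str, dict] = {}
    for platform in old.keys() | new.keys():
        before, after = old.get(platform, {}), new.get(platform, {})
        deltas[platform] = {
            key: after.get(key, 0) - before.get(key, 0)
            for key in before.keys() | after.keys()
        }
    return deltas
```

Treating a platform missing from one side as all zeroes keeps platforms that were added or dropped between releases visible in the diff, rather than silently skipping them.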
Adding ideas:
/assign
This issue tracks the efforts to provide more metrics on the AQAvit test runs on a per-release basis.
The project as a whole currently tracks some useful release metrics via scorecards by Shelley:
https://github.com/adoptium/adoptium/wiki/Adoptium-Release-Scorecards
https://github.com/smlambert/scorecard
The release scorecard data is useful to understand how well we are doing at meeting release targets and how that is trending across releases.
It would be nice to similarly provide data for test runs to help track the "health" of our test suite execution across releases (the health of the tests and their execution, which can relate to the underlying infrastructure). As a connected note, this is also a piece of the larger end goal of highlighting opportunities to reduce the burden on triage engineers (e.g. by highlighting machine-specific failures across releases in a different manner).
I imagine this to involve enhancing the existing Release Summary Report (RSR), which already contains most of the data (whether in the report itself or via links), and presenting it in a manner that connects the state across releases.
This proposal is open to all feedback. To start the discussion, I propose tracking AQAvit test execution data per release, formatted so that platform state across releases is easy to understand. This would contain:
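As a hypothetical illustration of that formatting (the layout and cell contents are my assumptions, not the proposed data set), each platform could be one row with one column per release:

```python
def render_platform_table(releases: dict[str, dict[str, str]]) -> str:
    """Rows are platforms, columns are releases, cells are a summary string."""
    names = list(releases)
    platforms = sorted({p for per_platform in releases.values() for p in per_platform})
    header = "platform".ljust(24) + "".join(name.ljust(16) for name in names)
    rows = [
        platform.ljust(24)
        + "".join(releases[name].get(platform, "-").ljust(16) for name in names)
        for platform in platforms
    ]
    return "\n".join([header, *rows])


# Example with made-up "passed/failed" summaries per platform and release:
print(render_platform_table({
    "release-A": {"x86-64_linux": "980/3", "aarch64_mac": "975/8"},
    "release-B": {"x86-64_linux": "985/1"},
}))
```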