change min/max to overall_min/overall_max + update comparison results publisher #687
Conversation
Please consider adding (or updating) a test as well.
osbenchmark/aggregator.py (Outdated)

        weighted_metrics[metric]['overall_max'] = max(item_values)
    elif item_key == 'median':
        weighted_sum = sum(value * iterations for value in item_values)
        total_iterations = iterations * len(item_values)
Isn't this already available from above and does not need to be re-computed?
True, I went ahead and moved this calculation outside of the loop.
osbenchmark/aggregator.py (Outdated)

    else:
        weighted_sum = sum(value * iterations for value in item_values)
        total_iterations = iterations * len(item_values)
        weighted_metrics[metric][item_key] = weighted_sum / total_iterations
The `elif` is probably not needed here, since the code in the two arms is the same?
+1, we can just have a comment in the else statement, right after the else, stating what cases this is for:

    else:
        # for items like median
Got it, I added a comment to clarify what cases this is for.
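For context, a minimal sketch of the shape these two suggestions point toward; the names (weighted_metric, metric_items, iterations) are assumptions based on the snippets above, not the exact aggregator code:

```python
# Hypothetical sketch, not the actual osbenchmark/aggregator.py implementation.
# The denominator depends only on the iteration count and the number of aggregated
# executions, so it is computed once per metric instead of inside every branch,
# and the duplicated elif/else arms collapse into a single commented else.
def weighted_metric(metric_items, iterations):
    weighted = {}
    # assumed: each item list holds one value per aggregated test execution
    total_iterations = iterations * len(next(iter(metric_items.values())))
    for item_key, item_values in metric_items.items():
        if item_key == 'overall_min':
            weighted[item_key] = min(item_values)
        elif item_key == 'overall_max':
            weighted[item_key] = max(item_values)
        else:
            # for items like median: weight each execution's value by its iterations
            weighted_sum = sum(value * iterations for value in item_values)
            weighted[item_key] = weighted_sum / total_iterations
    return weighted
```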
    @@ -464,16 +464,16 @@ def _write_results(self, metrics_table, metrics_table_console):
                data_plain=metrics_table, data_rich=metrics_table_console)

        def _publish_throughput(self, baseline_stats, contender_stats, task):
    -       b_min = baseline_stats.metrics(task)["throughput"]["min"]
    +       b_min = baseline_stats.metrics(task)["throughput"].get("overall_min") or baseline_stats.metrics(task)["throughput"]["min"]
It would probably be better to set the `overall_min` key prior, so dealing with a special case won't be necessary.
I think this logic is necessary, since the results publisher is also used for normal test executions, which will not contain 'overall' min/max values.
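A minimal sketch of the fallback pattern under discussion, written as a hypothetical helper rather than the actual results_publisher code:

```python
# Hypothetical helper, not part of results_publisher: aggregated results carry
# 'overall_min'/'overall_max', while a single test execution only carries 'min'/'max'.
def throughput_min(stats, task):
    throughput = stats.metrics(task)["throughput"]
    # fall back to 'min' when 'overall_min' is absent (normal, non-aggregated runs)
    return throughput.get("overall_min") or throughput["min"]
```

One subtlety of the `or` form is that an `overall_min` of 0 is falsy and would also trigger the fallback; an explicit key check (e.g. `"overall_min" in throughput`) would avoid that edge case, though a throughput minimum of exactly 0 is unlikely in practice.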
            b_unit = baseline_stats.metrics(task)["throughput"]["unit"]

    -       c_min = contender_stats.metrics(task)["throughput"]["min"]
    +       c_min = contender_stats.metrics(task)["throughput"].get("overall_min") or contender_stats.metrics(task)["throughput"]["min"]
This change was made in the ComparisonResultsPublisher, but should it also be made in the SummaryResultsPublisher?
I don't think so, since the summary results publisher isn't used by the aggregator class. Although if we wanted to publish results after aggregation, then this would be necessary.
Could you add tests to results_publisher to confirm the changes work?
Left a few comments
… publisher (Signed-off-by: Michael Oviedo <mikeovi@amazon.com>)
Force-pushed from 73d3486 to 07830c9
I added a test for this, but let me know if I should add more! I also updated the unit tests for the aggregator class.
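A hedged sketch of the kind of test the comment above asks for; the class and method names here are assumptions, not the actual test added in this PR:

```python
# Hypothetical unit test for the overall_min fallback; the real test in this PR
# may differ in structure and naming.
import unittest
from unittest.mock import Mock


class ThroughputFallbackTest(unittest.TestCase):
    def test_prefers_overall_min_when_present(self):
        stats = Mock()
        stats.metrics.return_value = {
            "throughput": {"overall_min": 95, "min": 100, "overall_max": 210, "max": 200, "unit": "ops/s"}
        }
        throughput = stats.metrics("index-task")["throughput"]
        self.assertEqual(95, throughput.get("overall_min") or throughput["min"])

    def test_falls_back_to_min_for_non_aggregated_results(self):
        stats = Mock()
        stats.metrics.return_value = {
            "throughput": {"min": 100, "max": 200, "unit": "ops/s"}
        }
        throughput = stats.metrics("index-task")["throughput"]
        self.assertEqual(100, throughput.get("overall_min") or throughput["min"])


if __name__ == "__main__":
    unittest.main()
```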
Force-pushed from 07830c9 to 8f45161
Signed-off-by: Michael Oviedo <mikeovi@amazon.com>
Force-pushed from 8f45161 to 16d3428
Description
This change updates the min/max values in aggregated results to reflect the min/max values across all test executions, rather than just an average of the per-execution min/max values. I also renamed them from `min` and `max` to `overall_min` and `overall_max` for clarity. Changes were also made to the comparison results publisher class, so that if an aggregated result with these new metrics is used in a comparison, the 'overall' min and max values can still be compared without issue.
Issues Resolved
#684
Testing
make test
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.