Add a developer document on running scheduler benchmarks #3192
Conversation
/assign @misterikkit
@guineveresaenger: GitHub didn't allow me to request PR reviews from the following users: for, dev. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Ugh. Every time.
/cc @eduartua
/approve

Holding for any other comments. Feel free to remove the hold when satisfied.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bsalamat, cblecker

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
> as the initialization phase of the benchmark. `minPods` specifies how many pods
> must be created and scheduled as the actual part of benchmarking.
> You may add other items to the above array and run the benchmarks again. For
> example, if you want to measure performance in a 2000 node cluster when scheduling …
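For illustration, a new entry in such a table-driven benchmark might look like the sketch below. This is an assumption-laden sketch, not the actual code: the struct shape and field names (`nodes`, `existingPods`, `minPods`) are inferred from the excerpt above, and the measurement helper is elided; see `test/integration/scheduler_perf/scheduler_bench_test.go` for the real definitions.

```go
package example

import (
	"fmt"
	"testing"
)

// Hypothetical sketch of the table-driven benchmark described above; the
// real code lives in scheduler_bench_test.go.
func BenchmarkScheduling(b *testing.B) {
	tests := []struct{ nodes, existingPods, minPods int }{
		{nodes: 100, existingPods: 0, minPods: 100},
		{nodes: 100, existingPods: 1000, minPods: 100},
		// A larger scenario such as the 2000-node example mentioned above:
		{nodes: 2000, existingPods: 5000, minPods: 1000},
	}
	for _, test := range tests {
		name := fmt.Sprintf("%vNodes/%vPods", test.nodes, test.existingPods)
		b.Run(name, func(b *testing.B) {
			// Elided: create test.nodes nodes and test.existingPods pods,
			// then schedule test.minPods pods as the measured phase.
		})
	}
}
```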
I don't feel that changing the test cases should be part of this documentation. Or at the very least, it should not be under the heading "running integration benchmarks".
Perhaps what we need is to have a section in this doc for running benchmarks and another section for writing new ones or modifying the test cases. Any literal references to test names, files, or code may get out of sync with this document, so let's think about whether that can be avoided.
I am not sure I agree. I myself usually run benchmarks with a much larger number of pods in order to minimize other overheads. At the same time, we haven't been able to add all those configurations to the code because they time out in our CI/CD.

I agree that literal references to test names and files require additional effort to keep in sync with documents, but at this point we don't have a much better way of doing this without losing the usefulness of the doc.
@misterikkit I see your point. From a usefulness standpoint, having concrete examples is really helpful, so I'd be in favor of leaving this, perhaps with a disclaimer and a link to the current test files?
Based on how I see this getting used, it means that people who want to test extra-large scenarios have to carry their own patches around, and there is no guarantee that those patches match what other contributors are using for testing. Maybe these scenarios are unique enough that it's not important.

I am in favor of having documentation saying, "here is how to add a test case with lots of nodes/pods".
> directory.
>
> ```sh
> make test-integration WHAT=./test/integration/scheduler_perf KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=xxx -bench=BenchmarkScheduling"
> ```
How about `-bench=.`?
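As a concrete sketch of that suggestion, this would be the same `make` invocation from the excerpt above with only the `-bench` pattern changed, so every benchmark in the package runs:

```sh
make test-integration WHAT=./test/integration/scheduler_perf KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-run=xxx -bench=."
```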
Added it and more information about Go benchmarks.
Did you mean to remove this now that you added the other example?
No, not really. I actually wanted to keep this so that people find an example of how particular benchmarks can be specified on the CLI.
> These benchmarks are located in `./test/integration/scheduler_perf/scheduler_bench_test.go`.
> The function names start with `BenchmarkScheduling`. At the beginning of each …
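As a sketch of what that naming scheme means for the Go benchmark runner: `-bench` takes a regular expression, so `-bench=BenchmarkScheduling` matches any benchmark function whose name contains `BenchmarkScheduling`. The function name below is hypothetical:

```go
package example

import "testing"

// Hypothetical example: "-bench=BenchmarkScheduling" matches this function
// name, so the Go benchmark runner would execute it.
func BenchmarkScheduling1000Nodes1000Pods(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// Elided: schedule pods against a simulated 1000-node cluster.
	}
}
```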
I will probably add benchmarks that don't follow this naming scheme (when I get around to it).
If you plan to add scheduling benchmarks, I think it would make sense if they start with `BenchmarkScheduling`. If they are non-scheduling benchmarks, they should probably be added to other files.
If they are non-scheduling benchmarks, then they should be in a different package. We can discuss if I ever get a PR mailed out. 😛
That's a fair point! 😄
There is also https://github.com/kubernetes/kubernetes/blob/master/test/integration/scheduler_perf/README.md, btw. For someone who does not regularly run scheduler benchmarks, could either this document or the referenced README be updated to reflect which one is preferable?
@gnufied Thanks for pointing it out. The instructions in the README file are not detailed enough. Those who contribute code to the scheduler normally want to run benchmarks with more detail and more configurations. This document can help in those situations.
Force-pushed from 531f8d2 to a2f102d.
Force-pushed from a2f102d to 101f7e2.
/lgtm
/hold cancel
/sig scheduling