update pg status with applyStatus #3865
base: master
Conversation
Welcome @jxs1211!
/assign @hwdef
pkg/scheduler/cache/cache.go (outdated)

@@ -300,7 +301,22 @@ func (su *defaultStatusUpdater) UpdatePodGroup(pg *schedulingapi.PodGroup) (*sch
		return nil, err
	}

	updated, err := su.vcclient.SchedulingV1beta1().PodGroups(podgroup.Namespace).Update(context.TODO(), podgroup, metav1.UpdateOptions{})
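For context, a minimal sketch of the status-only, server-side-apply style update that "applyStatus" refers to might look like the following. This is not the PR's actual code: the helper name, the field manager string, and the assumption that the PodGroup CRD's status subresource is enabled are all illustrative.

```go
package cache

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"

	schedulingv1beta1 "volcano.sh/apis/pkg/apis/scheduling/v1beta1"
	vcclientset "volcano.sh/apis/pkg/client/clientset/versioned"
)

// applyPodGroupStatus (hypothetical helper) sends a server-side apply patch
// against the status subresource only, so spec, labels and annotations owned
// by other controllers are left untouched.
func applyPodGroupStatus(ctx context.Context, vcclient vcclientset.Interface, pg *schedulingv1beta1.PodGroup) (*schedulingv1beta1.PodGroup, error) {
	patch := map[string]interface{}{
		"apiVersion": "scheduling.volcano.sh/v1beta1",
		"kind":       "PodGroup",
		"metadata": map[string]interface{}{
			"name":      pg.Name,
			"namespace": pg.Namespace,
		},
		"status": pg.Status,
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return nil, err
	}

	force := true
	// Patch only the "status" subresource; FieldManager identifies the owner
	// of the applied fields for server-side apply.
	return vcclient.SchedulingV1beta1().PodGroups(pg.Namespace).Patch(
		ctx, pg.Name, types.ApplyPatchType, data,
		metav1.PatchOptions{FieldManager: "volcano-scheduler", Force: &force},
		"status",
	)
}
```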
I'm not sure this modification is correct. As I remember, this function also updates the annotations and labels, so it is wrong to only update the status.
We may want to rewrite the logic so that these count fields are not persisted in podgroups at all, like this PR: #3751
I think the proposal is to stop updating the status fields in podgroups; do you mean the annotations and labels still need to be updated?
Yes, there may be other fields in spec or status that need to be updated; we can't just update the condition. What I want to do is stop refreshing the running/failed/succeeded fields in every session, because if the session interval is short, there can be a lot of podgroup update requests, which may burden the kube-apiserver.
Could you help research whether there are tools that would let kubectl calculate the number of running/failed/succeeded pods in a podgroup and display these columns, instead of reading these fields directly from the status? If this cannot be achieved, we can use vcctl to display them, but I think more users would prefer kubectl.
Yes. Do you have any suggestion on a specific tool or method to achieve that? I don't have any ideas at the moment.
You may refer to:
Using a kubectl plugin may let us keep showing the running/failed/succeeded columns without persisting these fields in vc-controller; the plugin would calculate them dynamically by counting the running/failed/succeeded pods, as sketched below.
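As a rough illustration of that idea (not an existing tool), a kubectl plugin or a vcctl subcommand could count a podgroup's pods on the fly. The sketch below assumes pods are linked to their podgroup through the scheduling.k8s.io/group-name annotation; the function name is made up.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countPodGroupPods derives the Running/Failed/Succeeded columns by listing
// pods in the namespace and counting the phases of those that belong to the
// given podgroup, instead of reading persisted PodGroup status fields.
func countPodGroupPods(ctx context.Context, client kubernetes.Interface, namespace, pgName string) (running, failed, succeeded int32, err error) {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, 0, 0, err
	}
	for _, pod := range pods.Items {
		// Assumption: podgroup membership is recorded in this annotation.
		if pod.Annotations["scheduling.k8s.io/group-name"] != pgName {
			continue
		}
		switch pod.Status.Phase {
		case corev1.PodRunning:
			running++
		case corev1.PodFailed:
			failed++
		case corev1.PodSucceeded:
			succeeded++
		}
	}
	return running, failed, succeeded, nil
}
```

The same counting could back either a kubectl plugin's printed columns or a vcctl display, so the choice of tool would not change the approach.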
I think we can implement it in vcctl first; later we can decide whether we should implement a kubectl plugin.
I found the related doc for this.
Force-pushed from ba44179 to 24f673d
Force-pushed from 24f673d to 72dce44
Force-pushed from 72dce44 to f26b621
No, the running/pending/unknown... fields in the queue status are not persisted now, but there are still Running/Failed/Succeeded fields in podgroups that are refreshed in every session; I think they don't need to be persisted either. You can check here: volcano/pkg/scheduler/framework/session.go, lines 353 to 355 at 0966fd5. A paraphrased sketch of that refresh is shown below.
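For readers without the repo open, the per-session refresh being referred to is roughly of the following shape. This is a paraphrase rather than the exact lines from session.go, and the helper name is made up.

```go
package framework

import (
	"volcano.sh/apis/pkg/apis/scheduling"
	"volcano.sh/volcano/pkg/scheduler/api"
)

// refreshCounts copies the per-session task counts from the scheduler cache
// into the PodGroup status; persisting these counters after every session is
// the apiserver load being discussed here.
func refreshCounts(jobInfo *api.JobInfo, status *scheduling.PodGroupStatus) {
	status.Running = int32(len(jobInfo.TaskStatusIndex[api.Running]))
	status.Failed = int32(len(jobInfo.TaskStatusIndex[api.Failed]))
	status.Succeeded = int32(len(jobInfo.TaskStatusIndex[api.Succeeded]))
}
```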
Signed-off-by: Jay Shane <327411586@qq.com>
Force-pushed from f26b621 to 71e05ad
I removed it.
That's not enough. Some users may need vcctl to check these fields, so we need to query the pods under the podgroups, similar to what PR #3751 did, and then calculate the number of pods for display. We also need to check carefully whether these fields are depended on by any plugins or actions.
Do you mean we need to do the same thing for the removed code as PR #3751 did? Aren't we trying to reduce the apiserver's pressure in each session, or do I misunderstand? Another thing I want to confirm: are we discussing removing the fields (Running/Failed/Succeeded) from PodGroupStatus, or just removing any references to them (like the code that was removed)?
/hold
Yes, we need to do the same thing as #3751: remove the three fields from PodGroupStatus. If other code depends on these three fields, we need to transform the dependent code to use other methods (for example, get the Running/Succeeded/Failed pods under the podgroup).
So the right things to do are: remove the Running/Failed/Succeeded fields from PodGroupStatus as in #3751, and transform any dependent code (plugins, actions, vcctl display) to count the pods under the podgroup instead.
/area performance
Improve the performance of updating the podgroup status with the applyStatus API.
#3852