Migrate InfoOptions.podSpecReplias and info.Scheduler.TotalRequests to info.TemplateSpec.PodSet #2524
Conversation
},
constants.JobTrainerNode: {
	Replicas: 100, // Replicas is taken from TrainJob NumNodes.
	PodRequests: resRequests, // TODO (andreyvelich): Add support for TrainJob ResourcesPerNode in TotalRequests.
Opened: #2525
Force-pushed from c45ef80 to e627f78.
/assign @kubeflow/wg-training-leads @astefanutti
Thank you for updating this @tenzen-y!
Mostly lgtm
PodSets []PodSet
}
type PodSet struct {
	// PodSet name is the name to identify PodSpec.
	// This typically has the name stored in each PodSpec.
	Name string
	// If Name is trainer-node, CountForNonTrainer is null.
	// For Trainer, PodSet Count should be stored in Info.RuntimePolicy.MLPolicy.NumNodes.
	CountForNonTrainer *int32
Since we already removed the Trainer struct, do you want to refactor this count parameter in the next PR?
Instead of updating info.RuntimePolicy.MLPolicy.NumNodes:

	info.RuntimePolicy.MLPolicy.NumNodes = numNodes

we could directly update the count value in the PodSet with the appropriate name (e.g. Launcher, Node). That means PodSet.Count would always dictate the values for Parallelism and Completions.
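The proposed shape could look roughly like this (a minimal sketch with simplified types, not the final API; the actual field rename lands in the follow-up PR, and jobSpecFrom is a hypothetical helper for illustration):

```go
package main

import "fmt"

// PodSet is a simplified sketch of the proposed structure: a single
// Count field replaces CountForNonTrainer plus MLPolicy.NumNodes.
type PodSet struct {
	Name  string
	Count int32 // dictates Parallelism and Completions for every PodSet
}

// jobSpecFrom derives the Job parallelism/completions directly from
// PodSet.Count, with no special case for the trainer node.
func jobSpecFrom(ps PodSet) (parallelism, completions int32) {
	return ps.Count, ps.Count
}

func main() {
	trainer := PodSet{Name: "trainer-node", Count: 4}
	p, c := jobSpecFrom(trainer)
	fmt.Println(p, c) // 4 4
}
```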
I feel like it would simplify a lot of the logic we have.
For example:
trainer/pkg/runtime/framework/plugins/coscheduling/coscheduling.go
Lines 128 to 131 in e627f78
case constants.JobTrainerNode:
	count = *info.RuntimePolicy.MLPolicy.NumNodes
default:
	count = *ps.CountForNonTrainer
Yes, that's right. I will change the MLPolicy data structure only for the internal representation in the next PR.
I decoupled that change from this PR since the internal MLPolicy data structure change has a wide impact.
Yes, that's right. I will change the MLPolicy data structure only for the internal representation in the next PR.

Do you mean after we refactor MLPolicy for the Info object, we can change the Count value in PodSet?
I will drop MLPolicy.NumNodes only from the internal data structure, and rename PodSet.CountForNonTrainer to PodSet.Count.
Yeah, that makes sense!
Name string
// If Name is trainer-node, CountForNonTrainer is null.
// For Trainer, PodSet Count should be stored in Info.RuntimePolicy.MLPolicy.NumNodes.
CountForNonTrainer *int32
InitContainers []Container
Should we add InitContainers once we have a use case where an InitContainer can be configured via TrainJob?
We want to keep InitContainers here since this is the abstraction for the Runtime.
That way, each plugin can easily read and modify initContainers throughout the cycle.
This is mostly preparation for changing the overriding order (Plugin Change -> TrainJob -> Runtime).
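As I understand the intent, a plugin could mutate the PodSet's init containers in place. A minimal sketch with simplified types (Container here is a stand-in for the runtime's apply-configuration type, and addModelInitializer is a hypothetical plugin hook, not an existing one):

```go
package main

import "fmt"

// Container is a simplified stand-in for the runtime's container type.
type Container struct {
	Name  string
	Image string
}

// PodSet exposes InitContainers so any plugin can read and modify them
// during the reconciliation cycle, before TrainJob/Runtime overrides apply.
type PodSet struct {
	Name           string
	InitContainers []Container
}

// addModelInitializer is a hypothetical plugin hook that prepends an
// init container to the PodSet.
func addModelInitializer(ps *PodSet) {
	ps.InitContainers = append(
		[]Container{{Name: "model-init", Image: "busybox"}},
		ps.InitContainers...,
	)
}

func main() {
	ps := PodSet{Name: "trainer-node"}
	addModelInitializer(&ps)
	fmt.Println(ps.InitContainers[0].Name) // model-init
}
```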
Sounds good.
Containers []Container
Volumes []corev1ac.VolumeApplyConfiguration
Endpoints iter.Seq[string]
// The total PodSet requests can be calculated with
// SinglePodRequests x [CountForNonTrainer|RuntimePolicy.MLPolicy.NumNodes].
SinglePodRequests corev1.ResourceList
Is there any reason to store SinglePodRequests rather than the entire PodRequests?
The only place we use it is in the co-scheduling plugin, where we multiply it by the count:

	quantity.Mul(int64(count))
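The arithmetic in question, sketched with plain int64 millicore/byte values standing in for resource.Quantity (so this is only an illustration of the scaling, not the plugin's actual code):

```go
package main

import "fmt"

// ResourceList is a simplified stand-in for corev1.ResourceList,
// mapping resource names to millicore/byte values.
type ResourceList map[string]int64

// totalRequests scales the per-Pod requests by the PodSet count,
// mirroring what the co-scheduling plugin does with quantity.Mul.
func totalRequests(singlePod ResourceList, count int64) ResourceList {
	total := make(ResourceList, len(singlePod))
	for name, v := range singlePod {
		total[name] = v * count
	}
	return total
}

func main() {
	single := ResourceList{"cpu": 500, "memory": 1 << 30} // 500m CPU, 1Gi
	fmt.Println(totalRequests(single, 3)["cpu"]) // 1500
}
```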
Because we want to avoid re-calculating it every time we change numNodes.
With SinglePodRequests, each plugin can change numNodes without recalculating the per-Pod requests.
Sounds good, so you imagine scenarios where Count can be modified by multiple plugins?
Yes after I drop MLPolicy.NumNodes.
…o info.TemplateSpec.PodSet

Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>

# Conflicts:
#	pkg/runtime/core/trainingruntime.go
#	pkg/runtime/runtime.go
Force-pushed from e627f78 to d0d0c08.
I resolved the conflicts.
Thank you @tenzen-y!
/lgtm
/approve
feel free to unhold
/hold
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: andreyvelich. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Thank you
What this PR does / why we need it:
I removed the duplicated internal data structures InfoOptions.podSpecReplicas and Info.Scheduler.TotalRequests, and migrated their usage to Info.TemplateSpec.PodSet.

Which issue(s) this PR fixes (optional, in Fixes #<issue number>, #<issue number>, ... format, will close the issue(s) when PR gets merged):
Part-of #2495

Checklist: