I found that after a scale down, a pod may be created again because it is still terminating.
Here is the code:

```go
func (s *shardManager) Shards() ([]*shard.Shard, error) {
	pods, err := s.getPods(s.sts.Spec.Selector.MatchLabels) // **this also returns pods that are terminating**
	if err != nil {
		return nil, errors.Wrap(err, "list pod")
	}
	...
	return ret, nil
}
```
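One way to avoid counting those pods would be to skip any pod that is already being deleted. This is only a minimal sketch, assuming the pods come back as client-go `corev1.Pod` objects; `filterTerminating` is a hypothetical helper, not part of the project:

```go
package shard

import (
	corev1 "k8s.io/api/core/v1"
)

// filterTerminating is a hypothetical helper (not the project's API) that
// drops pods which are already being deleted: a terminating pod has a
// non-nil DeletionTimestamp set by the API server.
func filterTerminating(pods []corev1.Pod) []corev1.Pod {
	ret := make([]corev1.Pod, 0, len(pods))
	for _, p := range pods {
		if p.DeletionTimestamp != nil {
			continue // still terminating after the scale down; skip it
		}
		ret = append(ret, p)
	}
	return ret
}
```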
This is not a problem, since NotReady pods never participate in target assignment.
Yes, a NotReady pod will not participate in target assignment. However, if the pod takes longer to terminate than the coordinator loop interval, it will be created again.
Let's look at the logic that calculates the scale in coordinator.go:
```go
shards := getPods(owned by the prometheus statefulset) // includes pods that are still terminating
scale := len(shards)                                    // because shards contains the terminating pod, scale >= sts.replicas
```
Here we only need to look at tryScaleUp(shardInfo): because the 'changeAble' flag of a pod in terminating state is false, the loop never reaches 'scale--' for that pod.
Finally, ChangeScale will recreate the pods that are still terminating.
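To make the effect concrete, here is a runnable toy sketch of that calculation. The names (shardInfo, changeAble, desiredScale) are hypothetical stand-ins for the real coordinator types; the point is only to show why a terminating pod keeps the computed scale at or above sts.Spec.Replicas, so ChangeScale scales the StatefulSet back up:

```go
package main

import "fmt"

// shardInfo is a hypothetical stand-in for the coordinator's per-shard state;
// changeAble is false for NotReady / terminating pods.
type shardInfo struct {
	name       string
	changeAble bool
	idle       bool
}

// desiredScale mimics the calculation described above: it starts from the
// number of pods returned by Shards() (which still contains terminating pods)
// and only decrements for shards that are both idle and changeable.
func desiredScale(shards []shardInfo) int {
	scale := len(shards)
	for _, s := range shards {
		if s.changeAble && s.idle {
			scale--
		}
	}
	return scale
}

func main() {
	// After scaling down from 3 to 2, prometheus-2 is still terminating, so
	// Shards() still returns 3 entries; the terminating pod is not changeable,
	// so scale stays at 3 and ChangeScale would recreate it.
	shards := []shardInfo{
		{name: "prometheus-0", changeAble: true, idle: false},
		{name: "prometheus-1", changeAble: true, idle: false},
		{name: "prometheus-2", changeAble: false, idle: true}, // terminating
	}
	fmt.Println("desired scale:", desiredScale(shards), "but sts.replicas is 2")
}
```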