
StatefulSet never successfully rolled out #68573

Closed
@tonglil

Description

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

The StatefulSet never seems to finish rolling out when checked with kubectl rollout status sts/x:

$ kubectl rollout status sts/prometheus
Waiting for partitioned roll out to finish: 0 out of 2 new pods have been updated...

What you expected to happen:

The command should eventually report "successfully rolled out", as it does for Deployments:

$ kubectl rollout status deploy/test
deployment "test" successfully rolled out

We expect the command to exit, since we depend on that to move on to the next step of the script/job.
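
As a stopgap for the script/job we can poll the StatefulSet status ourselves instead of relying on rollout status. A rough sketch, assuming that status.readyReplicas matching spec.replicas and status.currentRevision matching status.updateRevision is a good enough "done" signal (the name, interval, and retry count are just placeholders):

#!/usr/bin/env bash
# Poll the StatefulSet until every replica is ready and the controller has
# converged the current revision onto the update revision, or give up.
sts=prometheus
for i in $(seq 1 60); do
  ready=$(kubectl get sts/"$sts" -o jsonpath='{.status.readyReplicas}')
  want=$(kubectl get sts/"$sts" -o jsonpath='{.spec.replicas}')
  cur=$(kubectl get sts/"$sts" -o jsonpath='{.status.currentRevision}')
  upd=$(kubectl get sts/"$sts" -o jsonpath='{.status.updateRevision}')
  if [ "$ready" = "$want" ] && [ "$cur" = "$upd" ]; then
    echo "statefulset $sts rolled out"
    exit 0
  fi
  sleep 5
done
echo "timed out waiting for statefulset $sts" >&2
exit 1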

How to reproduce it (as minimally and precisely as possible):
I wish I could share my manifest but it's not cleaned up at this time.

Anything else we need to know?:

sts manifest snippet:

  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
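
A purely speculative experiment (my guess, since the message says "partitioned roll out"): drop the explicit rollingUpdate block and let the defaults apply, to see whether the non-partitioned status path reports completion. With a JSON merge patch, setting the field to null removes it:

$ kubectl patch sts/prometheus --type merge -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":null}}}'
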
$ kubectl get sts/prometheus
NAME         DESIRED   CURRENT   AGE
prometheus   2         2         4d
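
The default get columns don't show the revision or updated-replica fields, so here is how I'd dump what the status check presumably compares (field names taken from the StatefulSet status object):

$ kubectl get sts/prometheus -o jsonpath='replicas={.status.replicas} updated={.status.updatedReplicas} current={.status.currentRevision} update={.status.updateRevision}{"\n"}'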

$ kubectl describe sts/prometheus
Name:               prometheus
CreationTimestamp:  Fri, 07 Sep 2018 14:51:43 -0700
Selector:           app=prometheus
Labels:             app=prometheus
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --record=true --filename=/tmp/distr...
                    kubernetes.io/change-cause=kubectl apply --record=true --filename=/tmp/distro.yml
Replicas:           2 desired | 2 total
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed

Ordered output while I run kubectl edit statefulset prometheus:

$ kubectl rollout status sts/prometheus &
[1] 91055
Waiting for partitioned roll out to finish: 0 out of 2 new pods have been updated...

$ kubectl get pods -w &
[2] 91777
NAME           READY     STATUS    RESTARTS   AGE
prometheus-0   2/2       Running   0          2m
prometheus-1   2/2       Running   0          3m

# kubectl edit

prometheus-1   2/2       Terminating   0         3m
Waiting for partitioned roll out to finish: 0 out of 2 new pods have been updated...
Waiting for partitioned roll out to finish: 0 out of 2 new pods have been updated...
prometheus-1   0/2       Terminating   0         3m
Waiting for 1 pods to be ready...
prometheus-1   0/2       Terminating   0         3m
prometheus-1   0/2       Terminating   0         4m
prometheus-1   0/2       Terminating   0         4m
prometheus-1   0/2       Pending   0         0s
prometheus-1   0/2       Pending   0         0s
prometheus-1   0/2       Init:0/1   0         0s
Waiting for 1 pods to be ready...
prometheus-1   0/2       PodInitializing   0         12s
prometheus-1   1/2       Running   0         14s
prometheus-1   2/2       Running   0         17s
prometheus-0   2/2       Terminating   0         3m
Waiting for partitioned roll out to finish: 1 out of 2 new pods have been updated...
prometheus-0   0/2       Terminating   0         3m
prometheus-0   0/2       Terminating   0         3m
Waiting for 1 pods to be ready...
prometheus-0   0/2       Terminating   0         4m
prometheus-0   0/2       Terminating   0         4m
prometheus-0   0/2       Pending   0         0s
prometheus-0   0/2       Pending   0         0s
prometheus-0   0/2       Init:0/1   0         0s
Waiting for 1 pods to be ready...
prometheus-0   0/2       PodInitializing   0         11s
prometheus-0   1/2       Running   0         13s
prometheus-0   2/2       Running   0         15s
Waiting for partitioned roll out to finish: 0 out of 2 new pods have been updated...
Waiting for partitioned roll out to finish: 0 out of 2 new pods have been updated...

$ kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
prometheus-0   2/2       Running   0          33s
prometheus-1   2/2       Running   0          1m

As you can see, it just keeps printing "Waiting for partitioned roll out to finish" even after all changes have been applied.
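
Even though rollout status never exits, the pods do appear to end up on the new revision. One way to double-check (assuming the app=prometheus selector from the describe output above) is to compare each pod's controller-revision-hash label with the StatefulSet's updateRevision:

$ kubectl get pods -l app=prometheus -L controller-revision-hash
$ kubectl get sts/prometheus -o jsonpath='{.status.updateRevision}{"\n"}'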

Environment:

  • Kubernetes version (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.6-gke.2", GitCommit:"384b4eaa132ca9a295fcb3e5dfc74062b257e7df", GitTreeState:"clean", BuildDate:"2018-08-15T00:10:14Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: GKE

Labels

area/stateful-apps, kind/bug, sig/apps, sig/cli
