MachinePool observedGeneration is updated without changing conditions on upgrades #10059

Open
@dkoshkin

Description

What steps did you take and what happened?

Create an AKS cluster with MachinePools using CAPZ, then upgrade spec.template.spec.version in the MachinePool.

What did you expect to happen?

We rely on status.observedGeneration together with various status fields and conditions to determine whether the MachinePool is still being upgraded. This works well for MachineDeployments across many different infra providers, but for MachinePools there is no clear signal when the upgrade has started.
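For illustration, here is a rough sketch of the kind of gate we rely on (hypothetical helper name, assuming the Cluster API Go types and the conditions util; not our exact code):

package waitutil

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// machinePoolUpToDate is a hypothetical helper approximating the check described above:
// the MachinePool is considered done once the controller has observed the latest spec
// and the Ready condition is True.
func machinePoolUpToDate(mp *expv1.MachinePool) bool {
	if mp.Status.ObservedGeneration != mp.Generation {
		// The controller has not reconciled the latest spec yet.
		return false
	}
	return conditions.IsTrue(mp, clusterv1.ReadyCondition)
}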

I've captured the MachinePool objects during an upgrade:

  1. spec.template.spec.version was upgraded to v1.28.3 but observedGeneration is still 2; this is expected, since the controllers haven't acted yet.
[2024-01-16 15:54:09] ---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  creationTimestamp: "2024-01-16T23:45:43Z"
  finalizers:
  - machinepool.cluster.x-k8s.io
  generation: 3
  name: dkoshkin-az-upgrade-8
  namespace: default
  resourceVersion: "4509"
  uid: 612e7d87-b0d2-4da9-8174-9ca890642246
spec:
  template:
    spec:
      version: v1.28.3
status:
  availableReplicas: 3
  bootstrapReady: true
  conditions:
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: BootstrapReady
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: InfrastructureReady
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: ReplicasReady
  infrastructureReady: true
  observedGeneration: 2
  phase: Running
  readyReplicas: 3
  replicas: 3

  2. The controller picks up the spec change and reconciles it, updating observedGeneration to 3, which matches generation.
  This is where I would expect to see some status change indicating that the spec is outdated and will be upgraded.

[2024-01-16 15:54:09] ---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  creationTimestamp: "2024-01-16T23:45:43Z"
  finalizers:
  - machinepool.cluster.x-k8s.io
  generation: 3
  name: dkoshkin-az-upgrade-8
  namespace: default
  resourceVersion: "4510"
  uid: 612e7d87-b0d2-4da9-8174-9ca890642246
spec:
  template:
    spec:
      version: v1.28.3
status:
  availableReplicas: 3
  bootstrapReady: true
  conditions:
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: BootstrapReady
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: InfrastructureReady
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: ReplicasReady
  infrastructureReady: true
  observedGeneration: 3
  phase: Running
  readyReplicas: 3
  replicas: 3

  3. Then, about 10 seconds later, we get a status change with the Ready and InfrastructureReady conditions flipping to False. By this point our wait code has already exited, since it only checks for observedGeneration == generation and the Ready condition.

[2024-01-16 15:54:21]  ---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  creationTimestamp: "2024-01-16T23:45:43Z"
  finalizers:
  - machinepool.cluster.x-k8s.io
  generation: 3
  name: dkoshkin-az-upgrade-8
  namespace: default
  resourceVersion: "4574"
  uid: 612e7d87-b0d2-4da9-8174-9ca890642246
spec:
  template:
    spec:
      version: v1.28.3
status:
  availableReplicas: 3
  bootstrapReady: true
  conditions:
  - lastTransitionTime: "2024-01-16T23:54:21Z"
    message: agentpools creating or updating
    reason: Creating
    severity: Info
    status: "False"
    type: Ready
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: BootstrapReady
  - lastTransitionTime: "2024-01-16T23:54:21Z"
    message: agentpools creating or updating
    reason: Creating
    severity: Info
    status: "False"
    type: InfrastructureReady
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: ReplicasReady
  infrastructureReady: true
  observedGeneration: 3
  phase: Running
  readyReplicas: 3
  replicas: 3
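
To make the timing concrete, here is a simplified, hypothetical sketch of the shape of our wait loop (assuming a controller-runtime client; names and intervals are made up). Between the second and third snapshot above, both checks pass, so a loop like this returns before CAPZ flips Ready/InfrastructureReady to False:

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachinePoolUpgrade polls until observedGeneration catches up and Ready is True.
// Hypothetical sketch: this is exactly the check that exits prematurely in the window
// between the second and third snapshot.
func waitForMachinePoolUpgrade(ctx context.Context, c client.Client, key client.ObjectKey) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 30*time.Minute, true, func(ctx context.Context) (bool, error) {
		mp := &expv1.MachinePool{}
		if err := c.Get(ctx, key, mp); err != nil {
			return false, err
		}
		return mp.Status.ObservedGeneration == mp.Generation &&
			conditions.IsTrue(mp, clusterv1.ReadyCondition), nil
	})
}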

Cluster API version

v1.5.3

Kubernetes version

No response

Anything else you would like to add?

We would like to avoid solving this with "sleeps" that wait for changes to happen (or not happen), and instead rely on the status.

I'm looking for some guidance on:

  1. Confirming that this is a bug.
  2. Any pointers on whether this is already handled in the topology reconcilers, as I believe they would have a similar need to know when a MachinePool is outdated and about to be upgraded.

Label(s) to be applied

/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

Labels

- area/machinepool: Issues or PRs related to machinepools.
- kind/bug: Categorizes issue or PR as related to a bug.
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
- needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
- priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
