
Add tests for volume mode conversion feature #867

Merged: 1 commit, merged Feb 10, 2023

Conversation

RaunakShah
Contributor

What type of PR is this?
/kind test

What this PR does / why we need it:

Follow up to #832
This adds further e2e tests described in the feature KEP - https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3141-prevent-volume-mode-conversion#e2e-tests
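The check these tests exercise (KEP-3141, enabled via the `--prevent-volume-mode-conversion` flags visible in the snapshot-controller and csi-provisioner args in the logs below) can be sketched roughly as follows. This is an illustrative Python sketch, not the actual Go code in external-provisioner; the annotation name `snapshot.storage.kubernetes.io/allow-volume-mode-change` comes from the KEP, while the function and parameter names are made up for illustration.

```python
# Illustrative sketch (NOT the real external-provisioner code) of the
# volume-mode-conversion gate described in KEP-3141.
ALLOW_CONVERSION_ANNOTATION = "snapshot.storage.kubernetes.io/allow-volume-mode-change"

def conversion_allowed(source_volume_mode, requested_volume_mode, content_annotations):
    """Return True if a PVC restored from a snapshot may use requested_volume_mode."""
    # Pre-existing VolumeSnapshotContents may have no recorded source volume
    # mode (the "source volume mode is nil" test case); nothing to compare.
    if source_volume_mode is None:
        return True
    # Restoring with the same volume mode is always fine.
    if source_volume_mode == requested_volume_mode:
        return True
    # Otherwise conversion is only allowed when an authorized user has
    # annotated the VolumeSnapshotContent to permit it (the "altered with
    # permissions" test case).
    return content_annotations.get(ALLOW_CONVERSION_ANNOTATION) == "true"
```

The two test runs below correspond to the annotation-present branch and the nil-source-mode branch of this check.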

Test output:

provision volumes with different volume modes from volume snapshot dataSource when the source volume mode is altered with permissions
/Users/raunakshah/go/src/github.com/kubernetes-csi/external-provisioner/test/e2e/storage/provision.go:154
  STEP: Creating a kubernetes client @ 02/07/23 14:18:59.983
  Feb  7 14:18:59.983: INFO: >>> kubeConfig: /Users/raunakshah/.kube/config
  STEP: Building a namespace api object, basename pvcs-from-volume-snapshots @ 02/07/23 14:18:59.99
  STEP: Waiting for a default service account to be provisioned in namespace @ 02/07/23 14:19:00.002
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 02/07/23 14:19:00.007
  Feb  7 14:19:00.011: INFO: Running '/opt/homebrew/bin/kubectl --server=https://127.0.0.1:56677 --kubeconfig=/Users/raunakshah/.kube/config --namespace=kube-system describe deployment snapshot-controller'
  Feb  7 14:19:00.113: INFO: stderr: ""
  Feb  7 14:19:00.113: INFO: stdout: "Name:                   snapshot-controller\nNamespace:              kube-system\nCreationTimestamp:      Tue, 07 Feb 2023 14:09:52 +0530\nLabels:                 <none>\nAnnotations:            deployment.kubernetes.io/revision: 1\nSelector:               app=snapshot-controller\nReplicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable\nStrategyType:           RollingUpdate\nMinReadySeconds:        15\nRollingUpdateStrategy:  1 max unavailable, 0 max surge\nPod Template:\n  Labels:           app=snapshot-controller\n  Service Account:  snapshot-controller\n  Containers:\n   snapshot-controller:\n    Image:      registry.k8s.io/sig-storage/snapshot-controller:v6.2.1\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --leader-election=true\n      --prevent-volume-mode-conversion=true\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nConditions:\n  Type           Status  Reason\n  ----           ------  ------\n  Available      True    MinimumReplicasAvailable\n  Progressing    True    NewReplicaSetAvailable\nOldReplicaSets:  <none>\nNewReplicaSet:   snapshot-controller-7789d87cf4 (2/2 replicas created)\nEvents:\n  Type    Reason             Age   From                   Message\n  ----    ------             ----  ----                   -------\n  Normal  ScalingReplicaSet  9m8s  deployment-controller  Scaled up replica set snapshot-controller-7789d87cf4 to 2\n"
  Feb  7 14:19:00.113: INFO: Running '/opt/homebrew/bin/kubectl --server=https://127.0.0.1:56677 --kubeconfig=/Users/raunakshah/.kube/config --namespace=default describe sts csi-hostpathplugin'
  Feb  7 14:19:00.181: INFO: stderr: ""
  Feb  7 14:19:00.182: INFO: stdout: "Name:               csi-hostpathplugin\nNamespace:          default\nCreationTimestamp:  Tue, 07 Feb 2023 14:10:35 +0530\nSelector:           app.kubernetes.io/component=plugin,app.kubernetes.io/instance=hostpath.csi.k8s.io,app.kubernetes.io/name=csi-hostpathplugin,app.kubernetes.io/part-of=csi-driver-host-path\nLabels:             app.kubernetes.io/component=plugin\n                    app.kubernetes.io/instance=hostpath.csi.k8s.io\n                    app.kubernetes.io/name=csi-hostpathplugin\n                    app.kubernetes.io/part-of=csi-driver-host-path\nAnnotations:        <none>\nReplicas:           1 desired | 1 total\nUpdate Strategy:    RollingUpdate\n  Partition:        0\nPods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:           app.kubernetes.io/component=plugin\n                    app.kubernetes.io/instance=hostpath.csi.k8s.io\n                    app.kubernetes.io/name=csi-hostpathplugin\n                    app.kubernetes.io/part-of=csi-driver-host-path\n  Service Account:  csi-hostpathplugin-sa\n  Containers:\n   hostpath:\n    Image:      gcr.io/k8s-staging-sig-storage/hostpathplugin:canary\n    Port:       9898/TCP\n    Host Port:  0/TCP\n    Args:\n      --drivername=hostpath.csi.k8s.io\n      --v=5\n      --endpoint=$(CSI_ENDPOINT)\n      --nodeid=$(KUBE_NODE_NAME)\n    Liveness:  http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5\n    Environment:\n      CSI_ENDPOINT:    unix:///csi/csi.sock\n      KUBE_NODE_NAME:   (v1:spec.nodeName)\n    Mounts:\n      /csi from socket-dir (rw)\n      /csi-data-dir from csi-data-dir (rw)\n      /dev from dev-dir (rw)\n      /var/lib/kubelet/plugins from plugins-dir (rw)\n      /var/lib/kubelet/pods from mountpoint-dir (rw)\n   csi-external-health-monitor-controller:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=$(ADDRESS)\n      --leader-election\n    Environment:\n      ADDRESS:  /csi/csi.sock\n    Mounts:\n      /csi from socket-dir (rw)\n   node-driver-registrar:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-node-driver-registrar:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=/csi/csi.sock\n      --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock\n    Environment:\n      KUBE_NODE_NAME:   (v1:spec.nodeName)\n    Mounts:\n      /csi from socket-dir (rw)\n      /csi-data-dir from csi-data-dir (rw)\n      /registration from registration-dir (rw)\n   liveness-probe:\n    Image:      gcr.io/k8s-staging-sig-storage/livenessprobe:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --csi-address=/csi/csi.sock\n      --health-port=9898\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-attacher:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-attacher:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-provisioner:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-provisioner:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      --csi-address=/csi/csi.sock\n      --feature-gates=Topology=true\n      --prevent-volume-mode-conversion=true\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-resizer:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-resizer:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      -csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-snapshotter:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-snapshotter:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      --csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n  Volumes:\n   socket-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins/csi-hostpath\n    HostPathType:  DirectoryOrCreate\n   mountpoint-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/pods\n    HostPathType:  DirectoryOrCreate\n   registration-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins_registry\n    HostPathType:  Directory\n   plugins-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins\n    HostPathType:  Directory\n   csi-data-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/csi-hostpath-data/\n    HostPathType:  DirectoryOrCreate\n   dev-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /dev\n    HostPathType:  Directory\nVolume Claims:     <none>\nEvents:\n  Type    Reason            Age    From                    Message\n  ----    ------            ----   ----                    -------\n  Normal  SuccessfulCreate  8m25s  statefulset-controller  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\n"
  STEP: Creating CSI Hostpath driver Storage Class @ 02/07/23 14:19:00.182
  STEP: Creating VolumeSnapshotClass @ 02/07/23 14:19:00.187
  Feb  7 14:19:00.200: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-hcdhs] to have phase Bound
  Feb  7 14:19:00.205: INFO: PersistentVolumeClaim pvc-hcdhs found but phase is Pending instead of Bound.
  Feb  7 14:19:02.209: INFO: PersistentVolumeClaim pvc-hcdhs found and phase=Bound (2.008933792s)
  Feb  7 14:19:02.224: INFO: Waiting up to 1m0s for VolumeSnapshot volumesnapshot-xfj4b to become ready
  Feb  7 14:19:02.227: INFO: VolumeSnapshot volumesnapshot-xfj4b found but is not ready.
  Feb  7 14:19:03.232: INFO: VolumeSnapshot volumesnapshot-xfj4b found and is ready
  Feb  7 14:19:03.232: INFO: WaitUntil finished successfully after 1.008647708s
  Feb  7 14:19:03.252: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-jlndp] to have phase Bound
  Feb  7 14:19:03.255: INFO: PersistentVolumeClaim pvc-jlndp found but phase is Pending instead of Bound.
  Feb  7 14:19:05.262: INFO: PersistentVolumeClaim pvc-jlndp found and phase=Bound (2.00946325s)
  STEP: Deleting VolumeSnapshotClass @ 02/07/23 14:19:05.269
  STEP: Deleting CSI Hostpath driver Storage Class @ 02/07/23 14:19:05.274
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-74qz6 @ 02/07/23 14:19:05.278
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-f68mj @ 02/07/23 14:19:05.281
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-hcdhs @ 02/07/23 14:19:05.283
  Feb  7 14:19:05.285: INFO: Deleting PersistentVolumeClaim "pvc-hcdhs"
  Feb  7 14:19:05.292: INFO: Waiting up to 2m0s for PersistentVolume pvc-6d9674c4-151b-48c0-9883-718b95df23ca to get deleted
  Feb  7 14:19:05.297: INFO: PersistentVolume pvc-6d9674c4-151b-48c0-9883-718b95df23ca found and phase=Bound (5.216166ms)
  Feb  7 14:19:07.301: INFO: PersistentVolume pvc-6d9674c4-151b-48c0-9883-718b95df23ca was removed
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-jlndp @ 02/07/23 14:19:07.301
  Feb  7 14:19:07.304: INFO: Deleting PersistentVolumeClaim "pvc-jlndp"
  Feb  7 14:19:07.311: INFO: Waiting up to 2m0s for PersistentVolume pvc-2b3f31ad-6371-4154-a037-3f1fb02ed853 to get deleted
  Feb  7 14:19:07.314: INFO: PersistentVolume pvc-2b3f31ad-6371-4154-a037-3f1fb02ed853 found and phase=Bound (3.14025ms)
  Feb  7 14:19:09.320: INFO: PersistentVolume pvc-2b3f31ad-6371-4154-a037-3f1fb02ed853 was removed
  STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-8081/volumesnapshot-csffb @ 02/07/23 14:19:09.32
  STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-2540/volumesnapshot-xfj4b @ 02/07/23 14:19:09.324
  STEP: deleting the snapshot @ 02/07/23 14:19:09.33
  STEP: checking the Snapshot has been deleted @ 02/07/23 14:19:09.337
  Feb  7 14:19:09.337: INFO: Waiting up to 5m0s for volumesnapshots volumesnapshot-xfj4b to be deleted
  Feb  7 14:19:09.342: INFO: volumesnapshots volumesnapshot-xfj4b has been found in namespace pvcs-from-volume-snapshots-2540 and is not deleted
  Feb  7 14:19:11.348: INFO: volumesnapshots volumesnapshot-xfj4b is not found in namespace pvcs-from-volume-snapshots-2540 and has been deleted
  Feb  7 14:19:11.348: INFO: WaitUntil finished successfully after 2.010785833s
  STEP: Wait for VolumeSnapshotContent snapcontent-ce17c0c1-0bae-4d1e-b9b8-55e32c60cf0f to be deleted @ 02/07/23 14:19:11.348
  Feb  7 14:19:11.348: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-ce17c0c1-0bae-4d1e-b9b8-55e32c60cf0f to be deleted
  Feb  7 14:19:11.351: INFO: volumesnapshotcontents snapcontent-ce17c0c1-0bae-4d1e-b9b8-55e32c60cf0f is not found and has been deleted
  Feb  7 14:19:11.351: INFO: WaitUntil finished successfully after 3.062917ms
  STEP: Destroying namespace "pvcs-from-volume-snapshots-2540" for this suite. @ 02/07/23 14:19:11.352
• [11.375 seconds]
------------------------------
provision volumes with different volume modes from volume snapshot dataSource when the source volume mode is nil
/Users/raunakshah/go/src/github.com/kubernetes-csi/external-provisioner/test/e2e/storage/provision.go:212
  STEP: Creating a kubernetes client @ 02/07/23 14:19:11.359
  Feb  7 14:19:11.359: INFO: >>> kubeConfig: /Users/raunakshah/.kube/config
  STEP: Building a namespace api object, basename pvcs-from-volume-snapshots @ 02/07/23 14:19:11.366
  STEP: Waiting for a default service account to be provisioned in namespace @ 02/07/23 14:19:11.384
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 02/07/23 14:19:11.39
  Feb  7 14:19:11.394: INFO: Running '/opt/homebrew/bin/kubectl --server=https://127.0.0.1:56677 --kubeconfig=/Users/raunakshah/.kube/config --namespace=kube-system describe deployment snapshot-controller'
  Feb  7 14:19:11.503: INFO: stderr: ""
  Feb  7 14:19:11.503: INFO: stdout: "Name:                   snapshot-controller\nNamespace:              kube-system\nCreationTimestamp:      Tue, 07 Feb 2023 14:09:52 +0530\nLabels:                 <none>\nAnnotations:            deployment.kubernetes.io/revision: 1\nSelector:               app=snapshot-controller\nReplicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable\nStrategyType:           RollingUpdate\nMinReadySeconds:        15\nRollingUpdateStrategy:  1 max unavailable, 0 max surge\nPod Template:\n  Labels:           app=snapshot-controller\n  Service Account:  snapshot-controller\n  Containers:\n   snapshot-controller:\n    Image:      registry.k8s.io/sig-storage/snapshot-controller:v6.2.1\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --leader-election=true\n      --prevent-volume-mode-conversion=true\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nConditions:\n  Type           Status  Reason\n  ----           ------  ------\n  Available      True    MinimumReplicasAvailable\n  Progressing    True    NewReplicaSetAvailable\nOldReplicaSets:  <none>\nNewReplicaSet:   snapshot-controller-7789d87cf4 (2/2 replicas created)\nEvents:\n  Type    Reason             Age    From                   Message\n  ----    ------             ----   ----                   -------\n  Normal  ScalingReplicaSet  9m19s  deployment-controller  Scaled up replica set snapshot-controller-7789d87cf4 to 2\n"
  Feb  7 14:19:11.504: INFO: Running '/opt/homebrew/bin/kubectl --server=https://127.0.0.1:56677 --kubeconfig=/Users/raunakshah/.kube/config --namespace=default describe sts csi-hostpathplugin'
  Feb  7 14:19:11.578: INFO: stderr: ""
  Feb  7 14:19:11.578: INFO: stdout: "Name:               csi-hostpathplugin\nNamespace:          default\nCreationTimestamp:  Tue, 07 Feb 2023 14:10:35 +0530\nSelector:           app.kubernetes.io/component=plugin,app.kubernetes.io/instance=hostpath.csi.k8s.io,app.kubernetes.io/name=csi-hostpathplugin,app.kubernetes.io/part-of=csi-driver-host-path\nLabels:             app.kubernetes.io/component=plugin\n                    app.kubernetes.io/instance=hostpath.csi.k8s.io\n                    app.kubernetes.io/name=csi-hostpathplugin\n                    app.kubernetes.io/part-of=csi-driver-host-path\nAnnotations:        <none>\nReplicas:           1 desired | 1 total\nUpdate Strategy:    RollingUpdate\n  Partition:        0\nPods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:           app.kubernetes.io/component=plugin\n                    app.kubernetes.io/instance=hostpath.csi.k8s.io\n                    app.kubernetes.io/name=csi-hostpathplugin\n                    app.kubernetes.io/part-of=csi-driver-host-path\n  Service Account:  csi-hostpathplugin-sa\n  Containers:\n   hostpath:\n    Image:      gcr.io/k8s-staging-sig-storage/hostpathplugin:canary\n    Port:       9898/TCP\n    Host Port:  0/TCP\n    Args:\n      --drivername=hostpath.csi.k8s.io\n      --v=5\n      --endpoint=$(CSI_ENDPOINT)\n      --nodeid=$(KUBE_NODE_NAME)\n    Liveness:  http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5\n    Environment:\n      CSI_ENDPOINT:    unix:///csi/csi.sock\n      KUBE_NODE_NAME:   (v1:spec.nodeName)\n    Mounts:\n      /csi from socket-dir (rw)\n      /csi-data-dir from csi-data-dir (rw)\n      /dev from dev-dir (rw)\n      /var/lib/kubelet/plugins from plugins-dir (rw)\n      /var/lib/kubelet/pods from mountpoint-dir (rw)\n   csi-external-health-monitor-controller:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=$(ADDRESS)\n      --leader-election\n    Environment:\n      ADDRESS:  /csi/csi.sock\n    Mounts:\n      /csi from socket-dir (rw)\n   node-driver-registrar:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-node-driver-registrar:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=/csi/csi.sock\n      --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock\n    Environment:\n      KUBE_NODE_NAME:   (v1:spec.nodeName)\n    Mounts:\n      /csi from socket-dir (rw)\n      /csi-data-dir from csi-data-dir (rw)\n      /registration from registration-dir (rw)\n   liveness-probe:\n    Image:      gcr.io/k8s-staging-sig-storage/livenessprobe:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --csi-address=/csi/csi.sock\n      --health-port=9898\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-attacher:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-attacher:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-provisioner:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-provisioner:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      --csi-address=/csi/csi.sock\n      --feature-gates=Topology=true\n      --prevent-volume-mode-conversion=true\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-resizer:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-resizer:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      -csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-snapshotter:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-snapshotter:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      --csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n  Volumes:\n   socket-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins/csi-hostpath\n    HostPathType:  DirectoryOrCreate\n   mountpoint-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/pods\n    HostPathType:  DirectoryOrCreate\n   registration-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins_registry\n    HostPathType:  Directory\n   plugins-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins\n    HostPathType:  Directory\n   csi-data-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/csi-hostpath-data/\n    HostPathType:  DirectoryOrCreate\n   dev-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /dev\n    HostPathType:  Directory\nVolume Claims:     <none>\nEvents:\n  Type    Reason            Age    From                    Message\n  ----    ------            ----   ----                    -------\n  Normal  SuccessfulCreate  8m36s  statefulset-controller  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\n"
  STEP: Creating CSI Hostpath driver Storage Class @ 02/07/23 14:19:11.579
  STEP: Creating VolumeSnapshotClass @ 02/07/23 14:19:11.588
  Feb  7 14:19:11.601: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-8klml] to have phase Bound
  Feb  7 14:19:11.607: INFO: PersistentVolumeClaim pvc-8klml found but phase is Pending instead of Bound.
  Feb  7 14:19:13.613: INFO: PersistentVolumeClaim pvc-8klml found and phase=Bound (2.011433416s)
  Feb  7 14:19:13.629: INFO: Waiting up to 1m0s for VolumeSnapshot volumesnapshot-hxn4n to become ready
  Feb  7 14:19:13.632: INFO: VolumeSnapshot volumesnapshot-hxn4n found but is not ready.
  Feb  7 14:19:14.638: INFO: VolumeSnapshot volumesnapshot-hxn4n found and is ready
  Feb  7 14:19:14.638: INFO: WaitUntil finished successfully after 1.009092084s
  Feb  7 14:19:14.657: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-d6n7g] to have phase Bound
  Feb  7 14:19:14.661: INFO: PersistentVolumeClaim pvc-d6n7g found but phase is Pending instead of Bound.
  Feb  7 14:19:16.666: INFO: PersistentVolumeClaim pvc-d6n7g found and phase=Bound (2.008371708s)
  STEP: Deleting VolumeSnapshotClass @ 02/07/23 14:19:16.67
  STEP: Deleting CSI Hostpath driver Storage Class @ 02/07/23 14:19:16.674
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-74qz6 @ 02/07/23 14:19:16.678
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-f68mj @ 02/07/23 14:19:16.68
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-hcdhs @ 02/07/23 14:19:16.683
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-jlndp @ 02/07/23 14:19:16.685
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-4728/pvc-8klml @ 02/07/23 14:19:16.687
  Feb  7 14:19:16.689: INFO: Deleting PersistentVolumeClaim "pvc-8klml"
  Feb  7 14:19:16.695: INFO: Waiting up to 2m0s for PersistentVolume pvc-87c1af56-d979-4c10-91b5-3f93739e1fce to get deleted
  Feb  7 14:19:16.698: INFO: PersistentVolume pvc-87c1af56-d979-4c10-91b5-3f93739e1fce found and phase=Bound (2.570417ms)
  Feb  7 14:19:18.702: INFO: PersistentVolume pvc-87c1af56-d979-4c10-91b5-3f93739e1fce was removed
  STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-4728/pvc-d6n7g @ 02/07/23 14:19:18.702
  Feb  7 14:19:18.707: INFO: Deleting PersistentVolumeClaim "pvc-d6n7g"
  Feb  7 14:19:18.711: INFO: Waiting up to 2m0s for PersistentVolume pvc-77e785ea-1e92-403b-a23b-281797941ffa to get deleted
  Feb  7 14:19:18.714: INFO: PersistentVolume pvc-77e785ea-1e92-403b-a23b-281797941ffa found and phase=Bound (3.048041ms)
  Feb  7 14:19:20.718: INFO: PersistentVolume pvc-77e785ea-1e92-403b-a23b-281797941ffa was removed
  STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-8081/volumesnapshot-csffb @ 02/07/23 14:19:20.718
  STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-2540/volumesnapshot-xfj4b @ 02/07/23 14:19:20.721
  STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-4728/volumesnapshot-hxn4n @ 02/07/23 14:19:20.723
  STEP: deleting the snapshot @ 02/07/23 14:19:20.725
  STEP: checking the Snapshot has been deleted @ 02/07/23 14:19:20.731
  Feb  7 14:19:20.731: INFO: Waiting up to 5m0s for volumesnapshots volumesnapshot-hxn4n to be deleted
  Feb  7 14:19:20.734: INFO: volumesnapshots volumesnapshot-hxn4n has been found in namespace pvcs-from-volume-snapshots-4728 and is not deleted
  Feb  7 14:19:22.739: INFO: volumesnapshots volumesnapshot-hxn4n is not found in namespace pvcs-from-volume-snapshots-4728 and has been deleted
  Feb  7 14:19:22.739: INFO: WaitUntil finished successfully after 2.007310375s
  STEP: Wait for VolumeSnapshotContent snapcontent-e10eae42-de50-45ab-9b9b-f0f076aa4224 to be deleted @ 02/07/23 14:19:22.739
  Feb  7 14:19:22.739: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-e10eae42-de50-45ab-9b9b-f0f076aa4224 to be deleted
  Feb  7 14:19:22.741: INFO: volumesnapshotcontents snapcontent-e10eae42-de50-45ab-9b9b-f0f076aa4224 is not found and has been deleted
  Feb  7 14:19:22.741: INFO: WaitUntil finished successfully after 1.867ms
  STEP: Destroying namespace "pvcs-from-volume-snapshots-4728" for this suite. @ 02/07/23 14:19:22.741
• [11.387 seconds]
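For reference, the restored PVCs these tests create look roughly like the manifest sketched below. This is an illustrative sketch assembled from the log output above, not the exact fixture in provision.go; the claim name, snapshot name, storage class, and size are placeholders.

```python
# Illustrative sketch of a PVC that restores a VolumeSnapshot with a
# (possibly different) volume mode. Field values are examples only.
def restored_pvc_manifest(name, snapshot_name, volume_mode, storage_class):
    """Build an example PVC manifest whose dataSource is a VolumeSnapshot."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "storageClassName": storage_class,
            "accessModes": ["ReadWriteOnce"],
            # e.g. "Block" while the snapshotted source volume was "Filesystem"
            "volumeMode": volume_mode,
            "resources": {"requests": {"storage": "1Gi"}},
            "dataSource": {
                "name": snapshot_name,
                "kind": "VolumeSnapshot",
                "apiGroup": "snapshot.storage.k8s.io",
            },
        },
    }
```

With `--prevent-volume-mode-conversion=true`, provisioning such a claim succeeds only when the snapshot's recorded source volume mode matches, is nil, or the VolumeSnapshotContent is annotated to allow the change.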

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Additional e2e tests for volume mode conversion 

@k8s-ci-robot added the release-note label (denotes a PR that will be considered when it comes time to generate release notes) on Feb 7, 2023
@k8s-ci-robot
Contributor

@RaunakShah: The label(s) kind/test cannot be applied, because the repository doesn't have them.

 Feb  7 14:19:03.232: INFO: VolumeSnapshot volumesnapshot-xfj4b found and is ready
 Feb  7 14:19:03.232: INFO: WaitUntil finished successfully after 1.008647708s
 Feb  7 14:19:03.252: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-jlndp] to have phase Bound
 Feb  7 14:19:03.255: INFO: PersistentVolumeClaim pvc-jlndp found but phase is Pending instead of Bound.
 Feb  7 14:19:05.262: INFO: PersistentVolumeClaim pvc-jlndp found and phase=Bound (2.00946325s)
 STEP: Deleting VolumeSnapshotClass @ 02/07/23 14:19:05.269
 STEP: Deleting CSI Hostpath driver Storage Class @ 02/07/23 14:19:05.274
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-74qz6 @ 02/07/23 14:19:05.278
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-f68mj @ 02/07/23 14:19:05.281
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-hcdhs @ 02/07/23 14:19:05.283
 Feb  7 14:19:05.285: INFO: Deleting PersistentVolumeClaim "pvc-hcdhs"
 Feb  7 14:19:05.292: INFO: Waiting up to 2m0s for PersistentVolume pvc-6d9674c4-151b-48c0-9883-718b95df23ca to get deleted
 Feb  7 14:19:05.297: INFO: PersistentVolume pvc-6d9674c4-151b-48c0-9883-718b95df23ca found and phase=Bound (5.216166ms)
 Feb  7 14:19:07.301: INFO: PersistentVolume pvc-6d9674c4-151b-48c0-9883-718b95df23ca was removed
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-jlndp @ 02/07/23 14:19:07.301
 Feb  7 14:19:07.304: INFO: Deleting PersistentVolumeClaim "pvc-jlndp"
 Feb  7 14:19:07.311: INFO: Waiting up to 2m0s for PersistentVolume pvc-2b3f31ad-6371-4154-a037-3f1fb02ed853 to get deleted
 Feb  7 14:19:07.314: INFO: PersistentVolume pvc-2b3f31ad-6371-4154-a037-3f1fb02ed853 found and phase=Bound (3.14025ms)
 Feb  7 14:19:09.320: INFO: PersistentVolume pvc-2b3f31ad-6371-4154-a037-3f1fb02ed853 was removed
 STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-8081/volumesnapshot-csffb @ 02/07/23 14:19:09.32
 STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-2540/volumesnapshot-xfj4b @ 02/07/23 14:19:09.324
 STEP: deleting the snapshot @ 02/07/23 14:19:09.33
 STEP: checking the Snapshot has been deleted @ 02/07/23 14:19:09.337
 Feb  7 14:19:09.337: INFO: Waiting up to 5m0s for volumesnapshots volumesnapshot-xfj4b to be deleted
 Feb  7 14:19:09.342: INFO: volumesnapshots volumesnapshot-xfj4b has been found in namespace pvcs-from-volume-snapshots-2540 and is not deleted
 Feb  7 14:19:11.348: INFO: volumesnapshots volumesnapshot-xfj4b is not found in namespace pvcs-from-volume-snapshots-2540 and has been deleted
 Feb  7 14:19:11.348: INFO: WaitUntil finished successfully after 2.010785833s
 STEP: Wait for VolumeSnapshotContent snapcontent-ce17c0c1-0bae-4d1e-b9b8-55e32c60cf0f to be deleted @ 02/07/23 14:19:11.348
 Feb  7 14:19:11.348: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-ce17c0c1-0bae-4d1e-b9b8-55e32c60cf0f to be deleted
 Feb  7 14:19:11.351: INFO: volumesnapshotcontents snapcontent-ce17c0c1-0bae-4d1e-b9b8-55e32c60cf0f is not found and has been deleted
 Feb  7 14:19:11.351: INFO: WaitUntil finished successfully after 3.062917ms
 STEP: Destroying namespace "pvcs-from-volume-snapshots-2540" for this suite. @ 02/07/23 14:19:11.352
• [11.375 seconds]
------------------------------
provision volumes with different volume modes from volume snapshot dataSource when the source volume mode is nil
/Users/raunakshah/go/src/github.com/kubernetes-csi/external-provisioner/test/e2e/storage/provision.go:212
 STEP: Creating a kubernetes client @ 02/07/23 14:19:11.359
 Feb  7 14:19:11.359: INFO: >>> kubeConfig: /Users/raunakshah/.kube/config
 STEP: Building a namespace api object, basename pvcs-from-volume-snapshots @ 02/07/23 14:19:11.366
 STEP: Waiting for a default service account to be provisioned in namespace @ 02/07/23 14:19:11.384
 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 02/07/23 14:19:11.39
 Feb  7 14:19:11.394: INFO: Running '/opt/homebrew/bin/kubectl --server=https://127.0.0.1:56677 --kubeconfig=/Users/raunakshah/.kube/config --namespace=kube-system describe deployment snapshot-controller'
 Feb  7 14:19:11.503: INFO: stderr: ""
 Feb  7 14:19:11.503: INFO: stdout: "Name:                   snapshot-controller\nNamespace:              kube-system\nCreationTimestamp:      Tue, 07 Feb 2023 14:09:52 +0530\nLabels:                 <none>\nAnnotations:            deployment.kubernetes.io/revision: 1\nSelector:               app=snapshot-controller\nReplicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable\nStrategyType:           RollingUpdate\nMinReadySeconds:        15\nRollingUpdateStrategy:  1 max unavailable, 0 max surge\nPod Template:\n  Labels:           app=snapshot-controller\n  Service Account:  snapshot-controller\n  Containers:\n   snapshot-controller:\n    Image:      registry.k8s.io/sig-storage/snapshot-controller:v6.2.1\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --leader-election=true\n      --prevent-volume-mode-conversion=true\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nConditions:\n  Type           Status  Reason\n  ----           ------  ------\n  Available      True    MinimumReplicasAvailable\n  Progressing    True    NewReplicaSetAvailable\nOldReplicaSets:  <none>\nNewReplicaSet:   snapshot-controller-7789d87cf4 (2/2 replicas created)\nEvents:\n  Type    Reason             Age    From                   Message\n  ----    ------             ----   ----                   -------\n  Normal  ScalingReplicaSet  9m19s  deployment-controller  Scaled up replica set snapshot-controller-7789d87cf4 to 2\n"
 Feb  7 14:19:11.504: INFO: Running '/opt/homebrew/bin/kubectl --server=https://127.0.0.1:56677 --kubeconfig=/Users/raunakshah/.kube/config --namespace=default describe sts csi-hostpathplugin'
 Feb  7 14:19:11.578: INFO: stderr: ""
 Feb  7 14:19:11.578: INFO: stdout: "Name:               csi-hostpathplugin\nNamespace:          default\nCreationTimestamp:  Tue, 07 Feb 2023 14:10:35 +0530\nSelector:           app.kubernetes.io/component=plugin,app.kubernetes.io/instance=hostpath.csi.k8s.io,app.kubernetes.io/name=csi-hostpathplugin,app.kubernetes.io/part-of=csi-driver-host-path\nLabels:             app.kubernetes.io/component=plugin\n                    app.kubernetes.io/instance=hostpath.csi.k8s.io\n                    app.kubernetes.io/name=csi-hostpathplugin\n                    app.kubernetes.io/part-of=csi-driver-host-path\nAnnotations:        <none>\nReplicas:           1 desired | 1 total\nUpdate Strategy:    RollingUpdate\n  Partition:        0\nPods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:           app.kubernetes.io/component=plugin\n                    app.kubernetes.io/instance=hostpath.csi.k8s.io\n                    app.kubernetes.io/name=csi-hostpathplugin\n                    app.kubernetes.io/part-of=csi-driver-host-path\n  Service Account:  csi-hostpathplugin-sa\n  Containers:\n   hostpath:\n    Image:      gcr.io/k8s-staging-sig-storage/hostpathplugin:canary\n    Port:       9898/TCP\n    Host Port:  0/TCP\n    Args:\n      --drivername=hostpath.csi.k8s.io\n      --v=5\n      --endpoint=$(CSI_ENDPOINT)\n      --nodeid=$(KUBE_NODE_NAME)\n    Liveness:  http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5\n    Environment:\n      CSI_ENDPOINT:    unix:///csi/csi.sock\n      KUBE_NODE_NAME:   (v1:spec.nodeName)\n    Mounts:\n      /csi from socket-dir (rw)\n      /csi-data-dir from csi-data-dir (rw)\n      /dev from dev-dir (rw)\n      /var/lib/kubelet/plugins from plugins-dir (rw)\n      /var/lib/kubelet/pods from mountpoint-dir (rw)\n   csi-external-health-monitor-controller:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-external-health-monitor-controller:canary\n    Port:       <none>\n    
Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=$(ADDRESS)\n      --leader-election\n    Environment:\n      ADDRESS:  /csi/csi.sock\n    Mounts:\n      /csi from socket-dir (rw)\n   node-driver-registrar:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-node-driver-registrar:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=/csi/csi.sock\n      --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock\n    Environment:\n      KUBE_NODE_NAME:   (v1:spec.nodeName)\n    Mounts:\n      /csi from socket-dir (rw)\n      /csi-data-dir from csi-data-dir (rw)\n      /registration from registration-dir (rw)\n   liveness-probe:\n    Image:      gcr.io/k8s-staging-sig-storage/livenessprobe:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --csi-address=/csi/csi.sock\n      --health-port=9898\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-attacher:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-attacher:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      --v=5\n      --csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-provisioner:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-provisioner:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      --csi-address=/csi/csi.sock\n      --feature-gates=Topology=true\n      --prevent-volume-mode-conversion=true\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-resizer:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-resizer:canary\n    Port:       <none>\n    Host Port:  <none>\n    Args:\n      -v=5\n      -csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n   csi-snapshotter:\n    Image:      gcr.io/k8s-staging-sig-storage/csi-snapshotter:canary\n    Port:       <none>\n    Host Port:  <none>\n    
Args:\n      -v=5\n      --csi-address=/csi/csi.sock\n    Environment:  <none>\n    Mounts:\n      /csi from socket-dir (rw)\n  Volumes:\n   socket-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins/csi-hostpath\n    HostPathType:  DirectoryOrCreate\n   mountpoint-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/pods\n    HostPathType:  DirectoryOrCreate\n   registration-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins_registry\n    HostPathType:  Directory\n   plugins-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/kubelet/plugins\n    HostPathType:  Directory\n   csi-data-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /var/lib/csi-hostpath-data/\n    HostPathType:  DirectoryOrCreate\n   dev-dir:\n    Type:          HostPath (bare host directory volume)\n    Path:          /dev\n    HostPathType:  Directory\nVolume Claims:     <none>\nEvents:\n  Type    Reason            Age    From                    Message\n  ----    ------            ----   ----                    -------\n  Normal  SuccessfulCreate  8m36s  statefulset-controller  create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\n"
 STEP: Creating CSI Hostpath driver Storage Class @ 02/07/23 14:19:11.579
 STEP: Creating VolumeSnapshotClass @ 02/07/23 14:19:11.588
 Feb  7 14:19:11.601: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-8klml] to have phase Bound
 Feb  7 14:19:11.607: INFO: PersistentVolumeClaim pvc-8klml found but phase is Pending instead of Bound.
 Feb  7 14:19:13.613: INFO: PersistentVolumeClaim pvc-8klml found and phase=Bound (2.011433416s)
 Feb  7 14:19:13.629: INFO: Waiting up to 1m0s for VolumeSnapshot volumesnapshot-hxn4n to become ready
 Feb  7 14:19:13.632: INFO: VolumeSnapshot volumesnapshot-hxn4n found but is not ready.
 Feb  7 14:19:14.638: INFO: VolumeSnapshot volumesnapshot-hxn4n found and is ready
 Feb  7 14:19:14.638: INFO: WaitUntil finished successfully after 1.009092084s
 Feb  7 14:19:14.657: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-d6n7g] to have phase Bound
 Feb  7 14:19:14.661: INFO: PersistentVolumeClaim pvc-d6n7g found but phase is Pending instead of Bound.
 Feb  7 14:19:16.666: INFO: PersistentVolumeClaim pvc-d6n7g found and phase=Bound (2.008371708s)
 STEP: Deleting VolumeSnapshotClass @ 02/07/23 14:19:16.67
 STEP: Deleting CSI Hostpath driver Storage Class @ 02/07/23 14:19:16.674
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-74qz6 @ 02/07/23 14:19:16.678
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-8081/pvc-f68mj @ 02/07/23 14:19:16.68
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-hcdhs @ 02/07/23 14:19:16.683
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-2540/pvc-jlndp @ 02/07/23 14:19:16.685
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-4728/pvc-8klml @ 02/07/23 14:19:16.687
 Feb  7 14:19:16.689: INFO: Deleting PersistentVolumeClaim "pvc-8klml"
 Feb  7 14:19:16.695: INFO: Waiting up to 2m0s for PersistentVolume pvc-87c1af56-d979-4c10-91b5-3f93739e1fce to get deleted
 Feb  7 14:19:16.698: INFO: PersistentVolume pvc-87c1af56-d979-4c10-91b5-3f93739e1fce found and phase=Bound (2.570417ms)
 Feb  7 14:19:18.702: INFO: PersistentVolume pvc-87c1af56-d979-4c10-91b5-3f93739e1fce was removed
 STEP: Deleting PersistentVolumeClaim pvcs-from-volume-snapshots-4728/pvc-d6n7g @ 02/07/23 14:19:18.702
 Feb  7 14:19:18.707: INFO: Deleting PersistentVolumeClaim "pvc-d6n7g"
 Feb  7 14:19:18.711: INFO: Waiting up to 2m0s for PersistentVolume pvc-77e785ea-1e92-403b-a23b-281797941ffa to get deleted
 Feb  7 14:19:18.714: INFO: PersistentVolume pvc-77e785ea-1e92-403b-a23b-281797941ffa found and phase=Bound (3.048041ms)
 Feb  7 14:19:20.718: INFO: PersistentVolume pvc-77e785ea-1e92-403b-a23b-281797941ffa was removed
 STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-8081/volumesnapshot-csffb @ 02/07/23 14:19:20.718
 STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-2540/volumesnapshot-xfj4b @ 02/07/23 14:19:20.721
 STEP: Deleting VolumeSnapshot pvcs-from-volume-snapshots-4728/volumesnapshot-hxn4n @ 02/07/23 14:19:20.723
 STEP: deleting the snapshot @ 02/07/23 14:19:20.725
 STEP: checking the Snapshot has been deleted @ 02/07/23 14:19:20.731
 Feb  7 14:19:20.731: INFO: Waiting up to 5m0s for volumesnapshots volumesnapshot-hxn4n to be deleted
 Feb  7 14:19:20.734: INFO: volumesnapshots volumesnapshot-hxn4n has been found in namespace pvcs-from-volume-snapshots-4728 and is not deleted
 Feb  7 14:19:22.739: INFO: volumesnapshots volumesnapshot-hxn4n is not found in namespace pvcs-from-volume-snapshots-4728 and has been deleted
 Feb  7 14:19:22.739: INFO: WaitUntil finished successfully after 2.007310375s
 STEP: Wait for VolumeSnapshotContent snapcontent-e10eae42-de50-45ab-9b9b-f0f076aa4224 to be deleted @ 02/07/23 14:19:22.739
 Feb  7 14:19:22.739: INFO: Waiting up to 5m0s for volumesnapshotcontents snapcontent-e10eae42-de50-45ab-9b9b-f0f076aa4224 to be deleted
 Feb  7 14:19:22.741: INFO: volumesnapshotcontents snapcontent-e10eae42-de50-45ab-9b9b-f0f076aa4224 is not found and has been deleted
 Feb  7 14:19:22.741: INFO: WaitUntil finished successfully after 1.867ms
 STEP: Destroying namespace "pvcs-from-volume-snapshots-4728" for this suite. @ 02/07/23 14:19:22.741
• [11.387 seconds]
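For context, the behavior these tests exercise (per KEP 3141, linked above): when `--prevent-volume-mode-conversion=true` is set on the snapshot-controller and the external-provisioner, provisioning a PVC from a snapshot with a `volumeMode` different from the source volume's is rejected unless the backing `VolumeSnapshotContent` carries an allow annotation. A minimal sketch, with hypothetical object names (the annotation and the `sourceVolumeMode` field come from the KEP):

```yaml
# Sketch only: field/annotation names per KEP 3141; object names are made up.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-example
  annotations:
    # Must be added by an authorized user to permit restoring this
    # snapshot into a PVC with a different volume mode.
    snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"
spec:
  driver: hostpath.csi.k8s.io
  deletionPolicy: Delete
  # Recorded by the snapshot-controller from the source volume.
  sourceVolumeMode: Filesystem
  source:
    volumeHandle: example-volume-handle
  volumeSnapshotRef:
    name: volumesnapshot-example
    namespace: default
---
# Restore with an altered volume mode; with prevent-volume-mode-conversion
# enabled this succeeds only because of the annotation above. Without it,
# the external-provisioner emits a failure event on the PVC instead.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore-block
  namespace: default
spec:
  accessModes: [ReadWriteOnce]
  volumeMode: Block
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: volumesnapshot-example
  resources:
    requests:
      storage: 1Gi
```

The "source volume mode is nil" case in the second test above covers snapshots taken before the feature was enabled, where `sourceVolumeMode` is unset and the check is skipped.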

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Additional e2e tests for volume mode conversion 

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 7, 2023
@RaunakShah

/test pull-kubernetes-csi-external-provisioner-canary

@@ -145,4 +147,117 @@ var _ = ginkgo.Describe("provision volumes with different volume modes from volu
framework.Failf("expected failure message [%s] not parsed in event list for PVC %s/%s", volumeModeConversionFailureMessage, pvc2.Namespace, pvc2.Name)
}
})


Can you switch the order of PVC and VolumeSnapshot deletion and have VolumeSnapshot deleted first? Some storage systems can't delete volumes when there are snapshots on them.


You can submit a followup PR to make this change as it is not introduced by this PR.

@xing-yang

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 10, 2023
@xing-yang

Can you update the KEP to add links to these new tests?

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RaunakShah, xing-yang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
