
VolumeSnapshotClass not in "snapshot.storage.k8s.io/v1beta1" #537

Closed
PSjoe opened this issue Jul 30, 2020 · 12 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments


PSjoe commented Jul 30, 2020

/kind bug

What happened?
Following the directions for CSI volume snapshots: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot

Step 1 returns with:
error: unable to recognize "specs/classes/snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1beta1"

What you expected to happen?
VolumeSnapshotClass should be created.

How to reproduce it (as minimally and precisely as possible)?
Run through example deployment in aws-ebs-csi-driver/examples/kubernetes/snapshot/ on a newly instantiated EKS cluster running 1.17.
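A quick way to confirm whether the cluster actually serves the snapshot API group (a minimal sketch; it assumes `kubectl` is configured against the affected cluster):

```shell
# List which snapshot.storage.k8s.io resources the API server knows about.
# On a cluster hitting this error, the output is empty because the CRDs are missing.
check_snapshot_api() {
  kubectl api-resources --api-group=snapshot.storage.k8s.io -o name
}
# usage: check_snapshot_api
# once the CRDs are installed, expect volumesnapshotclasses.snapshot.storage.k8s.io etc.
```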

Environment

  • Kubernetes version (use kubectl version):
    Server Version: GitVersion:"v1.17.6-eks-4e7f64", Client Version: GitVersion:"v1.17.7-eks-bffbac"
  • Driver version:
    kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
    Also attempted with the 0.5.0 tag with the same results.

CSI driver appears to be running:

kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-tkfvb                        1/1     Running   0          42h
kube-system   coredns-55c5fcd78f-kj828              1/1     Running   0          43h
kube-system   coredns-55c5fcd78f-mlqms              1/1     Running   0          43h
kube-system   ebs-csi-controller-78bc69cb98-7bfcm   4/4     Running   0          3s
kube-system   ebs-csi-controller-78bc69cb98-w9fc9   4/4     Running   0          3s
kube-system   ebs-csi-node-vg4j5                    3/3     Running   0          3s
kube-system   kube-proxy-cx66g                      1/1     Running   0          42h

And according to https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/, the VolumeSnapshotDataSource feature gate should be enabled by default in 1.17.

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 30, 2020
@kingli-crypto

I am facing the same issue. I believe this is because VolumeSnapshotDataSource is not enabled in EKS 1.17.
Maybe a maintainer can kindly comment on that.


Amos-85 commented Jul 31, 2020

Hi @PSjoe & @kingli-crypto ,

I ran into the same issue today.
It looks like a step is missing when installing the snapshot controller: it does not install the VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass CRDs.

Those CRDs are not part of the Kubernetes core API, according to the documentation:

https://kubernetes.io/docs/concepts/storage/volume-snapshots

The workaround is to install them manually:

https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml

https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml

https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
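The three manual applies above can be scripted in one go (a sketch; the file names match the URLs above, and `kubectl` access to the cluster is assumed):

```shell
# Apply the three snapshot CRDs from the external-snapshotter repo.
CRD_BASE="https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd"

apply_snapshot_crds() {
  for crd in \
    snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
    snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
    snapshot.storage.k8s.io_volumesnapshots.yaml; do
    kubectl apply -f "$CRD_BASE/$crd"
  done
}
# usage: apply_snapshot_crds
```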

Don't forget to make sure all of these policies are attached to the EC2 instance profile:

https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.5.0/docs/example-iam-policy.json


PSjoe commented Aug 14, 2020

Thanks for the tip! That got me closer. I no longer get any error messages when stepping through the example. However, it seems my snapshot never gets created. Are you able to get Step 5:
kubectl apply -f specs/snapshot/
to work? It doesn't error. It just never gets to Ready To Use: true.

I've also tried the example of importing a static snapshot (specs/snapshot-import/). It never creates a new volume. No errors; it just never gets there. I feel like there's something I'm missing that's supposed to kick these off.

Also, yes, I have that IAM policy attached to the role that my K8s nodes are using. So it shouldn't be an IAM permissions thing.
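For a snapshot that never becomes ready, a small polling helper can make the wait (and a timeout) explicit. This is a sketch; the snapshot name in the usage line is the one from the example specs, so adjust it to yours:

```shell
# Poll a VolumeSnapshot until .status.readyToUse is true; give up after ~5 minutes.
wait_for_snapshot() {
  name="$1"
  tries=0
  until [ "$(kubectl get volumesnapshot "$name" -o jsonpath='{.status.readyToUse}' 2>/dev/null)" = "true" ]; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
      echo "timed out waiting for $name" >&2
      return 1
    fi
    sleep 5
  done
  echo "snapshot $name is ready"
}
# usage: wait_for_snapshot ebs-volume-snapshot
```

If it stays false, `kubectl describe volumesnapshot <name>` and the snapshotter container's logs are the next places to look.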


PSjoe commented Aug 14, 2020

Never mind. I've backed out everything and started over and it seems to be working now.


Amos-85 commented Aug 14, 2020

@PSjoe ,
Cheers!


PSjoe commented Aug 20, 2020

Just an FYI in case this hits anyone else:

I found I had to install the CSI driver using the Helm chart:

helm install aws-ebs-csi-driver --namespace kube-system \
  --set enableVolumeScheduling=true \
  --set enableVolumeResizing=true \
  --set enableVolumeSnapshot=true \
  https://github.com/kubernetes-sigs/aws-ebs-csi-driver/releases/download/v0.6.0/helm-chart.tgz

Using the kustomize example from both GitHub and the AWS docs page, kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master", doesn't include the snapshot components, it seems. You can try to create snapshots, but they never actually materialize.

For the CRDs, you have to download and apply the following:

wget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
wget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
wget https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f .

After that, EBS snapshotting works pretty much as described.
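One way to confirm the snapshot sidecar actually landed after the Helm install is to list the controller pod's containers. This is a sketch: the `app=ebs-csi-controller` label selector is an assumption about the chart's labels, so adjust it if your deployment labels differ:

```shell
# List container names in the EBS CSI controller pod(s).
# Expect "csi-snapshotter" among them once enableVolumeSnapshot=true has taken effect.
controller_containers() {
  kubectl get pods -n kube-system -l app=ebs-csi-controller \
    -o jsonpath='{.items[*].spec.containers[*].name}'
}
# usage: controller_containers | tr ' ' '\n' | grep snapshotter
```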

@nothingofuse

It seems these CRDs are no longer at that location. Does anyone know the current solution to this issue?


Amos-85 commented Sep 9, 2020

@nothingofuse ,
You can follow the external-snapshotter project and install whichever CRD version you need.


PSjoe commented Sep 14, 2020

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 13, 2020

ayberk commented Dec 18, 2020

Closing this as we've updated the README.

/close

@k8s-ci-robot

@ayberk: Closing this issue.

In response to this:

Closing this as we've updated the README.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
