cephfs CSI plugin #30
Conversation
Makefile
Outdated
	docker build -t $(CEPHFS_IMAGE_NAME):$(CEPHFS_IMAGE_VERSION) deploy/cephfs/docker

push-container: rbdplugin-container
	docker push $(RBD_IMAGE_NAME):$(RBD_IMAGE_VERSION)
Add a cephfs container push here and I'll test it.
verbs: ["get", "list", "update"] | ||
- apiGroups: [""] | ||
resources: ["secrets"] | ||
verbs: ["get", "list"] |
Why do we need the secret?
Getting the secret is only required for the provisioner.
resources: ["namespaces"] | ||
verbs: ["get", "list"] | ||
- apiGroups: [""] | ||
resources: ["persistentvolumes"] |
ditto
- apiGroups: [""] | ||
resources: ["persistentvolumes"] | ||
verbs: ["get", "list", "watch", "update"] | ||
- apiGroups: ["storage.k8s.io"] |
ditto
resources: ["persistentvolumes"] | ||
verbs: ["get", "list", "watch", "update"] | ||
- apiGroups: ["storage.k8s.io"] | ||
resources: ["volumeattachments"] |
same
          path: /var/lib/kubelet/plugins/cephfsplugin
          type: DirectoryOrCreate
      - name: host-dev
        hostPath:
Why is /dev needed? (For fuse?)
- apiGroups: [""] | ||
resources: ["persistentvolumeclaims"] | ||
verbs: ["get", "list", "watch", "update"] | ||
- apiGroups: ["storage.k8s.io"] |
You'll need the secret (eventually).
deploy/cephfs/kubernetes/pvc.yaml
Outdated
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
ReadWriteMany
pkg/cephfs/cephfs.go
Outdated
	})

	fs.driver.AddVolumeCapabilityAccessModes([]csi.VolumeCapability_AccessMode_Mode{
		csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
multi node writer
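(For context, a hedged sketch of what that change could look like in the existing capability setup; the enum names follow the CSI Go bindings used by this PR:)

```go
// Sketch only: advertise multi-node access so a CephFS volume can be
// mounted read-write from several nodes at once.
fs.driver.AddVolumeCapabilityAccessModes([]csi.VolumeCapability_AccessMode_Mode{
	csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
	csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
})
```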
pkg/cephfs/cephfs.go
Outdated
	}

	fs.driver.AddControllerServiceCapabilities([]csi.ControllerServiceCapability_RPC_Type{
		csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
You don't need controller publish/unpublish for CephFS.
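(A minimal sketch of the suggested capability list - since CephFS has no per-node attach/detach step, only create/delete is advertised; enum names as in the CSI Go bindings:)

```go
// Sketch only: no controller publish/unpublish capability for CephFS,
// only volume create/delete.
fs.driver.AddControllerServiceCapabilities([]csi.ControllerServiceCapability_RPC_Type{
	csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
})
```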
pkg/cephfs/controllerserver.go
Outdated
	return cs.Driver.ValidateControllerServiceRequest(v, csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME)
}

func (cs *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
This needs rework; see the cephfs provisioner code here: https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs
pkg/cephfs/nodeserver.go
Outdated
	return nil
}

func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
The NodePublishVolume call should expect the Ceph monitors and a secret in order to run the ceph-fuse mount.
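Roughly along these lines - a hedged sketch assuming the monitors/user arrive as volume attributes and the key via the CSI 0.2.0 node-publish secrets; the attribute and secret key names ("monitors", "user", "key") are illustrative, not final:

```go
// Sketch: mount via ceph-fuse using mon addresses and a secret taken from
// the NodePublishVolume request (needs "os/exec" and grpc status/codes).
func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
	mons := req.GetVolumeAttributes()["monitors"] // illustrative attribute key
	user := req.GetVolumeAttributes()["user"]     // illustrative attribute key
	key := req.GetNodePublishSecrets()["key"]     // illustrative secret key

	// ceph-fuse can take monitors and credentials on the command line,
	// so no pre-populated /etc/ceph hostPath should be required.
	out, err := exec.Command("ceph-fuse", req.GetTargetPath(),
		"-m", mons,
		"-n", "client."+user,
		"--key="+key).CombinedOutput()
	if err != nil {
		return nil, status.Errorf(codes.Internal, "ceph-fuse mount failed: %v: %s", err, out)
	}
	return &csi.NodePublishVolumeResponse{}, nil
}
```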
…blish is not needed
…access to fs

Uses a slightly modified version of https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/cephfs/cephfs_provisioner/cephfs_provisioner.py

This should be rewritten properly in Go, but it works for now - for demonstration purposes.

TODO:
* readOnly is not taken into account
* controllerServer.DeleteVolume does nothing
@@ -4,5 +4,9 @@ metadata:
  name: cephfs
provisioner: cephfsplugin
parameters:
  provisionRoot: /cephfs
reclaimPolicy: Delete
  adminID: admin
With CSI spec 0.2.0 you do not have to keep the Ceph username/key in the StorageClass. Please refer to the updated CSI RBD driver StorageClass and also controller.go to see how to use the credentials, which are now passed inside the RPC call.
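In code, that roughly means CreateVolume can read the admin credentials from the request instead of from StorageClass parameters - a sketch against the 0.2.0 Go bindings; the secret key names are an assumption:

```go
// Sketch: with CSI 0.2.0 the secret referenced by the StorageClass
// (csiProvisionerSecretName) is delivered inside the CreateVolume RPC.
func (cs *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	secrets := req.GetControllerCreateSecrets()
	adminID, adminKey := secrets["adminID"], secrets["adminKey"] // assumed key names

	// ... create the CephFS volume using adminID/adminKey instead of
	// values baked into the StorageClass ...
	_, _ = adminID, adminKey
	return &csi.CreateVolumeResponse{}, nil
}
```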
Hey, I'm having some trouble testing the driver - could you please share some info on how you do the testing yourself, so we can have a common test environment? To speed things up a little, since minikube's kubelet probably hasn't moved to it yet. (Oops, sorry about the close & reopen - misclicked.)
@gman0 For testing I am using my local single-node k8s cluster and another single-node Ceph cluster. Both run as VMs on a bare-metal Linux box.
Sure, that would be cool, thanks a lot!
Ok, so I set up a VM with Kubernetes 1.10.0-beta.4 (since k8s 1.9.* doesn't support it). Now, when I deploy the driver and create the PersistentVolumeClaim, it seems to work:
But when I create a test pod that is supposed to mount this volume, this happens:
Could you please describe your testing environment in more detail? And how you managed to run the RBD driver, because I'm running out of ideas on how to get this one going.
@gman0 Here is my deployment yaml.
@@ -56,7 +56,7 @@ func (ids *DefaultIdentityServer) GetPluginCapabilities(ctx context.Context, req
		{
			Type: &csi.PluginCapability_Service_{
				Service: &csi.PluginCapability_Service{
					Type: csi.PluginCapability_Service_UNKNOWN,
you shouldn't change vendor code directly.
sure, I'll revert that
provisioner: csi-cephfsplugin
parameters:
  # The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
  mounter: fuse
Put the storage class on par with the existing cephfs storage class; don't set the mounter here.
if not here, what would be the correct place to store such configuration?
Probe cephfs-fuse first; if it doesn't exist, fall back to kernel cephfs.
But it can be assumed that both fuse and kernel client should be present anyway, since that's what the cephfsplugin Docker image contains, no? And one of our use-cases would be to override the defaults and use the kernel client instead, for performance reasons.
The probe idea tries one at a time until mounted. In this case, you can either hard-code the mounter order or base it on a CLI option. You can first try cephfs-fuse; if it works, you're done; if not, try kernel cephfs.
It's fine to have a probe if nothing is given, but we would like our users to be able to explicitly choose between ceph-fuse and the kernel client. The best option, I think, is to let them pass it (optionally) in the storage class.
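A possible way to reconcile the two views, sketched below: honor an explicit `mounter` parameter from the StorageClass when present, otherwise probe for ceph-fuse and fall back to the kernel client (the helper name and parameter values are illustrative):

```go
package cephfs

import (
	"fmt"
	"os/exec"
)

// selectMounter picks the mount mechanism: an explicit StorageClass
// setting wins; with no setting, probe for ceph-fuse, else use the
// kernel client.
func selectMounter(fromStorageClass string) (string, error) {
	switch fromStorageClass {
	case "fuse", "kernel":
		return fromStorageClass, nil // explicit user choice
	case "":
		if _, err := exec.LookPath("ceph-fuse"); err == nil {
			return "fuse", nil
		}
		return "kernel", nil // falls back to mount -t ceph
	default:
		return "", fmt.Errorf("unknown mounter %q", fromStorageClass)
	}
}
```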
  rootPath: /
  user: admin

  csiProvisionerSecretName: csi-cephfs-secret
use the secret to create cephfs volumes
added Pool and ProvisionVolume fields; Mounter is now optional
volumes have "csi-cephfs-dyn-" prefix when they are provisioned dynamically (provisionVolume=true) and have "csi-cephfs-sta-" prefix when they are provisioned statically by the user (provisionVolume=false)
…mented DeleteVolume
This reverts commit 70f954d.
This looks ok now. Can we get it merged and try to iterate with smaller PRs after? It got massive and seems kind of stuck.
@rochaporto @gman0 Please address the CI-discovered failures and we will merge it.
Thanks a lot!
cephfs CSI plugin
ocs: add initial ocs/README.md
…-to-release-4.9 [release-4.9] syncing devel branch to get latest updates
This is a Work-In-Progress draft of the cephfs CSI plugin. Internally, it uses `ceph-fuse` for mounting and `setfattr` for setting quotas on volumes.

I've tested it with CSI 0.1.0 (since the version of Kubernetes installed on my system only supports 0.1.0 - no problem to bump the version later) and while it does work, at least for now the `cephfsplugin` container expects to have a `hostPath` volume with a populated `/etc/ceph` containing the config and the keyring - though `ceph-fuse` should be easy enough to modify so that the credentials can be passed through command-line arguments.

Also, the merge for Kubernetes' `KeyMutex.TryLockKey` is still not done, so this code contains the same bug as in #29 (see `tryLock` in pkg/cephfs/util.go).

I'm guessing the whole codebase could do with some refactoring, as `cephfsplugin` and `rbdplugin` both share some parts of the code.

Please review / post feedback.