Adding support for block volumes #380
Conversation
Should we also revert the effect of #362, which added refusal of all block volume requests?
What about deployment examples, should we show provisioning of a block volume?
I added an example app and a PVC as part of this change. Do you think we need more?
Yes, I guess we should mention this in the README; I will do that.
Updated the README with block volume usage. @okartau, can you check if it is clear enough?
README.md (outdated)
| #### Note about raw block volumes
| Applications can use volumes provisioned by PMEM-CSI as [raw block devices](https://kubernetes.io/blog/2019/03/07/raw-block-volume-support-to-beta/). For provisioning a PMEM volume that supposed to use as raw block device, one has to create a `PersistentVolumeClaim` with `volumeMode: Block`. Check with provided example [PVC](deploy/common/pmem-pvc-block-volume.yaml) and [application](deploy/common/pmem-app-block-volume.yaml) for usage reference.
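For reference, a minimal raw block PVC and a pod consuming it could look roughly like the sketch below. All names, sizes, and the storage class are assumptions for illustration; the actual manifests are the linked deploy/common/pmem-pvc-block-volume.yaml and deploy/common/pmem-app-block-volume.yaml.

```yaml
# Sketch of a raw block PVC (assumed names and values).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-csi-pvc-block        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block               # request a raw block device, no file system
  resources:
    requests:
      storage: 4Gi
  storageClassName: pmem-csi-sc   # hypothetical storage class name
---
# Sketch of an application pod mapping the volume as a raw block device.
apiVersion: v1
kind: Pod
metadata:
  name: pmem-csi-app-block        # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeDevices:              # volumeDevices (not volumeMounts) for block mode
        - name: data
          devicePath: /dev/xpmem0 # device node as seen inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pmem-csi-pvc-block
```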
instead of:
For provisioning a PMEM volume that supposed to use as raw block device
how about simpler:
For provisioning a PMEM volume as a raw block device
similarly:
instead of:
Check with provided example
use:
See example
We can also wait a few more days until @pohly gets back; he can also take a look at these sentences.
I agree with @okartau's proposed language changes.
Those changes are applied.
Olev Kartau <notifications@github.com> writes:
> Do we already have tests (e.g. from Kubernetes or csi-test) which cover block volumes? While working with #365 I noticed that if we don't respond NOT_SUPPORTED for block volume mode and we enable the VolumeMode suite, a few more tests are executed. I guess these may be about block volumes.

Can we pull that into this PR?

Now I pulled the commit from #365 that enables the block volume test suite.
@pohly, can you give review comments?
Force-pushed from 50a59da to 43cedd4.
pohly left a comment:
Needs to be rebased...
Force-pushed from 43cedd4 to 4ea6ae7.
Looks like tests are failing? The second one is a timeout.
The log shows the tester tries to run a local kubectl; it seems some commits from #365 that work around local kubectl were left out?
The other failure was also a timeout, while creating a Pod:
In the driver logs we don't have a good explanation of why the pod is not created.
I added a debug trace in pv_util.go CreateSecPodWithNodeName(), printing all args to MakeSecPod, and it appears this is called twice with almost the same args (as per the calling code in volumemode.go).
I think the error is the one from kubelet, emitted close to the pod creation failure point:

kubelet pmem-csi-clear-govm-worker2 FailedMapVolume: MapVolume.SetUp failed for volume "pvc-150511aa-e10b-11e9-b34d-0242ac110002" : volumeattachments.storage.k8s.io "csi-2bc0683aaf417707b935a60597d8945cce77abb97a08d8d816addd01b613e70b" is forbidden: User "system:node:pmem-csi-clear-govm-worker2" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found between node "pmem-csi-clear-govm-worker2" and this object.

This hints that it is a rights/permissions issue, and a work-around was proposed via adding some rights. It is also mentioned that this gets fixed by migration to Kubernetes 1.15 (our CI still runs 1.14, as do my trials on a stand-alone cluster). I will re-try in another stand-alone cluster running 1.15 to verify this "fixed in 1.15" claim.
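For illustration only, the kind of rights addition hinted at above could be an RBAC grant that gives nodes read access to volumeattachments, the resource the error reports as forbidden. This is a sketch with made-up object names, not necessarily the work-around actually referenced:

```yaml
# Sketch: allow all nodes to read volumeattachments (object names are made up).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-volumeattachments-reader
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-volumeattachments-reader
subjects:
  - kind: Group
    name: system:nodes            # all kubelets
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-volumeattachments-reader
  apiGroup: rbac.authorization.k8s.io
```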
Confirmed: block-mode volume creation and publish works in a 1.16 cluster (govm-ubuntu cluster with 4 VMs).
Force-pushed from efc6409 to 34a2515.
Force-pushed from 8bea046 to 5ca43e4.
@avalluri: what's the plan regarding Kubernetes 1.14? This isn't part of the pre-merge testing, but we will end up testing on it once it's merged.
| // testsuites.InitSnapshottableTestSuite,
| // testsuites.InitSubPathTestSuite,
| // testsuites.InitVolumeIOTestSuite,
| // testsuites.InitVolumeModeTestSuite,
Hmm, did the tests actually run for you in block mode? I only get:
PMEM Volumes
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/storage/csi_volumes.go:57
[Driver: pmem-csi]
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/storage/csi_volumes.go:102
[Testpattern: Dynamic PV (block volmode)] volumeMode
/nvme/gopath/src/github.com/intel/pmem-csi/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
should not mount / map unused volumes in a pod [BeforeEach]
/nvme/gopath/src/github.com/intel/pmem-csi/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:324
Driver pmem-csi doesn't support -- skipping
/nvme/gopath/src/github.com/intel/pmem-csi/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:151
That's because the default file system (i.e. the empty string) is no longer listed as supported:
1f61a3a
After adding that, it worked for me, also on Kubernetes 1.14, i.e. I couldn't reproduce the problem that @okartau had.
> Hmm, did the tests actually run for you in block mode? I only get:
You are right. I realized this when working on the ephemeral volumes branch and fixed it here: c7f560c
Please cherry-pick c9bf3a4 to fix it in this PR.
pohly left a comment:
@avalluri: please investigate why it failed on 1.13: #438 (comment)
Also, cherry-pick c9bf3a4 to ensure that that failing test actually runs.
| |--------------------+--------------------------------+--------------------------|
| | 1.13 | CSINodeInfo, CSIDriverRegistry | unsupported <sup>1</sup> |
| |--------------------|--------------------------------|--------------------------|
| | 1.13 | CSINodeInfo, CSIDriverRegistry,<br>CSIBlockVolume</br> | unsupported <sup>1</sup> |
Why <br> for CSIBlockVolume but not the others?
The column got too wide after adding one more feature gate, so I made it a multi-line column.
@pohly, is it good enough to merge?
pohly left a comment:
Looks good to me. Let's merge once we have test results.
In commit 2e9fde4, we dropped support for v1.13, but tests were still pointing to storage class files in deploy/kubernetes-1.13. This change fixes that by pointing them to deploy/common.
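For context, the kind of storage class file being pointed to might look like the following sketch; only the provisioner name comes from the driver, the rest are assumed values:

```yaml
# Sketch of a PMEM-CSI storage class (assumed contents of a deploy/common file).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pmem-csi-sc               # hypothetical name
provisioner: pmem-csi.intel.com   # PMEM-CSI driver name
volumeBindingMode: Immediate
```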
Code changes to support provisioning of raw block volumes. For block volumes, the driver need not create a file system in the NodePublishVolume stage. With this change we only support fsdax raw block volumes; we should find a way to support 'dax' raw volumes, which are character devices. Signed-off-by: Amarnath Valluri <amarnath.valluri@intel.com>
Now that we support block volumes, we no longer return "unsupported" for them. Hence this reverts commit 3fcd755.
Several (all?) of the volumeMode tests run with the default filesystem
(i.e. the empty string). If we don't list that as supported, raw block
tests are not executed:
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
PMEM Volumes
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/storage/csi_volumes.go:57
[Driver: pmem-csi]
/nvme/gopath/src/github.com/intel/pmem-csi/test/e2e/storage/csi_volumes.go:102
[Testpattern: Dynamic PV (block volmode)] volumes
/nvme/gopath/src/github.com/intel/pmem-csi/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
should store data [BeforeEach]
/nvme/gopath/src/github.com/intel/pmem-csi/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:146
Driver pmem-csi doesn't support -- skipping
/nvme/gopath/src/github.com/intel/pmem-csi/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:151
Signed-off-by: Amarnath Valluri <amarnath.valluri@intel.com>