added permission to get nodes for rbd #4302
Conversation
Thanks for the PR! @iPraveenParihar, after the new changes to obtain node labels, if the pods are running in a k8s cluster we'll need RBAC to get nodes regardless of topology or read affinity, right?
@nemcikjan Can you remove the surrounding conditions here too?
@Rakshith-R I can, but I also looked into the cephfs driver implementation and got the impression that it's not necessary for the cephfs RBAC. It's possible that I missed something, though.
Yes, we're only checking whether cephcsi is running inside k8s before trying to fetch the node labels now.
Yes, we can remove the read affinity condition.
Just a note: I also had to update the provisioner role in my cluster. It apparently also needs the ability to get nodes. The exact same resolution would apply to it.
@XtremeOwnageDotCom I reviewed the implementation, and it doesn't seem that the rbd provisioner needs that permission at all, based on the condition here. Maybe @Rakshith-R or @iPraveenParihar could confirm.
In RBD, we need to add the NodeServer condition as done in CephFS: ceph-csi/internal/rbd/driver/driver.go, lines 128 to 133 in 2309168.
CephFS: ceph-csi/internal/cephfs/driver.go, lines 112 to 118 in 2309168.
Looks good to me.
Can you please squash both commits into one?
The last commit came after I pressed enter for this review. Thanks.
@Rakshith-R should I then squash all the commits?
Just the first two, since they are so similar.
Thanks
Pull request has been modified.
Thanks !
@Mergifyio rebase
added permission to get nodes for rbd and cephfs nodeplugin daemonset Signed-off-by: Jan Nemcik <jan.nemcik@solargis.com>
node labels are fetched only if controller is running in k8s and is nodeserver Signed-off-by: Jan Nemcik <jan.nemcik@solargis.com>
✅ Branch has been successfully rebased
@Mergifyio queue
✅ The pull request has been merged automatically at 3443546
/test ci/centos/k8s-e2e-external-storage/1.26
/test ci/centos/mini-e2e-helm/k8s-1.26
/test ci/centos/upgrade-tests-cephfs
/test ci/centos/mini-e2e/k8s-1.26
/test ci/centos/k8s-e2e-external-storage/1.28
/test ci/centos/k8s-e2e-external-storage/1.27
/test ci/centos/upgrade-tests-rbd
/test ci/centos/mini-e2e-helm/k8s-1.28
/test ci/centos/mini-e2e-helm/k8s-1.27
/test ci/centos/mini-e2e/k8s-1.28
/test ci/centos/mini-e2e/k8s-1.27
Describe what this PR does
Based on the implementation in the rbd driver, the nodeplugin should have permission to get nodes by default, in order to retrieve the node labels.
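As a sketch, the nodeplugin's ClusterRole would gain a rule like the following. The resource names and metadata here are illustrative, not the exact manifests changed in this PR:

```yaml
# Hypothetical ClusterRole fragment granting the nodeplugin read access
# to Node objects, so it can retrieve node labels.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
```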
Is there anything that requires special attention
Future concerns
List items that are not part of the PR and do not impact its functionality, but are work items that can be taken up subsequently.
Checklist: guidelines in the developer guide.
Show available bot commands
These commands are normally not required, but in case of issues, leave any of
the following bot commands in an otherwise empty comment in this PR:
/retest ci/centos/<job-name>: retest the <job-name> after an unrelated failure (please report the failure too!)