Conversation

@rashi-jain1

No description provided.

@datamattsson
Collaborator

Could you elaborate on how you've determined that setting requests and limits to "0" isn't working?

@rashi-jain1
Author

We tested using the following in the NFS StorageClass to disable resource enforcement:

nfsResourceLimitsCpuM: "0"
nfsResourceLimitsMemoryMi: "0"

The driver interpreted "0" correctly for limits, but still applied the default requests (500m CPU, 512Mi memory).
This produces an invalid pod spec because Kubernetes requires requests ≤ limits, but the driver generated:

requests.cpu = 500m > limits.cpu = 0
requests.memory = 512Mi > limits.memory = 0

This mismatch caused the NFS deployment to fail and PVC provisioning to fail.
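For reference, the container spec the driver generated would have carried a resources stanza along these lines. This is an illustrative reconstruction from the error message, not taken from the driver source:

```yaml
# Illustrative only: the resources stanza Kubernetes rejects when the
# driver applies its default requests against explicit "0" limits.
resources:
  requests:
    cpu: 500m       # default request still applied
    memory: 512Mi   # default request still applied
  limits:
    cpu: "0"        # from nfsResourceLimitsCpuM
    memory: "0"     # from nfsResourceLimitsMemoryMi
```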

@rashi-jain1
Author

When specifying these, we see this error during PVC provisioning:

Warning  ProvisionStorage      21s (x6 over 77s)  csi.hpe.com  failed to create nfs deployment hpe-nfs-5b44dc50-da60-4fe3-94e3-9c79e60651a5, err Deployment.apps "hpe-nfs-5b44dc50-da60-4fe3-94e3-9c79e60651a5" is invalid: [spec.template.spec.containers[0].resources.requests: Invalid value: "500m": must be less than or equal to cpu limit of 0, spec.template.spec.containers[0].resources.requests: Invalid value: "512Mi": must be less than or equal to memory limit of 0]

Warning  ProvisioningFailed    21s (x6 over 77s)  csi.hpe.com_rhel9-worker1_a72857dc-2005-4cf1-8564-2ca6b0ab4ee7  failed to provision volume with StorageClass "hpe-arcus-nfs-1": rpc error: code = Internal desc = Failed to create NFS provisioned volume pvc-5b44dc50-da60-4fe3-94e3-9c79e60651a5, err failed to create nfs deployment hpe-nfs-5b44dc50-da60-4fe3-94e3-9c79e60651a5, err Deployment.apps "hpe-nfs-5b44dc50-da60-4fe3-94e3-9c79e60651a5" is invalid: [spec.template.spec.containers[0].resources.requests: Invalid value: "500m": must be less than or equal to cpu limit of 0, spec.template.spec.containers[0].resources.requests: Invalid value: "512Mi": must be less than or equal to memory limit of 0], rollback status: success

@dileepds
Collaborator

Did we also test the code changes with the following values set to '0'? It should not fail even if we set all the values to '0':

nfsResourceRequestsCpuM
nfsResourceRequestsMemoryMi

@datamattsson
Collaborator

Invalid value: "500m": must be less than or equal to cpu limit of 0, spec.template.spec.containers[0].resources.requests: Invalid value: "512Mi": must be less than or equal to memory limit of 0]

This is a matter of semantics: you can't request "500m" when the limit is "0", and why would requests matter when there's no limit? A freshly deployed NFS server doesn't need a lot to boot.

Set both to "0" and you get the results you need, right?

Did we also test the code changes with the following values set to '0'? It should not fail even if we set all the values to '0'

The original branch works with all values set to "0" without these changes, and that is how it was tested. Requests were added after limits, and we never considered setting just requests without limits.

@rashi-jain1
Author

If all four parameters
nfsResourceRequestsCpuM
nfsResourceRequestsMemoryMi
nfsResourceLimitsCpuM
nfsResourceLimitsMemoryMi

are set to "0", the behavior is correct: no limits and no requests are applied.
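For illustration, a StorageClass that fully disables resource enforcement would set all four parameters. The parameter names come from this thread; the class name is a placeholder, and any other NFS parameters your deployment needs are omitted:

```yaml
# Sketch of a StorageClass with NFS server resource enforcement
# disabled. Only the four nfsResource* parameters discussed in this
# thread are shown; metadata.name is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nfs-no-resource-enforcement
provisioner: csi.hpe.com
parameters:
  nfsResourceRequestsCpuM: "0"
  nfsResourceRequestsMemoryMi: "0"
  nfsResourceLimitsCpuM: "0"
  nfsResourceLimitsMemoryMi: "0"
```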

The problem occurs only when the user specifies limits = "0" but does not provide explicit request values.
In that case, the default request values are applied (e.g., 500m CPU), which is invalid because requests must be ≤ limits.

So the fix we are implementing ensures that if only limits are set to "0", requests are also disabled automatically instead of the driver falling back to the defaults.
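The rule above can be sketched as follows. This is a hypothetical model of the selection logic, not the driver's actual code (the driver is written in Go and uses different names): values are millicores/MiB as ints, `None` means "parameter not set", and an explicit `0` means "disabled":

```python
# Hypothetical sketch: pick the (limit, request) pair to render into the
# NFS server pod spec, treating an explicit "0" limit as "unlimited" and
# never falling back to a default request that would exceed the limit.

def effective_resources(limit, request, default_request):
    """Return (limit, request) for the pod spec; None means "omit"."""
    if limit == 0:
        # Limits explicitly disabled: do not apply the default request,
        # otherwise requests > limits and the API server rejects the
        # Deployment. An explicit non-zero request is still honored.
        return None, (None if request in (None, 0) else request)
    if request is None:
        # No explicit request: fall back to the driver default
        # (e.g. 500m CPU / 512Mi memory).
        request = default_request
    return limit, (None if request == 0 else request)
```

With this rule, limits = "0" alone no longer produces the invalid "500m versus a limit of 0" spec from the error above.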
