Description
What happened:
Logs are getting flooded because the driver retries failed operations endlessly; the same failing command is effectively spammed into the log.
What you expected to happen:
Logs should include a retry counter when errors occur during bucket creation, access, or grant.
Since the same error message is repeated day and night until the failure is fixed, the user should be able to control how many times the system retries.
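A minimal sketch of the requested behavior, assuming a hypothetical `--max-bucket-retries` flag and a placeholder `provisionBucket` call (neither exists in the sidecar today): each attempt is logged with a counter, and the loop gives up once the configured limit is reached.

```go
// Sketch only: --max-bucket-retries and provisionBucket are hypothetical.
package main

import (
	"errors"
	"flag"
	"fmt"
	"log"
	"time"
)

// provisionBucket stands in for the real bucket creation/access/grant call.
func provisionBucket() error {
	return errors.New("bucket backend unreachable")
}

func main() {
	maxRetries := flag.Int("max-bucket-retries", 10, "give up after this many failed attempts (0 = retry forever)")
	retryDelay := flag.Duration("bucket-retry-delay", 30*time.Second, "delay between attempts")
	flag.Parse()

	for attempt := 1; *maxRetries == 0 || attempt <= *maxRetries; attempt++ {
		if err := provisionBucket(); err != nil {
			// The attempt counter makes it obvious from the log that this is a
			// bounded retry rather than an endless stream of identical errors.
			log.Printf("bucket provisioning failed (attempt %d/%d): %v", attempt, *maxRetries, err)
			time.Sleep(*retryDelay)
			continue
		}
		log.Println("bucket provisioned")
		return
	}
	log.Fatal(fmt.Errorf("giving up after %d attempts", *maxRetries))
}
```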
How to reproduce this bug (as minimally and precisely as possible):
- Induce an error in any of the bucket workflows: creation, access, or grant.
- Look at the provisioner log; you will see the same error message repeated day and night as long as the failure is not fixed.
If the issue persists for a couple of days, it consumes all of the system's memory and disk space.
The issue is:
Logging for the COSI APIs is handled by the sidecar (https://github.com/kubernetes-sigs/container-object-storage-interface-provisioner-sidecar), which, when an error occurs, keeps retrying endlessly. The sidecar does not currently stop retrying after some time; it should expose a tunable retry counter for this.
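As a rough illustration of the proposed tunable counter, the sidecar could wrap its bucket calls in a bounded exponential backoff, for example with `wait.Backoff` from k8s.io/apimachinery. This is not the sidecar's current code; `createBucket` and the specific backoff values below are assumptions.

```go
// Sketch of a bounded retry with exponential backoff instead of retrying forever.
package main

import (
	"errors"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// createBucket is a placeholder for the real call to the driver.
func createBucket() error {
	return errors.New("backend unavailable")
}

func main() {
	backoff := wait.Backoff{
		Duration: 5 * time.Second, // initial delay
		Factor:   2.0,             // double the delay after each failure
		Jitter:   0.1,
		Steps:    8,               // hard cap on attempts instead of retrying forever
		Cap:      5 * time.Minute, // never wait longer than this between attempts
	}

	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if err := createBucket(); err != nil {
			log.Printf("bucket creation failed (attempt %d/%d): %v", attempt, backoff.Steps, err)
			return false, nil // not done yet, retry until Steps is exhausted
		}
		return true, nil
	})
	if err != nil {
		// After Steps attempts, surface one final error instead of looping forever.
		log.Printf("giving up on bucket creation: %v", err)
	}
}
```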