High memory usage in chart 1.4.4 #666
Comments
Greetings. This could be caused by any number of things; we have upgraded all dependencies to mitigate as many CVEs as possible. If you can narrow down the cause, that will be useful for the catalog, but at this point please understand that Helm Operator is not recommended for anyone: all feature development and testing is undertaken on Helm Controller. Memory use has been specifically targeted in recent versions of Helm Controller, to ensure that memory issues in the Helm CLI (which is not designed as a memory-resident package, but as a CLI whose resources can be cleaned up once an install finishes) do not "leak" during normal operation of Helm Controller. That kind of attention cannot be replicated on Helm Operator today due to limited resources on our open source project team; our attention is focused on the new development.
No problem @kingdonb, I'm aware that you are heavily working on Flux v2 and that the best approach is to upgrade to it. The idea was just to point out that this error could appear.
We can leave the issue open for visibility. I went through and closed all issues that were over a year old with no activity, but this one concerns a recent release of Helm Operator, so it is still relevant 👍 Thanks for reporting @ovitor
We will be archiving the Helm Operator repo very soon, as described by:

Please upgrade to Helm Controller and Flux v2, where a great deal of focus has gone into keeping memory usage as efficient as possible, among every other concern. Closing now, as there will be no more work on Helm Operator in this repo. For migration support, please consult the migration guide: https://fluxcd.io/flux/migration/flux-v1-migration/ or contact us in the Flux channel on CNCF Slack, where we still offer migration assistance and workshops sponsored by the Flux project members and their supportive team at Weaveworks.
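For readers landing on this issue later, the shape of that migration looks roughly like the sketch below. This is a hedged illustration, not taken from this thread: the chart name `podinfo`, the namespace `default`, and the version constraints are placeholders, and the API versions are simply the Flux v2 ones current at the time of writing. Follow the migration guide linked above for the authoritative steps.

```yaml
# Helm Operator (Flux v1) style HelmRelease, shown for contrast:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  chart:
    repository: https://stefanprodan.github.io/podinfo
    name: podinfo
    version: 6.0.0
  values:
    replicaCount: 1
---
# Equivalent Flux v2 (Helm Controller) resources: the chart source and the
# release become two objects, each reconciled on an interval.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      version: ">=6.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:
    replicaCount: 1
```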
Describe the bug
We were using a very old version of the fluxcd/helm-operator chart (v0.7.0). Recently we upgraded our Kubernetes cluster and this chart along with it, to version 1.4.4. After upgrading to v1.4.4 we noticed high memory utilization by its pods, causing endless evictions/terminations.
After downgrading from v1.4.4 to v1.4.3, the issue no longer occurs.
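As an aside for anyone hitting the same evictions while still on Helm Operator: pinning the chart back to 1.4.3 and bounding the operator's memory can keep the node healthy until a migration to Helm Controller happens. The snippet below is a rough sketch; `resources` is the conventional key exposed by Helm charts, but verify the key names against the chart's own values.yaml, and treat the request/limit numbers as placeholders to tune for your cluster.

```yaml
# values-override.yaml (illustrative values, not from this thread)
# Applied with, for example:
#   helm upgrade -i helm-operator fluxcd/helm-operator \
#     --namespace flux --version 1.4.3 -f values-override.yaml
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    memory: 512Mi  # a hard cap turns runaway growth into an OOMKill/restart rather than node-pressure evictions
```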
To Reproduce
Steps to reproduce the behavior:
Additional context