repo-cache-expiration is ignored for helm repositories #4002
One way to get hard refreshes is to periodically add the
@alexmt do you think it would be nice to add a
@darshanime using argocd.argoproj.io/refresh=hard with a helm repo does not work
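For reference, the hard-refresh annotation discussed above goes on the Application resource itself. A minimal sketch (the application name and namespace are illustrative):

```yaml
# Hypothetical Application manifest fragment; names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
  annotations:
    # Setting this triggers a one-off hard refresh; Argo CD clears the
    # annotation again once the refresh has been performed.
    argocd.argoproj.io/refresh: hard
```

The same effect can be had imperatively, e.g. `kubectl -n argocd annotate application my-app argocd.argoproj.io/refresh=hard --overwrite`.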
@darshanime Yes, that is one way to do it, but that defeats the purpose of CI and automated tools if you have to manually inject a k8s label. Sure, I could automate the injection, but that's a really hacky solution. This is a pretty fundamental interaction that Argo should support. Whether by hard refresh or some other option, right now ArgoCD simply doesn't support helm chart updates in place, since it never goes back out to pull the latest version of the chart.
What about syncing a chart that only changes the app version?
@arielhernandezmusa If I'm understanding you correctly, that wouldn't help me out. Nothing you can change on the helm chart would help, because Argo never re-pulls the chart itself, which is the problem at its core. Something has to tell Argo to go out and re-pull the helm chart from its source instead of using its internal cache. Right now the only way to do that is a manual hard refresh.
Okay, maybe what I'm suggesting could be an improvement.
This issue is also related to more cases:
It would be nice to have a way to cover those use cases without manual intervention (hard refresh). I think this PR would be helpful here: #4678
Related: manual syncs should probably always do a hard refresh. There's maybe a five-minute lag with the default application setup even if you manually go into the UI and hit Sync.
I think we need a way to selectively and periodically hard refresh applications that carry some sort of annotation.
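One way to approximate this today, as a sketch rather than an official feature (the schedule, image, service account, and label selector are all assumptions), is a CronJob that periodically re-applies the hard-refresh annotation to opted-in Applications:

```yaml
# Hypothetical CronJob; all names, the schedule, and the label are illustrative.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: periodic-hard-refresh
  namespace: argocd
spec:
  schedule: "0 * * * *"            # hourly; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          # Needs a ServiceAccount with RBAC permission to patch Applications
          serviceAccountName: hard-refresh
          restartPolicy: OnFailure
          containers:
          - name: refresh
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            # Annotate every Application that opted in via a (hypothetical) label
            - >-
              kubectl -n argocd annotate applications
              -l refresh-policy=hard
              argocd.argoproj.io/refresh=hard --overwrite
```

This keeps the hard refresh selective (only labelled apps) and periodic, at the cost of running an extra workload in the cluster.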
Please, please this, or something similar.
+1
+1 This is a pretty big issue for me. My development workflow involves re-releasing a Helm chart with the same version but containing updates to the templates within the chart. I am using the Application custom resource referencing a Helm chart from a Helm repository, not a git repository. When I build and publish an update to the Helm chart (e.g., adding a new resource), syncing the application has no effect: the original version of the chart is cached, so the updates are not picked up. I have tried setting … This is a complete blocker for me using Argo CD. The ability to hard sync a Helm application is key; or at least have Helm charts expire from the repo cache. Is there any plan for this bug? Otherwise, any advice would be greatly appreciated.
@irishandyb I am very interested in this problem being solved, and obviously everyone is free to use things as they prefer. IMHO, changing the templates of a chart without changing the chart's version number goes against the paradigm of versioning itself. Even if this caching problem gets solved, I doubt it solves your problem, because solving such anti-pattern cases doesn't make much sense for the community; it basically makes the cache itself useless.
Hi @pierluigilenoci, thank you very much for your message; I am grateful for the feedback. I completely understand the importance of correct versioning for releases. However, during the development stage I often make quick, small changes to the chart that I wish to test without the overhead of version or release management. For example, if I add a new Service to a chart and wish to test it, I don't want to bump the version in Chart.yaml and the referenced version in the Application resource. To quickly test a change it is easier to simply re-build and re-deploy (or sync). If I subsequently find a bug in the Service, such as pointing to the wrong port in the Pod, it's another quick change to the template, re-build, and sync. In short, during the development cycle I am avoiding the need to continuously update Chart.yaml and the Application resource. Is this incorrect? Should I look at automating the version bumping, or is there another common pattern?
@irishandyb I understand the particular need, but I find it a bit excessive. In any case, there are tools to automate versioning.
@irishandyb @pierluigilenoci You both bring up valid points, I have a similar dev cycle that re-uses tags, knowing that isn't the "best practice standard". All this aside though, I think we need to step back and look at what is actually being asked which would be beneficial in more than one situation, and that is to either: The re-use of tags is just one use-case but more generically, being able to re-check the helm chart repository for changes periodically is something that argocd should have the ability to do, like what it currently does with git repositories. On top of all of this, there's the point to be made that this is basically a bug, not a feature request, since argocd currently has a cache expiration parameter that straight-up doesn't work at all. Simply honoring this param would solve these issues. You could set the cache expiration to whatever you want, which, if working, would force argocd to go back to the helm chart repo source to check truth. |
@pierluigilenoci - thank you very much for your input. I will research further and consider changing my process. That said, I do agree with @De1taE1even, and it would be nice to see some options around caching of Helm charts.
@De1taE1even @irishandyb IMHO the discussion here is overlapping different themes. Regarding the versioning of the charts, invalidating the cache is not necessary, because Semantic Versioning supports pre-releases, so each test can have its own distinct tag (which also makes it easier to log changes). As for the cache, having the option to invalidate it with some form of trigger, or to disable it entirely, is certainly desirable. But it would also be enough just to fix the bug that makes it unusable. I think it might make sense to separate the feature request to improve cache management from this bug-fix request, to keep things clearer.
@pierluigilenoci Separating these topics seems fine to me. Honestly, correct me if I'm wrong, but if the cache invalidation worked properly and I set it to 24 hours, then the first sync check after the cache was invalidated would be forced to pull from source, and that'd be good enough for my use case.
@De1taE1even theoretically, when a cache is invalidated, the software should be forced to re-download the content from the original source. How to use this mechanism I'll leave to you. 😜
Another case where this issue occurs: all of my apps (Application CRs) use remote values (via URL) supplied to helm. Any change in those values is not picked up on Sync or Refresh; only a Hard Refresh helps. So when changes are made to those values, I am not aware of them until I manually click Hard Refresh.
@De1taE1even I see #8928 has been merged in
Do I understand correctly that the bug was not
LE: I checked the code and it seems it behaves like this:
I don't think noCache should be overloaded to control both the redis cache and the file system cache. @alexmt WDYT?
@alexef You are correct, my only concern is not being able to properly expire the helm chart cache. The new flag should solve this for me, and I like that it can be set to a different, less frequent interval than the normal refresh. Thank you for pointing it out; I had missed it. I haven't tested it yet, but assuming it does what it's supposed to do, this is a great solution to this ticket, from my perspective.
@alexef Well, I don't know why it isn't working, but that flag did nothing. I confirmed 100% that the flag is applied, and I set it to 3600 to hard refresh once per hour. It's been several hours, and no automatic hard refresh was performed. I manually performed a hard refresh and immediately saw the change, so this flag isn't working properly. I even cycled every single pod in the argocd installation, just to make sure all pods had the most up-to-date config. I also verified that I'm on a newer version than required for this flag to be supported.
@De1taE1even thank you for the updates.
@alexef Is there some documentation I missed about adding
I am hitting a very similar case with my helm chart. My chart was packaged with a wrong image tag, which I discovered once I deployed it with Argo. So I re-packaged the same chart with the right image and deployed it again, but Argo didn't pull it from the remote repo, so I had to use the hard refresh mechanism to see the new image.
Running into this again today. This time it was a bitnami helm chart I had submitted a pull request for: I pulled the helm chart and my fix was missing. After some communication the helm chart was updated (with the same version number) and it worked, until I redeployed. After that there is a caching issue, and clicking 'hard refresh' hasn't helped.

I'm also running into this in my development environment, where hard refresh works, but afterwards, when something restarts, it tries to load a different version than the latest and greatest; maybe it's because I'm using an OCI helm chart store. I wish I could just clear out the helm chart cache somehow.

I think this issue may not be getting the attention it needs because of philosophical differences. GitOps is awesome for developers, and it's perfectly reasonable to want to always pull the latest helm chart until there are scripts in place to update all the microservices' helm charts and versions, along with an overall helm chart with its own version, rather than using 0.0.x, which seems to get confused. Maybe a series of more developer-oriented options could be allowed, something like:
Or per applicationset like so:
Submitted a feature request #18000 |
I've asked about this in Slack several times over several days and no one has an answer. It doesn't help that I literally can't find a single example of this flag actually being used anywhere on the internet.
Describe the bug
I recently transitioned all my argocd apps to point at a private helm repository we host, instead of pointing the applications back at our gitlab instance. This is because we're getting customers that need a private installation of our product, not managed by us. This has worked great for the most part; I really like syncing to a helm repo versus gitlab.
However, changes to the helm chart aren't picked up by argo, because argo isn't re-pulling the helm chart from our chart repo. The only way I can get argo to re-pull is to issue a manual hard refresh, and I can't make that happen in any automated fashion. I've researched this extensively, and the only thing I could find is the repo-cache-expiration flag, which you can set in the repo server command to expire argo's redis cache and force it to re-pull the chart from source. I set this flag to 1m for testing. At first I didn't think I had set it correctly, but then I saw that it was working for the couple of apps I still had pointing at gitlab. For the helm-based applications, however, this flag seems to be completely ignored. Nothing I've been able to do will convince argo to re-pull from our helm chart repo.

To Reproduce
Expected behavior
With the repo-cache-expiration flag set to 1m, I'd expect a normal refresh of the app to re-pull the chart from source, but it doesn't.

Screenshots
Here's a snippet of my repo server deployment, for reference on how I'm setting the flag:
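The original screenshot did not survive, so as a stand-in, here is a sketch of what such a repo-server deployment fragment typically looks like when passing the repo-cache-expiration flag mentioned above (the image tag and everything outside the `command` list are illustrative):

```yaml
# Illustrative fragment of an argocd-repo-server Deployment;
# only the repo-cache-expiration flag is taken from the issue text.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
      - name: argocd-repo-server
        image: quay.io/argoproj/argocd:v2.1.0   # illustrative version
        command:
        - argocd-repo-server
        # Expire the repo server's cache entries after one minute,
        # as described by the reporter
        - --repo-cache-expiration
        - 1m
```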
Version