Helm 3 support #8
This is definitely something we should track (and work) on. I am currently unaware of exactly what changes Helm 3 brings us, except for having read the design proposal a couple of months ago and being aware of it being Tiller-less. I need to schedule some time to look into the alpha release and see how it fits into what we currently have (if it fits at all), so we can determine what the next steps would be. If someone has already looked into it and has ideas, feel free to comment.
I think the main implication is no Tiller, and that release information is stored in Kubernetes objects and not etcd. Note that in the current alpha …
Good session from this week's KubeCon about the changes: https://www.youtube.com/watch?v=lYzrhzLAxUI
Finally had a chance to look into the first alpha release and see how this would fit into the Helm operator. Consider the following a list of observations and questions that came to mind; they may be short-sighted, incomplete, prone to change, or even incorrect.
I quite like this alternative. I think the deciding factor between this and introducing an entirely new CRD will be whether we need (substantially) different fields. A good exercise might be to speculate on what a HelmRelease for a Helm v3 release would have to look like -- are there fields that become irrelevant? Are there new, mandatory fields?
This requires less user action and removes edge cases where the wrong client version is set for a chart that is not supported.
+1, this should be the v3 default, even if we still support v2.
Release names used to be cluster-global, but they are now fully namespaced in Helm v3. If we still support v2 …
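To make the exercise concrete, here is a speculative sketch of what a HelmRelease pinned to Helm v3 could look like, reusing the existing CRD plus the helm version field the thread later converges on (chart coordinates are placeholders, not a finalized spec):

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default   # releases are namespaced in Helm v3
spec:
  helmVersion: v3      # selects the Helm v3 client; omit for v2
  chart:
    repository: https://stefanprodan.github.io/podinfo
    name: podinfo
    version: 3.2.0
  values:
    replicaCount: 2
```

In this sketch no existing field becomes irrelevant; the main addition is the version selector itself.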
Thinking about UX in the end: …
@gsf I have been refactoring the …
Helm 3.0.0 stable has been released 🎉
Helm v3 support can be tested with builds of the helm-v3-dev branch:
Install the Helm v3 CLI and add it to your path.
Install Flux.
Install the HelmRelease CRD that contains the helm version field.
Install the Helm Operator with Helm v3 support using the latest build.
(A sketch of these commands follows below.)
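A hedged sketch of those commands; the exact URLs, chart names, and tags are assumptions from the era of this thread, so check the repository for the current ones:

```sh
# 1. Install the Helm v3 CLI and add it to your path
#    (release tarballs unpack to linux-amd64/helm).
curl -sSL https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz | tar xz
sudo mv linux-amd64/helm /usr/local/bin/helmv3

# 2. Install Flux from the fluxcd chart repository.
helmv3 repo add fluxcd https://charts.fluxcd.io
kubectl create namespace flux
helmv3 upgrade -i flux fluxcd/flux --namespace flux \
  --set git.url=git@github.com:<org>/<repo>

# 3. Install the HelmRelease CRD that contains the helm version field
#    (manifest path on the helm-v3-dev branch assumed).
kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/helm-v3-dev/deploy/flux-helm-release-crd.yaml

# 4. Install the Helm Operator from a pre-release build.
helmv3 upgrade -i helm-operator fluxcd/helm-operator --namespace flux \
  --set image.repository=fluxcd/helm-operator-prerelease \
  --set image.tag=helm-v3-dev-<commit-sha>
```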
Keep an eye on https://hub.docker.com/repository/docker/fluxcd/helm-operator-prerelease/tags?page=1&ordering=last_updated for new builds.
I've taken a look at the latest image (…). The logs show that the operator thinks there is a difference between the applied and expected chart due to a type mismatch for the value of … The important part of the log being: …
This doesn't impact any of the running pods, but it does clog up the release history, meaning rollbacks would be impossible.
This has happened on a number of charts where I've used integers in the values.
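For illustration only (not the reporter's actual chart), the kind of value in question would be a plain integer in the HelmRelease spec:

```yaml
# Hypothetical excerpt: the in-memory dry-run keeps replicaCount as a
# number, while the copy read back from release storage comes out as
# the string "2", so the operator keeps seeing a diff and upgrading.
spec:
  values:
    replicaCount: 2
```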
I am taking a look at helm-v3-dev-7589ee47, and the Helm operator creates release secrets without the prefix that Helm 3 itself uses: Helm 3 names them sh.helm.release.v1.<release>.v<revision>.
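For reference, a way to see the naming difference for yourself (assuming the standard Helm 3 storage secret type):

```sh
# List Helm 3 release secrets in a namespace; Helm 3 names them
# sh.helm.release.v1.<release>.v<revision> and types them
# helm.sh/release.v1.
kubectl get secrets --namespace default \
  --field-selector type=helm.sh/release.v1
```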
@rowecharles thanks for reporting this. The reason this seems to happen is that the dry-run values generated by Helm (internally) are returned directly from memory and take a shorter route than the values returned from storage, bypassing a parser that casts the float values to strings. I was able to get rid of the spurious 'upgrades' by always casting the values to a YAML string and then re-reading this string into a map.

@eschereisin this is because the Helm operator is still running Helm v2 for those releases; v2 is the default unless helmVersion: v3 is set on the HelmRelease.
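A minimal Go sketch of the normalization described above, assuming the values arrive as a map[string]interface{} (function and package layout are illustrative, not the operator's actual code):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// normalizeValues round-trips the composed values through YAML so that
// the in-memory (dry-run) values and the values read back from storage
// have been through the same parser, making their types comparable.
func normalizeValues(values map[string]interface{}) (map[string]interface{}, error) {
	b, err := yaml.Marshal(values)
	if err != nil {
		return nil, err
	}
	var normalized map[string]interface{}
	if err := yaml.Unmarshal(b, &normalized); err != nil {
		return nil, err
	}
	return normalized, nil
}

func main() {
	v, _ := normalizeValues(map[string]interface{}{"replicaCount": 2})
	fmt.Printf("%#v\n", v) // both sides now yield the same representation
}
```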
Is there an estimate for when the Helm operator will support Helm v3 production-ready? Can I offer help with testing/evaluation?
Pre-release builds are published on every commit to the helm-v3-dev branch.
The Helm Operator using Helm v3 (stable) can now be tested. The steps are the same as before (see the command sketch earlier in the thread, now with the stable CLI):
Install the Helm v3 CLI and add it to your path.
Install Flux.
Install the HelmRelease CRD that contains the helm version field.
Install the Helm Operator with Helm v3 support using the latest build.
We've created a GitHub issue template for Helm v3. Please take this for a spin and create issues if you find any problems with it. Thanks!
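As a quick sanity check after installing (resource and namespace names assumed from the default setup above):

```sh
# Hypothetical verification: follow the operator logs and confirm the
# HelmRelease resources are being reconciled.
kubectl -n flux logs deployment/helm-operator -f
kubectl get helmreleases --all-namespaces
```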
Hey! Here is what I get after a fresh install of sealed-secrets-controller: …
I also get validation errors from the cert-manager release, which is also managed by the Helm operator right now: …
Do you have any ideas how to address that? Thank you very much!
@dragonsmith Are you able to install these charts with the Helm v3 CLI? The way CRDs are handled has changed: https://helm.sh/docs/topics/chart_best_practices/custom_resource_definitions/
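For context: per the linked best-practices page, Helm v3 moves CRDs into a dedicated crds/ directory that is installed before the templates and is never upgraded or deleted. A minimal chart layout for reference:

```text
mychart/
  Chart.yaml
  crds/                  # applied first; skipped on upgrade and delete
    sealedsecret-crd.yaml
  templates/
    deployment.yaml
```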
@gsf the interesting thing is that these charts are installed successfully either way (via the Helm 3 CLI or using the Helm operator); it's just that the Helm operator gives such warnings. Also, look closer: the second warning is about …
I'm aware that there are changes to CRD handling, but I thought those only step in if we use the chart API v2, because we actually need backwards compatibility with current charts. I did not dive into the new code, though.
@dragonsmith That ValidationError looks like this issue: pegasystems/pega-helm-charts#34 (comment).
Glad to see this merged into master, can't wait for the next release 👍
Also confirming that we're looking good, really appreciate the quick fixes! :)
The latest RC seems to just install the prometheus operator during the first run and leave it untouched afterwards. But I still get this error message: …
Can we just ignore this message?
I have also found this problem in the Flux Helm operator: …
I do not understand this error; my config looks just fine and used to work with Helm 2: …
I do not see the keyword "enabled" used anywhere in this Helm release.
@runningman84 That "enabled" field in the PodDisruptionBudget was a Helm v3 incompatibility in prometheus-blackbox-exporter prior to version 2.0.0. You may need to set the content of the PodDisruptionBudget as described in the README: https://github.com/helm/charts/tree/master/stable/prometheus-blackbox-exporter#200
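A hedged sketch of that values change (field names taken from my reading of the linked README; verify against the chart version you run):

```yaml
# Before (pre-2.0.0): a boolean toggle, which under Helm v3's stricter
# schema validation leaks into the PodDisruptionBudget spec as an
# unknown "enabled" field.
# podDisruptionBudget:
#   enabled: true

# After (>= 2.0.0): the block is the PDB spec content itself.
podDisruptionBudget:
  maxUnavailable: 0
```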
@runningman84 I can also confirm that the …
I have also run into this, and it's rather annoying: about every 5 minutes it does a revision.
@smark88 you are running an old version, …
Not sure if this has anything to do with 1.0.0-rc7, but I was running a build of helm-v3-dev for a few weeks; I have changed to 1.0.0-rc7 and now most of the Helm releases have a status of "failed to compose values for chart release" even though they are installed fine. For example: …
I also have two releases with the status "failed to compose values", even though they were installed using the rc7 version.
@runningman84 @REBELinBLUE can you both share a copy of one of the HelmRelease resources that has this status?
I can share one of them, which is using a public chart: …
This is the kubectl output: …
@runningman84 this works without any issues for me, can you share the …
This is the output: …
I redeployed last night and the …
@hiddeco https://github.com/REBELinBLUE/k8s-on-hypriot/blob/master/deployments/velero/velero/velero.yaml, but most of the charts in the …
@REBELinBLUE @runningman84 does one of you still have access to logs from around the time the status message was pushed for the release?
I don't think so, but I will check. Shouldn't the Helm operator try again on each run, so the error would either go away or come back?
Sadly not, I'm afraid; as it's just a test server I don't have the logs stored anywhere, and I've killed the pod to see if that fixes it. All I get now in the logs is things like …
@runningman84 the message is only updated if an actual release happens because some mutation was made (either to the chart or to the HelmRelease resource).
But that is the point: they were fine, nothing had been changed, but the status has for some reason changed to that. That said, I will update the values now so an update actually runs, and see what happens.
Well, I made this change REBELinBLUE/k3s-rpi-cluster@eecdb49#diff-b5b0500f31d74e5d8f95df6d7237661f and when it ran the status changed to "Helm release sync succeeded", so I guess that is a fix.
Closing this, as most bugs have been fixed by now; in case you encounter a new one, do not hesitate to open a dedicated issue. On a last note, I want to thank everyone who commented on this issue for their dedication and the quality of the reported issues; it made the bug fixing as fun as it can get. 🌷
With the release of Helm 3, I would like to track progress on this integration.