Upgrading k3s leaves orphaned replicasets? #5056
Yeah, we could probably tweak the canned Deployment manifests to set
It's purely cosmetic, though, right? Other than the (minor) storage required in the DB? Can I just go ahead and delete them?
A cleaner way to do it might be to patch the deployments to set the revisionHistoryLimit to 0, instead of the default 2. This will cause the deployment controller to clean up the old revisions.
@brandond what's the status of this?
This is purely a cosmetic thing; I'll take a look at addressing it in the next patch cycle though.
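The patch described above can be sketched as a one-off kubectl command. This is a hypothetical example, not the actual fix shipped in k3s: it assumes the affected deployments live in the `kube-system` namespace and uses placeholder deployment names.

```shell
# Sketch (assumes kube-system namespace and example deployment names):
# set revisionHistoryLimit to 0 so the deployment controller prunes
# old ReplicaSets instead of keeping the default history.
for deploy in coredns metrics-server local-path-provisioner; do
  kubectl -n kube-system patch deployment "$deploy" \
    --type merge -p '{"spec":{"revisionHistoryLimit":0}}'
done
```

A merge patch is enough here because `revisionHistoryLimit` is a single scalar field under `.spec`; no strategic-merge semantics are needed.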
This change was just for the packages that are distributed as flat manifests (coredns, metrics-server, and local-storage), but not for bundled components like traefik. Due to this, the result looks like this:
So it will be changed in the next release.
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
1 server; 4 agents. All RPi 4 (8GB). All running Ubuntu 21.10.
Describe the bug:
I've got some orphaned replicasets:
Steps To Reproduce:
Not entirely sure, but the 42d would correspond to when I first installed k3s on this cluster, and the 28d would correspond to when I upgraded it.
Installation: https://blog.differentpla.net/blog/2021/12/20/k3s-ubuntu-reinstall/
Upgrade: https://blog.differentpla.net/blog/2022/01/03/upgrading-k3s/
Expected behavior:
I'm assuming that these replicasets should have been deleted.
Actual behavior:
They weren't.
Additional context / logs:
Is it safe to delete them myself?
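Since the orphaned ReplicaSets are scaled to zero and own no pods, deleting them by hand is generally safe. A hedged sketch of finding and removing them, assuming they sit in the `kube-system` namespace:

```shell
# Sketch (assumes kube-system namespace): list ReplicaSets whose desired
# replica count is zero, then delete each one. Review the list before
# running the delete loop on a real cluster.
kubectl -n kube-system get replicasets \
  -o jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.name}{"\n"}{end}' |
while read -r rs; do
  kubectl -n kube-system delete replicaset "$rs"
done
```

Note that the deployment controller may recreate history up to `revisionHistoryLimit` on future rollouts, so the `revisionHistoryLimit` patch discussed in the comments is the durable fix; manual deletion only clears the existing leftovers.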
Backporting