
Upgrading k3s leaves orphaned replicasets? #5056

Open
1 task
rlipscombe opened this issue Jan 31, 2022 · 6 comments
Labels: kind/enhancement (An improvement to existing functionality), status/2023 confirmed

@rlipscombe commented Jan 31, 2022

Environmental Info:
K3s Version:

k3s version v1.22.5+k3s1 (405bf79d)
go version go1.16.10

Node(s) CPU architecture, OS, and Version:

Linux rpi401 5.13.0-1015-raspi #17-Ubuntu SMP PREEMPT Thu Jan 13 01:27:28 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:

1 server; 4 agents. All RPi 4 (8GB). All running Ubuntu 21.10.

Describe the bug:

I've got some orphaned replicasets:

$ kubectl --namespace kube-system get rs
NAME                                DESIRED   CURRENT   READY   AGE
coredns-7448499f4d                  0         0         0       42d
local-path-provisioner-5ff76fc89d   0         0         0       42d
metrics-server-86cbb8457f           0         0         0       42d
traefik-6b84f7cbc                   0         0         0       42d
coredns-85cb69466                   1         1         1       28d
local-path-provisioner-64ffb68fd    1         1         1       28d
metrics-server-9cf544f65            1         1         1       28d
traefik-786ff64748                  1         1         1       28d

Steps To Reproduce:

Not entirely sure, but the 42d would correspond to when I first installed k3s on this cluster, and the 28d would correspond to when I upgraded it.

Installation: https://blog.differentpla.net/blog/2021/12/20/k3s-ubuntu-reinstall/
Upgrade: https://blog.differentpla.net/blog/2022/01/03/upgrading-k3s/

Expected behavior:

I'm assuming that these replicasets should have been deleted.

Actual behavior:

They weren't.

Additional context / logs:

Is it safe to delete them myself?

Backporting

  • Needs backporting to older releases
@brandond (Member) commented Feb 2, 2022

Yeah, we could probably tweak the canned Deployment manifests to set revisionHistoryLimit: 0 to avoid retaining the old replicaset.
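For context, in a Deployment manifest that change would look roughly like this (a sketch only, using coredns as an example, not the actual packaged manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
spec:
  revisionHistoryLimit: 0   # keep no scaled-down ReplicaSets around after a rollout
  # selector, template, etc. unchanged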

@brandond brandond added this to the v1.24.0+k3s1 milestone Feb 2, 2022
@brandond brandond added the kind/enhancement An improvement to existing functionality label Feb 2, 2022
@rlipscombe (Author)

It's purely cosmetic, though, right? Other than the (minor) storage required in the DB? Can I just go ahead and delete them?
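i.e. something along the lines of (using the names from the listing above):

$ kubectl --namespace kube-system delete rs \
    coredns-7448499f4d \
    local-path-provisioner-5ff76fc89d \
    metrics-server-86cbb8457f \
    traefik-6b84f7cbc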

@brandond (Member) commented Feb 3, 2022

A cleaner way to do it might be to patch the deployments to set the revisionHistoryLimit to 0, instead of the default 2. This will cause the deployment controller to clean up the old revisions.
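A sketch of what that patch could look like, assuming the stock component names in kube-system:

$ kubectl --namespace kube-system patch deployment coredns \
    --patch '{"spec":{"revisionHistoryLimit":0}}'

(and likewise for local-path-provisioner, metrics-server, and traefik)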

@katran001 katran001 modified the milestones: v1.24.0+k3s1, v1.24.3+k3s1 Jun 28, 2022
@katran001 katran001 modified the milestones: v1.24.3+k3s1, v1.24.4+k3s1 Aug 1, 2022
@cwayne18 (Member)

@brandond what's the status of this?

@brandond (Member)

This is purely a cosmetic thing; I'll take a look at addressing it in the next patch cycle though.

@bguzman-3pillar

This change was only applied to the packages that are distributed as flat manifests (coredns, metrics-server, and local-storage), not to bundled components like traefik. As a result, the output looks like this:

$ kubectl get rs -A 
NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   coredns-597584b69b                  1         1         1       43m
kube-system   local-path-provisioner-79f67d76f8   1         1         1       43m
kube-system   metrics-server-5f9f776df5           1         1         1       39m
kube-system   traefik-66c46d954f                  1         1         1       38m
kube-system   traefik-bb69b68cd                   0         0         0       43m

So it will be changed in the next release.
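For anyone checking their own cluster, the limit on each packaged deployment can be inspected with something like:

$ kubectl --namespace kube-system get deploy \
    coredns local-path-provisioner metrics-server traefik \
    -o custom-columns=NAME:.metadata.name,HISTORY:.spec.revisionHistoryLimit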
