fix(uninstall): remove kubearmor annotations from kubernetes resources #440
base: main
Conversation
@Prateeknandle let's add a warning as well that policies and annotations will be removed when running karmor uninstall.
LGTM
KubeArmor v1.4.0 stable, BPF LSM node: pods belonging to deployments are also getting restarted with --force, and the user is presented with this warning. IMO pods which don't have the AppArmor annotation should not be restarted?
$ ./karmor uninstall
ℹ️ Resources not managed by helm/Global Resources are not cleaned up. Please use karmor uninstall --force if you want complete cleanup.
ℹ️ Following pods will get restarted with karmor uninstall --force:
+-----+-----------------------------------------+-------------+
| NO | POD NAME | NAMESPACE |
+-----+-----------------------------------------+-------------+
| 1 | nginx-bf5d5cf98-99lcw | default |
| 2 | coredns-576bfc4dc7-55xmp | kube-system |
| 3 | local-path-provisioner-6795b5f9d8-7vt46 | kube-system |
| 4 | metrics-server-557ff575fb-r67vv | kube-system |
+-----+-----------------------------------------+-------------+
❌ KubeArmor resources removed
🔄 Checking if KubeArmor pods are stopped...
🔴 Done Checking; all services are stopped!
⌚️ Termination Time: 4.329732048s
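The restart decision the comment asks for could be narrowed along these lines. A minimal sketch, assuming (per the comment) that only pods carrying AppArmor profile annotations actually need a recreate, while kubearmor-* annotations on a BPF LSM node do not; needsRestart is a hypothetical helper, not the actual karmor code:

```go
package main

import (
	"fmt"
	"strings"
)

// needsRestart reports whether a pod carries annotations that can only be
// removed by recreating the pod. Hypothetical criterion: AppArmor profile
// annotations are immutable on a running pod, so their presence forces a
// restart; kubearmor-* annotations alone (BPF LSM enforcement) do not.
func needsRestart(annotations map[string]string) bool {
	for key := range annotations {
		if strings.HasPrefix(key, "container.apparmor.security.beta.kubernetes.io/") {
			return true
		}
	}
	return false
}

func main() {
	bpfLSMPod := map[string]string{
		"kubearmor-policy":     "enabled",
		"kubearmor-visibility": "process,file,network,capabilities",
	}
	apparmorPod := map[string]string{
		"kubearmor-policy": "enabled",
		"container.apparmor.security.beta.kubernetes.io/nginx": "localhost/kubearmor-default-nginx",
	}
	fmt.Println(needsRestart(bpfLSMPod))   // false: no AppArmor profile to undo
	fmt.Println(needsRestart(apparmorPod)) // true: profile removal needs a recreate
}
```

With a check like this, the four pods listed above would be skipped on a BPF LSM node instead of being restarted.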
kubectl describe pod after uninstall
Name: nginx-bf5d5cf98-99lcw
Namespace: default
Priority: 0
Service Account: default
Node: kubearmor-dev-next/10.0.2.15
Start Time: Tue, 06 Aug 2024 11:44:33 +0000
Labels: app=nginx
pod-template-hash=bf5d5cf98
Annotations: kubearmor-policy: enabled
kubearmor-visibility: process,file,network,capabilities
Status: Running
IP: 10.42.0.57
IPs:
IP: 10.42.0.57
Controlled By: ReplicaSet/nginx-bf5d5cf98
Containers:
nginx:
Container ID: docker://4ac64f9c6e035a814d1ff752745c36a479f7cedf4a187df81018e56bbb7ad439
Image: nginx
Image ID: docker-pullable://nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 06 Aug 2024 11:44:38 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2ph48 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-2ph48:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
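The kubearmor-policy and kubearmor-visibility annotations still visible in the describe output above are what the uninstall is expected to clean up. A minimal sketch of the key filtering, assuming the managed keys are the kubearmor-* pair plus any container.apparmor.security.beta.kubernetes.io/* entries; stripKubeArmorAnnotations is a hypothetical helper, while the real fix patches the workloads through the Kubernetes API rather than editing a map in memory:

```go
package main

import (
	"fmt"
	"strings"
)

// stripKubeArmorAnnotations returns a copy of an annotation map with
// KubeArmor-managed keys removed, leaving all other annotations intact.
func stripKubeArmorAnnotations(in map[string]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		if k == "kubearmor-policy" || k == "kubearmor-visibility" ||
			strings.HasPrefix(k, "container.apparmor.security.beta.kubernetes.io/") {
			continue // drop KubeArmor-managed annotation
		}
		out[k] = v
	}
	return out
}

func main() {
	anns := map[string]string{
		"kubearmor-policy":       "enabled",
		"kubearmor-visibility":   "process,file,network,capabilities",
		"app.kubernetes.io/name": "nginx",
	}
	// Only the unrelated app.kubernetes.io/name annotation survives.
	fmt.Println(stripKubeArmorAnnotations(anns))
}
```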
Even with --force, some resources are left, just like the current behavior.
cc @daemon1024
$ kubectl api-resources | grep kubearmor
kubearmorconfigs operator.kubearmor.com/v1 true KubeArmorConfig
kubearmorclusterpolicies csp security.kubearmor.com/v1 false KubeArmorClusterPolicy
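A force cleanup could detect leftovers like these by their API group. A minimal sketch, assuming all KubeArmor CRDs live under groups ending in kubearmor.com (as both leftovers above do); isKubeArmorCRD is a hypothetical helper, not the actual uninstall code:

```go
package main

import (
	"fmt"
	"strings"
)

// isKubeArmorCRD reports whether an API group belongs to KubeArmor and its
// CRDs should therefore be deleted on a --force uninstall.
func isKubeArmorCRD(group string) bool {
	return strings.HasSuffix(group, "kubearmor.com")
}

func main() {
	for _, g := range []string{
		"operator.kubearmor.com", // kubearmorconfigs
		"security.kubearmor.com", // kubearmorclusterpolicies
		"apps",                   // unrelated group, must be kept
	} {
		fmt.Println(g, isKubeArmorCRD(g))
	}
}
```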
Run legacy uninstall after uninstall regardless of installation type.
CRDs are being cleaned up successfully. Do we plan to keep clusterroles and clusterrolebindings even after a force uninstall?
fixes: #434

Added a warning in karmor uninstall listing the pods that will be restarted when the --force flag is used.

karmor uninstall output: