The KubeArmor DaemonSet pod restarts due to a goroutine error. #2004

@thungrac

Description


Bug Report

General Information

  • Environment description (GKE, VM-Kubeadm, vagrant-dev-env, minikube, microk8s, ...)

KubeArmorOperator on an on-premises Kubernetes cluster

# karmor probe
Found KubeArmor running in Kubernetes

Daemonset :
 	kubearmor 	Desired: 21	Ready: 21	Available: 21	
Deployments : 
 	kubearmor-operator  	Desired: 1	Ready: 1	Available: 1	
 	kubearmor-relay     	Desired: 1	Ready: 1	Available: 1	
 	kubearmor-controller	Desired: 1	Ready: 1	Available: 1	
Containers : 
 	kubearmor-apparmor-containerd-98c2c-cnr7b	Running: 1	Image Version: kubearmor/kubearmor:v1.5.3   
 	kubearmor-controller-6b58f65dcd-2f4zf    	Running: 1	Image Version: kubearmor/kubearmor-controller:v1.5.3  
 	kubearmor-operator-7799dd6fbb-8cg9v      	Running: 1	Image Version: kubearmor/kubearmor-operator:v1.5.3    	
 	kubearmor-relay-5bf689dcd8-8qknm         	Running: 1	Image Version: kubearmor/kubearmor-relay-server:v1.4.6	
  • Kernel version (run uname -a)
Linux k8s-ai3 5.4.0-200-generic #220-Ubuntu SMP Fri Sep 27 13:19:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Orchestration system version in use (e.g. kubectl version, ...)

k8s version v1.24.17 / containerd://1.7.22

Node 1 : 
 	OS Image:                 	Ubuntu 20.04.1 LTS 	
 	Kernel Version:           	5.4.0-200-generic  	
 	Kubelet Version:          	v1.24.17           	
 	Container Runtime:        	containerd://1.7.22	
 	Active LSM:               	AppArmor           	
 	Host Security:            	false              	
 	Container Security:       	true               	
 	Container Default Posture:	audit(File)        	audit(Capabilities)	audit(Network)	
 	Host Default Posture:     	audit(File)        	audit(Capabilities)	audit(Network)	
 	Host Visibility:          	none     
  • Link to relevant artifacts (policies, deployments scripts, ...)

Installed via the KubeArmorOperator v1.5.3 Helm chart: https://github.com/kubearmor/KubeArmor/tree/v1.5.3/deployments/helm/KubeArmorOperator

  • Target containers/pods

KubeArmor

To Reproduce

The KubeArmor DaemonSet pods keep restarting at unpredictable intervals. The crash log is:

fatal error: concurrent map read and map write

goroutine 13198930 [running]:
github.com/kubearmor/KubeArmor/KubeArmor/feeder.(*Feeder).ShouldDropAlertsPerContainer(0xc00048fd90, 0xf0000ad6, 0xf0000ad5)
	/usr/src/KubeArmor/KubeArmor/feeder/feeder.go:797 +0x9b
github.com/kubearmor/KubeArmor/KubeArmor/feeder.(*Feeder).PushLog(_, {0x67d30d31, {0xc00709d1e0, 0x1b}, {0x0, 0x0}, {0xc00038b2a0, 0x7}, {0xc0055c67c0, 0x8}, ...})
	/usr/src/KubeArmor/KubeArmor/feeder/feeder.go:591 +0x849
created by github.com/kubearmor/KubeArmor/KubeArmor/monitor.(*SystemMonitor).UpdateLogs in goroutine 114
	/usr/src/KubeArmor/KubeArmor/monitor/logUpdate.go:569 +0x20ed

goroutine 1 [chan receive, 120 minutes]:
github.com/kubearmor/KubeArmor/KubeArmor/core.KubeArmor()
	/usr/src/KubeArmor/KubeArmor/core/kubeArmor.go:895 +0x29fa
main.main()
	/usr/src/KubeArmor/KubeArmor/main.go:79 +0x3ed

goroutine 77 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00048fba8, 0x19)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00782a4c0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc00048fb80, 0xc00005a4f0)
	/go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/delta_fifo.go:588 +0x231
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc00048fc30)
	/go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/controller.go:195 +0x30
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004dce40, {0x2870160, 0xc0004d4b40}, 0x1, 0xc000118150)
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004dce40, 0x3b9aca00, 0x0, 0x1, 0xc000118150)
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
k8s.io/client-go/tools/cache.(*controller).Run(0xc00048fc30, 0xc000118150)
	/go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/controller.go:166 +0x375
k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc00048fad0, 0xc000118150)
	/go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/shared_informer.go:508 +0x2a9
k8s.io/client-go/informers.(*sharedInformerFactory).Start.func1()
	/go/pkg/mod/k8s.io/client-go@v0.32.1/informers/factory.go:160 +0x56
created by k8s.io/client-go/informers.(*sharedInformerFactory).Start in goroutine 76
	/go/pkg/mod/k8s.io/client-go@v0.32.1/informers/factory.go:158 +0x205

goroutine 82 [chan receive, 2 minutes]:
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/pkg/mod/k8s.io/client-go@v0.32.1/tools/cache/shared_informer.go:973 +0x45
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004d8f70, {0x2870160, 0xc00022a4e0}, 0x1, 0xc00052a000)
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:55 +0x1b
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:72 +0x4c
	/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:70 +0x73
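For context on the failure mode: Go's runtime aborts the whole process when a plain map is read in one goroutine while another goroutine writes to it, which matches the ShouldDropAlertsPerContainer/PushLog frames above. Below is a minimal sketch of the safe pattern, guarding a per-container counter map with a sync.RWMutex; the type and method names here are illustrative, not KubeArmor's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// AlertCounter demonstrates the crash pattern and its fix: a map that is
// read by many goroutines while others write to it must be guarded on
// BOTH paths, otherwise the runtime raises
// "fatal error: concurrent map read and map write".
// (Hypothetical names, for illustration only.)
type AlertCounter struct {
	mu     sync.RWMutex
	counts map[uint32]int // keyed by e.g. a PID-namespace ID
}

func NewAlertCounter() *AlertCounter {
	return &AlertCounter{counts: make(map[uint32]int)}
}

// ShouldDrop reads the map under the read lock.
func (a *AlertCounter) ShouldDrop(pidNS uint32, limit int) bool {
	a.mu.RLock()
	defer a.mu.RUnlock()
	return a.counts[pidNS] >= limit
}

// Record writes to the map under the write lock.
func (a *AlertCounter) Record(pidNS uint32) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.counts[pidNS]++
}

func main() {
	c := NewAlertCounter()
	var wg sync.WaitGroup
	// Without the mutexes above, this mix of concurrent reads and
	// writes would eventually trigger the fatal error (and is always
	// flagged by `go run -race`).
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Record(42)
			_ = c.ShouldDrop(42, 10)
		}()
	}
	wg.Wait()
	fmt.Println(c.ShouldDrop(42, 10)) // prints "true": 100 events >= limit 10
}
```

Running the unguarded equivalent under `go run -race` reports the data race deterministically, which may help confirm where the offending read and write live in the feeder.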

Expected behavior

KubeArmor should run without restarting; access to the feeder's map should be synchronized so this fatal error cannot occur. Thank you very much.
