Description
The kube-solo project is very interesting, and its goals closely align with our current work: deploying lightweight Kubernetes clusters (e.g., k3s/k8s) at the edge. While kube-solo targets single-node deployment and our use case is multi-node, it remains highly relevant to our efforts.
We’ve achieved approximately 240MB of memory usage per edge node through deep customization of k3s, and our goal is to stabilize it below 200MB. According to the official documentation, kube-solo maintains memory consumption within the range of 180–220MB, which makes it a valuable reference for us.
However, during my initial deployment, I observed kube-solo consuming as much as 480MB of memory. After reviewing related issues, I learned this might be tied to the total amount of host memory available: on systems with abundant RAM, the runtime tends to allocate resources more proactively.
To test this hypothesis, I limited the VM's memory to 512MB, and the result was striking: kube-solo's memory usage dropped to around 150MB, even lower than the documented minimum.
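As a side note, the same cap can be approximated without resizing the whole VM by limiting a single child process. Below is a minimal Linux sketch; the `run_capped` helper and the server path are my own illustration, not part of kube-solo. One caveat worth stating up front: `RLIMIT_AS` limits *virtual* address space rather than physical RAM, so for Go binaries (which reserve large virtual arenas) a cgroup `MemoryMax` limit is usually the closer analogue to the 512MB-VM experiment.

```python
import resource
import subprocess

LIMIT_BYTES = 512 * 1024 * 1024  # 512 MiB, mirroring the VM experiment

def run_capped(cmd):
    """Start cmd with its address space capped (Linux/POSIX only).

    Caveat: RLIMIT_AS caps virtual memory, not resident memory, so a
    Go binary may fail under it even when its RSS would fit; a cgroup
    MemoryMax limit is the closer analogue to shrinking the VM's RAM.
    """
    def cap():
        resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))
    return subprocess.Popen(cmd, preexec_fn=cap)

# Hypothetical usage; the binary path is illustrative only:
# proc = run_capped(["/usr/local/bin/kube-solo-server"])
```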
The attached graph shows the memory-usage trend, monitored continuously over a 30-minute period. For the entire test the system was essentially idle apart from kube-solo: no kubectl operations and no external workloads were applied.
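For reproducibility, a trace like the one above can be collected on Linux by sampling `VmRSS` from procfs at a fixed interval. This is a minimal sketch; the helper names and the 5-second interval are my own choices, not from kube-solo's tooling:

```python
import time

def rss_kib(pid):
    """Return the resident set size of `pid` in KiB, read from /proc (Linux)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # kernel reports the value in kB
    return None

def monitor(pid, interval_s=5, duration_s=1800):
    """Print a timestamped RSS sample every `interval_s` seconds."""
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        print(time.strftime("%H:%M:%S"), rss_kib(pid), "KiB")
        time.sleep(interval_s)
```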
Questions
- Why is the memory fluctuation so severe?
- Is this level of oscillation considered normal under idle conditions?
I’d appreciate any insights into the observed behavior. Thank you!