[metricbeat/kubernetes] Remove fallback to the Node's limit for the pod 'usage.limit.pct' metrics #40029
Changes from 2 commits. Commits in this PR: 220ae6d, 873aebc, dcc356d, 59388cf, f32ad29, 8da750c, 3916619
```diff
@@ -58,6 +58,9 @@ func eventMapping(content []byte, metricsRepo *util.MetricsRepo, logger *logp.Lo
 	podId := util.NewPodId(pod.PodRef.Namespace, pod.PodRef.Name)
 	podStore := nodeStore.GetPodStore(podId)

+	allContainersCPULimitsDefined := true
+	allContainersMemoryLimitsDefined := true
+
 	for _, container := range pod.Containers {
 		usageNanoCores += container.CPU.UsageNanoCores
 		usageMem += container.Memory.UsageBytes
```
```diff
@@ -70,17 +73,22 @@ func eventMapping(content []byte, metricsRepo *util.MetricsRepo, logger *logp.Lo
 		containerStore := podStore.GetContainerStore(container.Name)
 		containerMetrics := containerStore.GetContainerMetrics()

-		containerCoresLimit := nodeCores
-		if containerMetrics.CoresLimit != nil {
-			containerCoresLimit = containerMetrics.CoresLimit.Value
+		// podCoreLimit and podMemLimit are defined only if all of Pod containers have a limit defined, otherwise the limit will be set to 0
+		if allContainersCPULimitsDefined && containerMetrics.CoresLimit == nil {
+			allContainersCPULimitsDefined = false
+			podCoreLimit = 0.0
+		}
+		if allContainersCPULimitsDefined {
+			podCoreLimit += containerMetrics.CoresLimit.Value
 		}

-		containerMemLimit := nodeMem
-		if containerMetrics.MemoryLimit != nil {
-			containerMemLimit = containerMetrics.MemoryLimit.Value
+		if allContainersMemoryLimitsDefined && containerMetrics.MemoryLimit == nil {
+			allContainersMemoryLimitsDefined = false
+			podMemLimit = 0.0
+		}
+		if allContainersMemoryLimitsDefined {
+			podMemLimit += containerMetrics.MemoryLimit.Value
 		}
-		podCoreLimit += containerCoresLimit
-		podMemLimit += containerMemLimit
 	}

 	podEvent := mapstr.M{
```
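The added branches implement a single aggregation rule: the pod-level limit is the sum of the per-container limits, but only when every container in the pod defines one; as soon as one container has no limit, the pod-level limit collapses to 0 (no more fallback to the node's capacity). A minimal standalone sketch of that rule, using hypothetical types and a hypothetical helper name rather than the real `MetricsRepo` structures:

```go
package main

import "fmt"

// containerLimits is an illustrative stand-in for the per-container
// metrics store: a nil pointer means the container has no limit set.
type containerLimits struct {
	CoresLimit *float64
}

// podCoreLimit mirrors the new aggregation rule from the hunk above:
// sum the per-container limits, but return 0 if any container lacks one.
func podCoreLimit(containers []containerLimits) float64 {
	limit := 0.0
	for _, c := range containers {
		if c.CoresLimit == nil {
			return 0.0 // one missing limit invalidates the pod-level limit
		}
		limit += *c.CoresLimit
	}
	return limit
}

func main() {
	one, two := 1.0, 2.0
	allSet := []containerLimits{{CoresLimit: &one}, {CoresLimit: &two}}
	oneMissing := []containerLimits{{CoresLimit: &one}, {}}
	fmt.Println(podCoreLimit(allSet))     // 3: every container has a limit
	fmt.Println(podCoreLimit(oneMissing)) // 0: one container has none
}
```

The memory path in the diff follows the same shape with `MemoryLimit` and `podMemLimit`.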
```diff
@@ -132,32 +140,7 @@ func eventMapping(content []byte, metricsRepo *util.MetricsRepo, logger *logp.Lo
 		kubernetes2.ShouldPut(podEvent, "start_time", pod.StartTime, logger)
 	}

-	// NOTE:
-	// - `podCoreLimit` > `nodeCores` is possible if a pod has more than one container
-	//   and at least one of them doesn't have a limit set. The container without limits
-	//   inherit a limit = `nodeCores` and the sum of all limits for all the
-	//   containers will be > `nodeCores`. In this case we want to cap the
-	//   value of `podCoreLimit` to `nodeCores`.
-	// - `nodeCores` can be 0 if `state_node` and/or `node` metricsets are disabled.
-	// - if `nodeCores == 0` and `podCoreLimit > 0` we need to avoid that `podCoreLimit` is
-	//   incorrectly overridden to 0. That's why we check for `nodeCores > 0`.
-	if nodeCores > 0 && podCoreLimit > nodeCores {
-		podCoreLimit = nodeCores
-	}
-
-	// NOTE:
-	// - `podMemLimit` > `nodeMem` is possible if a pod has more than one container
-	//   and at least one of them doesn't have a limit set. The container without limits
-	//   inherit a limit = `nodeMem` and the sum of all limits for all the
-	//   containers will be > `nodeMem`. In this case we want to cap the
-	//   value of `podMemLimit` to `nodeMem`.
-	// - `nodeMem` can be 0 if `state_node` and/or `node` metricsets are disabled.
-	// - if `nodeMem == 0` and `podMemLimit > 0` we need to avoid that `podMemLimit` is
-	//   incorrectly overridden to 0. That's why we check for `nodeMem > 0`.
-	if nodeMem > 0 && podMemLimit > nodeMem {
-		podMemLimit = nodeMem
-	}
 	// `nodeCores` can be 0 if `state_node` and/or `node` metricsets are disabled
 	if nodeCores > 0 {
 		kubernetes2.ShouldPut(podEvent, "cpu.usage.node.pct", float64(usageNanoCores)/1e9/nodeCores, logger)
 	}
```
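The deleted NOTE blocks explain why capping existed at all: under the old fallback, a container without a limit inherited the node's capacity, so the pod-level sum could exceed the node and had to be clamped afterwards. Once a missing limit zeroes the pod-level limit instead, the clamp has nothing to do. A sketch contrasting the removed behavior with the new rule (helper names are illustrative, not from the PR):

```go
package main

import "fmt"

// oldPodLimit reproduces the removed behavior: containers without a
// limit fall back to the node's capacity, and the sum is then capped
// to the node's capacity when known (the clamp this hunk deletes).
func oldPodLimit(limits []*float64, nodeCap float64) float64 {
	sum := 0.0
	for _, l := range limits {
		if l == nil {
			sum += nodeCap // the fallback this PR removes
		} else {
			sum += *l
		}
	}
	if nodeCap > 0 && sum > nodeCap {
		sum = nodeCap
	}
	return sum
}

// newPodLimit is the replacement rule: any missing limit zeroes the
// pod-level limit, so no capping is needed.
func newPodLimit(limits []*float64) float64 {
	sum := 0.0
	for _, l := range limits {
		if l == nil {
			return 0.0
		}
		sum += *l
	}
	return sum
}

func main() {
	half := 0.5
	limits := []*float64{&half, nil} // one container without a CPU limit
	fmt.Println(oldPodLimit(limits, 4.0)) // 4: 0.5+4.0 capped to node capacity
	fmt.Println(newPodLimit(limits))      // 0: limit.pct is not reported at all
}
```

With the old behavior, `usage.limit.pct` for such a pod was effectively usage against the node, which is what this PR stops reporting.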
```diff
@@ -167,6 +150,7 @@ func eventMapping(content []byte, metricsRepo *util.MetricsRepo, logger *logp.Lo
 	}

 	if usageMem > 0 {
```

**Reviewer:** Here we don't check if `podMemLimit > 0`, so this means that `memory.usage.node.pct` will always be reported for a pod, right? But what we need is to report it only in case that … Otherwise the Kibana UI will always use the node limits.

**Author:** No, … Why? The point is to split … I think the Kibana issue should be covered separately by elastic/kibana#184984.

**Reviewer:** Ok, I had a second look here. This change won't break the existing UI, because Kibana checks `kubernetes.pod.memory.usage.limit.pct > 0.0`. As long as that metric is populated, we don't care about `kubernetes.pod.memory.usage.node.pct`.

**Reviewer:** Hm, it seems to not work this way: `usage.node.pct` is > 1% for the selected pod; seems this script doesn't work as expected.

```diff
 	// `nodeMem` can be 0 if `state_node` and/or `node` metricsets are disabled
 	if nodeMem > 0 {
 		kubernetes2.ShouldPut(podEvent, "memory.usage.node.pct", float64(usageMem)/nodeMem, logger)
 	}
```
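The node-relative ratios discussed in this thread are only meaningful when the node's capacity is known, which is why both computations are guarded. A standalone sketch of those guarded ratios (function names are illustrative; the real code writes the fields via `kubernetes2.ShouldPut`):

```go
package main

import "fmt"

// cpuNodePct computes CPU usage as a fraction of the node's cores.
// nodeCores is 0 when the `state_node` and/or `node` metricsets are
// disabled; in that case the field is skipped rather than reported as 0.
func cpuNodePct(usageNanoCores uint64, nodeCores float64) (float64, bool) {
	if nodeCores == 0 {
		return 0, false // capacity unknown: do not emit the field
	}
	return float64(usageNanoCores) / 1e9 / nodeCores, true
}

// memNodePct computes memory usage as a fraction of the node's memory,
// with the same guard on unknown capacity.
func memNodePct(usageBytes uint64, nodeMem float64) (float64, bool) {
	if nodeMem == 0 {
		return 0, false
	}
	return float64(usageBytes) / nodeMem, true
}

func main() {
	if pct, ok := cpuNodePct(500_000_000, 2.0); ok { // 0.5 cores on a 2-core node
		fmt.Println(pct) // 0.25
	}
	if _, ok := memNodePct(1024, 0); !ok {
		fmt.Println("memory.usage.node.pct skipped: nodeMem unknown")
	}
}
```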
**Reviewer:** I would say that this should be moved under the "Breaking changes" section.

**Author:** It is already under "Breaking changes" > "Metricbeat": https://github.com/elastic/beats/blob/873aebc393d753d1ccd9cea9cca141067b368a3a/CHANGELOG.next.asciidoc#breaking-changes. Am I missing something?

**Reviewer:** I see this, so maybe it is a rendering issue of the page. I thought it was only under "Metricbeat".