[receiver/kubeletstats] Add new CPU utilization metrics #27276
Conversation
Because of some bad changelog stuff from #25894 I added the Skip Changelog label to get it to pass, but we do have a changelog here.
@dmitryax ready for review
LGTM
"cpu": "2m", | ||
"memory": "10M" | ||
}, | ||
"limits": { | ||
"cpu": "100m", | ||
"cpu": "4m", |
Why is this changed?
I updated the test data to give us more interesting percentages to work with. The previous values produced really small numbers when used with the values set for the first pod in https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kubeletstatsreceiver/testdata/stats-summary.json
> I updated the test data to give us more interesting percentages to work with.

I'm just having trouble finding where this change is reflected. Is there a test output?
In the scraper_test.go file, these changes let me test that there was a value above 1 for k8s.pod.cpu_request_utilization and k8s.container.cpu_request_utilization, but less than 1 for k8s.pod.cpu_limit_utilization and k8s.container.cpu_limit_utilization.
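To make the relationship concrete: with a CPU request of 2m and a limit of 4m from the updated test data, any usage between those two values gives a request utilization above 1 and a limit utilization below 1. A minimal sketch of that arithmetic, assuming a hypothetical usage of 3m (the real value the test sees comes from testdata/stats-summary.json):

```go
package main

import "fmt"

func main() {
	// Hypothetical pod CPU usage in cores; the actual value used by the test
	// comes from testdata/stats-summary.json.
	usage := 0.003 // 3m

	request := 0.002 // "cpu": "2m" in the updated test data
	limit := 0.004   // "cpu": "4m" in the updated test data

	fmt.Println(usage / request) // 1.5  -> cpu_request_utilization above 1
	fmt.Println(usage / limit)   // 0.75 -> cpu_limit_utilization below 1
}
```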
I see. Thanks for clarifying. Given that fakeRestClient provides static values, I think we should generate pdata output and use it for validation the way we do it in most other tests. Maybe worth filing an issue with the good first issue label for this. WDYT?
Ya, that is a good idea. I'll open an issue.
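For reference, a rough sketch of what the suggested golden-file validation could look like, assuming the repo's golden and pmetrictest test helpers (import paths and option names may differ between versions); scrapeWithFakeRestClient is a hypothetical helper standing in for running the scraper against fakeRestClient:

```go
package kubeletstatsreceiver

import (
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/golden"
	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatatest/pmetrictest"
)

func TestScraperAgainstGoldenFile(t *testing.T) {
	// Hypothetical helper: runs the scraper against fakeRestClient and returns
	// the resulting pmetric.Metrics.
	actualMetrics := scrapeWithFakeRestClient(t)

	// Expected output generated once (e.g. with golden.WriteMetrics) and kept
	// under testdata.
	expectedFile := filepath.Join("testdata", "scraper", "expected.yaml")
	expectedMetrics, err := golden.ReadMetrics(expectedFile)
	require.NoError(t, err)

	// Compare scraped metrics against the golden file, ignoring fields that
	// vary between runs.
	require.NoError(t, pmetrictest.CompareMetrics(expectedMetrics, actualMetrics,
		pmetrictest.IgnoreStartTimestamp(),
		pmetrictest.IgnoreTimestamp(),
	))
}
```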
Description:
Adds new CPU utilization metrics with respect to pod/container CPU limits and requests
Link to tracking Issue:
Closes #24905
Testing:
Added new unit tests and tested locally