k8s.pod.network.io gives data only from eth0 #30196
Comments
@prabhatsharma I believe you're right, thank you for bringing this to our attention. In https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kubeletstatsreceiver/internal/kubelet/network.go we use the bytes provided at the root of https://pkg.go.dev/k8s.io/kubelet/pkg/apis/stats/v1alpha1#NetworkStats. If we wanted to record all the interface stats, I believe we'd need to loop through the `Interfaces` field instead. If we did this, I believe it would add an extra dimension to the datapoints we produce for `k8s.pod.network.io`.
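For illustration only, a minimal Go sketch of such a loop over the kubelet stats types; `recordPodNetwork` and the `Printf` calls are hypothetical stand-ins for the receiver's real metric-recording code:

```go
package main

import (
	"fmt"

	stats "k8s.io/kubelet/pkg/apis/stats/v1alpha1"
)

// recordPodNetwork emits one datapoint per interface instead of relying only
// on the root-level counters (which reflect the default interface, eth0).
// fmt.Printf stands in for the receiver's actual metric-recording calls.
func recordPodNetwork(s *stats.NetworkStats) {
	if s == nil {
		return
	}
	for _, iface := range s.Interfaces {
		if iface.RxBytes != nil {
			fmt.Printf("k8s.pod.network.io{interface=%q,direction=\"receive\"} %d\n", iface.Name, *iface.RxBytes)
		}
		if iface.TxBytes != nil {
			fmt.Printf("k8s.pod.network.io{interface=%q,direction=\"transmit\"} %d\n", iface.Name, *iface.TxBytes)
		}
	}
}

func main() {
	rx, tx := uint64(1024), uint64(2048)
	recordPodNetwork(&stats.NetworkStats{
		Interfaces: []stats.InterfaceStats{
			{Name: "eth0", RxBytes: &rx, TxBytes: &tx},
			{Name: "lo", RxBytes: &rx, TxBytes: &tx},
		},
	})
}
```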
This issue makes sense; it looks like we are collecting only the default network stats. It would make sense to add the interface name as an extra dimension. Regarding the breaking change, we could put it behind a feature flag?
Definitely a feature flag. Also, I think we're in luck: the existing metric already defines an `interface` attribute.
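For reference, a rough sketch of what such a gate could look like using the collector's `featuregate` package; the gate ID, variable name, and description here are made up for illustration and are not necessarily what was adopted:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/featuregate"
)

// emitPerInterfaceMetrics is a hypothetical gate: when enabled, the receiver
// would emit k8s.pod.network.io datapoints for every interface reported by the
// kubelet stats API instead of only the default one.
var emitPerInterfaceMetrics = featuregate.GlobalRegistry().MustRegister(
	"receiver.kubeletstats.emitPerInterfaceNetworkMetrics", // made-up ID for illustration
	featuregate.StageAlpha,
	featuregate.WithRegisterDescription("Emit network metrics for all pod interfaces, not only the default interface."),
)

func main() {
	// The scrape code would branch on the gate when building datapoints.
	fmt.Println("per-interface metrics enabled:", emitPerInterfaceMetrics.IsEnabled())
}
```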
I believe this issue is also impacting the … I am wondering whether we need any additional logic for pods that run with hostNetwork, since those would have all of the host's network interfaces show up, which can blow up cardinality, and the values might not even make sense since they reflect the whole host's traffic.
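One possible guard for that case, sketched under the assumption that the pod spec (and thus `spec.hostNetwork`) is available to the receiver; `shouldEmitPerInterface` is a hypothetical helper, not existing receiver code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// shouldEmitPerInterface returns false for host-network pods: they share the
// node's interfaces, so per-interface datapoints would duplicate node traffic
// and inflate cardinality. Those pods would keep the default-interface behavior.
func shouldEmitPerInterface(pod *corev1.Pod) bool {
	return pod != nil && !pod.Spec.HostNetwork
}

func main() {
	p := &corev1.Pod{}
	p.Spec.HostNetwork = true
	fmt.Println(shouldEmitPerInterface(p)) // false: fall back to default-interface stats
}
```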
#33993 reports the issue for the …
I don't believe there was any blocking reason. We still want this, but it has been hard to prioritize.
Revived that at #34287. PTAL.
Component(s)
receiver/kubeletstats
What happened?
Description
I have been trying to get network I/O details per namespace. Comparing the metric `k8s.pod.network.io` with cAdvisor's `container_network_receive_bytes_total`, I found that kubeletstatsreceiver does not return data for interfaces other than `eth0`. This gives an incomplete picture of the network bandwidth being used.
Steps to Reproduce
sum(rate(k8s_pod_network_io{k8s_namespace_name="$k8s_namespace_name", direction="receive"}[5m])) by (interface)
vs
sum(rate(container_network_receive_bytes_total{namespace="$k8s_namespace_name"}[5m])) by (interface)
Expected Result
I should see data from all interfaces in the `k8s_pod_network_io` stream.
Actual Result
Got results only for `eth0`.
Collector version
v0.90.1
Environment information
Environment
Amazon EKS
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response