nginx_ingress_controller_requests is missing for Ingress that has had no requests #6937
Comments
/assign
Please upgrade to 0.46.0 and see if this is still an issue.
/triage needs-information
Hi @strongjz, sorry for the delayed response. I upgraded to 0.47.0 and repeated my test above (created a new debug Ingress, waited for the host and address to appear, checked whether the metric appeared) and still could not see the metric.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its stale-lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its stale-lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hey @strongjz do you need any more information?
I can confirm this behavior on version v1.0.4 / 4.0.6, where the metric is likewise missing for Ingresses that have received no requests.
/triage accepted
We welcome any contributions.
@iamNoah1: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove needs-information
/remove triage/needs-information
/remove triage-needs-information
The issue seems to be a bit old, but I'm presenting my thoughts here to provide some closure to others who might see this. I feel the problem at hand is that the metric itself carries multiple labels, and those labels can take many values. This makes it harder to initialize the metric with a zero value up front.
@iamNoah1 I am new to open source and Kubernetes, and I would like to work on this issue.
@johurul000 can you write your own technical description of the problem to be solved here?
@longwuyuan No; since I am new, guidance would be very helpful.
I mean, is the metric missing in your cluster as well?
Hope it helps a fellow user: I was using a wildcard domain and I had to add
Hey, is anyone still working on this? I think I can take it up if someone can guide me on how to achieve this.
I too am running into this issue when trying to implement Flagger, which refers to this metric in its documentation. Would love an update on this.
I'm running into this issue while using the NGINX Ingress Prometheus Overview in GCP Monitoring with the latest version.
IMO this is a Prometheus issue; they could introduce a function that returns 0 when a series is absent. This isn't an issue specific to nginx_ingress_controller_requests.
Have you tried to report this "issue" to Prometheus? If yes, please share the link to your report.
I haven't. I have read somewhere that it's the responsibility of the metric reporter to pre-populate the 0s. I can't find the GitHub issue page now.
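As a side note, PromQL already offers a query-side workaround along these lines: combining a selector with the vector() function via the or operator substitutes a literal 0 when no series matches. A minimal sketch, assuming an Ingress named test as in the original report:

# Per-ingress request count, or a single unlabeled 0 sample when no
# matching series exists yet.
sum(nginx_ingress_controller_requests{ingress="test"}) or vector(0)

The substituted 0 sample carries no labels, so this only helps queries scoped to one ingress; it does not help dashboards that group by label.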
I can easily reproduce this and can see that the metric doesn't appear until requests come through. However, I'm not sure there's an obvious fix. We could just set the metric to 0 when an Ingress object is found, but the metric includes labels like method, path and status (as in HTTP status code). What should these be set to when we set the value to 0? I'm not sure there's a nice way to do this so that the metric appears right away without us ending up with metrics that are always 0 because the default labels we choose are never matched by a real request.
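To illustrate the label problem described above, here is a rough sketch of what one exposed series looks like once traffic arrives, showing only the labels named in the comment above (real deployments expose additional labels, and all values here are hypothetical):

nginx_ingress_controller_requests{ingress="test", method="GET", path="/", status="200"} 17

A pre-initialized zero series would have to commit to concrete method, path and status values like these, which is why there is no obvious default to choose.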
@rikatz I know that I have a follow-up for the tests, but if you don't mind I would like to work on this issue :D
/assign @StuxxNet
Any update on this?
NGINX Ingress controller version: 0.30.0
Kubernetes version (use kubectl version): 1.15
Environment:
What happened:
The nginx_ingress_controller_requests metric was missing for Ingresses that have had 0 requests. I queried nginx_ingress_controller_requests == 0 against our Prometheus metrics and found no time series.
What you expected to happen:
I expected Ingresses that have had no requests sent to them to have a nginx_ingress_controller_requests metric with a count of 0 rather than the metric not being present.
How to reproduce it:
Created a new Ingress named test that receives no requests and waited for its host and address to appear.
Queried nginx_ingress_controller_requests{ingress="test"} and found no time series.
Also exec'd to a pod on the cluster and hit the NGINX controller's /metrics endpoint to check if our Prometheus stack was filtering out the metric, but it didn't appear there either.
Anything else we need to know:
Please let me know if you need any other info from me, thanks for your time :)
/kind bug
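As an aside for anyone who needs to detect or alert on this gap in the meantime: PromQL's absent() function makes the missing series visible. A sketch using the same test Ingress from the reproduction steps above:

# Returns a single series with value 1 while no matching series exists,
# and returns nothing once the ingress has served at least one request.
absent(nginx_ingress_controller_requests{ingress="test"})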