Thanos Receive: "Out of bounds metric" #4831
Also, this happens constantly when rolling out updates to receivers: while a replica is taken down to be replaced, the hashring can no longer contact that member of the ring.
Also, we've noticed that when a receiver begins to report poorly, this is the behavior we see:
Logs begin to accumulate:
and then everything grinds to a halt. We are users of the thanos-receive-controller; it doesn't seem to help when the ring gets into this state. Args we run receive with:
We did recently find cortexproject/cortex#2366, which looks very related and potentially promising. Is there anything we can do to alleviate this issue in our environments for the time being?
Short term, we should check whether the receiver response is correct. OutOfBounds is unrecoverable. Do we also report this back to Prometheus as such? That would at least resolve the infinite loop of retries.
Hit this issue quite a lot recently.
We used to hit it too, but since switching to the receive router & ingestor mode we haven't hit it. I'm not 100% sure it's related, but it might be worth trying if you are not using that mode yet. Details on our setup here.
It can be related if your receive is struggling to keep up, I'm sure. For this issue, what I would really like is an easy-to-replicate setup for my dev environment. There was a related issue, which I can no longer find. What I stated there is that I believe we do not handle the HTTP code correctly. The short version is that the remote write endpoint can basically return two things to Prometheus; 1: "error, try again" and 2: "error, unrecoverable". I think there is a chance we handle case 2 as case 1, causing basically a DoS until one resets Prometheus again.
Exactly; the Prometheus team claims it handles the HTTP code properly and that it's on the Thanos Receive side: prometheus/prometheus#5649. Going to peel apart our Thanos config into the router and ingestor and report back.
After giving Thanos Receive double the resources as well as breaking it down into a router/ingestor, we still see this issue using the latest, 0.24.0. What's interesting is that it begins with a single tenant (picture below) and then cascades over time to ALL tenants reporting to receive. If we're not the only ones having this issue, this seems like a massive problem that will happen to any receive user over time.
What would help me is a very easy-to-reproduce example. I understand this is perhaps hard to do, but I've already spent some time on this without triggering this error even once, while actually pushing some really weird stuff to the receiver. Could you help me with this?
Sure, absolutely, I'll help reproduce it in any way I can. Cloud Provider: GKE v1.21.6-gke.1500. Unfortunately, it seems time is a factor in being able to reproduce this issue, but we've found you can potentially force it if you can get your Prometheus to fall behind in shipping its WAL for a bit by choking its connection throughput. The metrics it usually complains about tend to be related to the cadvisor and kube-state-metrics we ship:
and sometimes we catch these:
Let me know what other information would be useful for you to potentially reproduce this. We cannot reliably get it to happen, but currently it happens at least monthly. We are looking at building a system to mitigate it on sites with Prometheus if it begins spinning out of control.
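Not from the thread, but since reproduction keeps coming up: one way to approximate the "Prometheus falls behind shipping its WAL" condition in a Kubernetes dev environment is to make the receive endpoint unreachable for a few hours and then restore it. The namespace, labels, and policy name below are assumptions for illustration only.

```yaml
# Hypothetical sketch: deny all ingress to the Thanos Receive pods so remote
# write backs up in the Prometheus WAL, then delete the policy and observe how
# Receive handles the replay of old samples. Names and labels are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: simulate-receive-outage
  namespace: monitoring                        # assumed namespace of the receivers
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: thanos-receive   # assumed pod label
  policyTypes:
    - Ingress
  ingress: []                                  # deny all ingress while the policy exists
```

Whether this actually trips the out-of-bounds path depends on how far the receiver's TSDB head has advanced in the meantime, so treat it as a starting point rather than a guaranteed reproduction.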
Do you monitor Prometheus with https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/manifests/grafana-dashboardDefinitions.yaml (see prometheus-remote-write.json)? If so, can you show me how it looks? I will try to reproduce this again tomorrow or so.
I could reproduce this by fiddling around with my local time, causing:
What I could not reproduce, however, is the behaviour from the Prometheus side. Perhaps this is because I'm using an older Prometheus version, since I'm having some issues installing Prometheus on my M1.
@Kampe is there any way you could perhaps test my PR in some staging/dev environment?
Yes, absolutely. Is there an image SHA built for this anywhere that I can reference? I'd be glad to deploy it into a development environment tomorrow!
We only have that for things in |
I looked a bit through the original report and I'm wondering whether there are two distinct things going on:
@Kampe did you say you detected this behavior during receiver rollout (i.e. a period of time when some replicas might be in a non-ready state), or also outside of this?
Interesting, it seems you're correct that there are two events happening here, and we also see the context deadline behavior during the rollouts; however, these subside eventually after all pods are ready. We do see them crop back up over time, yes. We thought this behavior was related to the receiver's resourcing, but we still see this issue when giving receive double the CPU we were requesting. We also rolled out the latest topology using a router and ingestor in hopes that it would solve the issue. No avail. You can see the 409s start happening in my graphs above ^^
I see, so it seemingly happens at random times. It seems a receiver replica simply fails to forward the request to other receiver(s), and this is what manifests itself here. You mention in the original post that killing Prometheus instances resolves it? (I'm thinking about how that could be related, since if this is a connectivity issue between receivers, I would expect it to keep happening even after a Prometheus instance restarts.) Does it subside by itself, or is that step still necessary?
Yeah, it's very strange as to when we see it appear, and we are not able to reliably reproduce it. It not only requires that the Prometheus is restarted, but also that its entire WAL is dumped, or else you'll see the same behavior repeat on restart. (We use StatefulSets for Prometheus since we run the prometheus-operator, so we delete the entire StatefulSet to get it to report to Thanos Receive properly again.) This is our go-to strategy whenever we see any behavior like this (no matter 500s or 400s, as we judge based on the last reported block compared to the currently produced block). We're currently working on an operator to detect that a Prometheus is behind in shipping its blocks and to call the local k8s API to kill the Prometheus instances, forcing remote writes to report properly again, because doing this manually at any scale is, as you can imagine, very cumbersome.
Hey folks - just chiming in here. I am a colleague of @jmichalek132 and can confirm that we are seeing this behavior happen in our environment even after the move to running Thanos Receive in router and ingestor mode. The frequency is very irregular, and the behavior seems to only affect the Prometheus instances in a particular environment, while instances in other environments writing to the same endpoint see little to no degradation in their remote write. We typically see increased latency and timeouts on the router; however, the ingestors do not exhibit any substantial latency. This was also validated by forking Thanos and adding more instrumentation (spans) around the TSDB write. If you have any pointers as to what is worth investigating next, we are all ears.
Hello 👋 Looks like there was no activity on this issue for the last two months. |
Our team is currently experiencing this issue too. Did you have any luck with that operator to kill Prometheus instances as a workaround?
Hello 👋 Looks like there was no activity on this issue for the last two months. |
The same problem occurred last week in our system. Several Prometheus instances stopped remote writing to Thanos due to a network problem; when the network recovered after two days, Thanos receive-distributor and receive crashed. We had to restart all Prometheus instances and drop all WAL data. The version we use is 0.28.0.
This is a limitation of TSDB and should be solved once we have support for out of order samples. |
@fpetkovski me and @philipgough also had big troubles because of this and worked on a fix at #5970. Basically, we return a 4xx if the request contained any out-of-bounds sample, so that Prometheus stops retrying that request forever, since it quickly snowballs and brings Receives to a halt. This is still pretty much valid even once we support out-of-order samples, because:
Thanks for the explanation. I wonder why this is not covered by https://github.com/thanos-io/thanos/blob/main/pkg/receive/handler.go#L884 |
@fpetkovski same. I think some things got lost in translation between this issue and #5131.
Also, according to this comment it looks like at some point "out of bounds" returned a 409 error. |
@fpetkovski I suspect it's because of these two code paths: lines 516 to 528 in e911f03, and lines 939 to 977 in e911f03.
Maybe the underlying issue is on the fact that |
Now that #5910 has been merged, could this issue be fixed?
@fpetkovski does this issue not predate the ketama hashing algorithm?
That's correct, but in that PR we revamped the way we calculate quorum. I suspect it should help, but I am not 100% sure. Do you have a way to reproduce this issue somehow?
As @philipgough can confirm, Thanos isn't at fault here. The error handling code is working fine. At least in our scenario, we attribute the 500s to being spammed (effectively DDoSed) by aggressive Prometheus remote write configurations (short min backoff, short client-side timeout, big max shards, etc.).
We hit this issue again recently in production with Thanos. Initially I thought there was a potential for Thanos to return a 5xx when it should not, and that it could be related to the existing #5407. However, adding some tests via #5976 appears to show that is not the case and the error handling is correct. What I did manage to highlight with a test is that an unrecoverable error with a forward request that takes longer than the forward timeout and results in a context cancellation will return a 500. So we can envisage a situation where one or more receivers becoming slow, or a slow network between some receivers, can start a loop in which remote write clients retry over and over for requests that will never succeed. We can see from the graphs below that as things become slow for whatever reason (in our case, we believe we were sending traffic to receivers that were not fully ready), a vicious cycle starts, with crashing receivers, a build-up of appenders, etc. We mitigated the incident in the end by rolling out additional receivers, and when everything became healthy we started to handle requests and respond with 429s correctly.
Hey folks - I wanted to chime in here with some relevant observations. Although I could write at length about this topic, I will keep it brief. Historically, when we did deployments on our receive cluster (specifically the ingesters), we would see all sorts of errors (500/503/409). The logs revealed several types of errors on ingestion, from "Out of Bounds" to "Already Exists". However, I noticed that when the deployment began we would also see errors on the routers claiming that quorum wasn't reached. This always perplexed me, since our replication factor was 3 and rollouts on the ingester StatefulSets would happen one at a time, with appropriate probes (startup, readiness and liveness) preventing the rollout from progressing until the previous ingester was ready to receive traffic. So theoretically quorum should in fact have been reached, but it wasn't. I then considered the possibility that there is a race condition and decided to update the ingester StatefulSets with the
I do still recommend tuning the Prometheus remote write params to reduce its aggressiveness on retries, as well as the max samples per batch to reduce the overall remote write rate. In our experiments we noticed that doubling the number of samples per batch from the default 500 to 1000 had virtually no impact on remote write latency (from the Prometheus perspective). This is now our default configuration across the board. There is likely a tradeoff for the appropriate samples per batch, beyond which individual remote write requests start taking too long, so be sure to test your changes.
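For readers looking for something concrete: a sketch of the kind of remote_write tuning described above, using plain Prometheus configuration (the prometheus-operator exposes the same knobs under spec.remoteWrite). Apart from the 1000 samples-per-batch figure mentioned in the comment, the values and the endpoint URL are illustrative assumptions, not recommendations from this thread.

```yaml
# Sketch of a less aggressive remote_write section; values other than
# max_samples_per_send are illustrative assumptions.
remote_write:
  - url: https://thanos-receive.example.com/api/v1/receive   # assumed endpoint
    remote_timeout: 30s            # allow slow receivers some room before client-side timeouts
    queue_config:
      max_samples_per_send: 1000   # doubled from the default 500, per the comment above
      max_shards: 30               # cap parallelism so a struggling receiver is not hammered
      min_backoff: 100ms           # retry a little less aggressively
      max_backoff: 10s             # let retries spread out under sustained errors
```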
If you are using a k8s environment with the thanos-receive-controller, make sure that the configmap is updated on each receiver pod at the same time. In my case, it took between 1 and 5 minutes for the full hashring to synchronise (for all pods to see the changed config), at which point I noticed a context deadline error on the receiver, which pushed back Prometheus remote writes and increased the desired shard count. As @vanugrah said, setting
I think the Thanos receiver should also have its hashring set up based on URLs (stable DNS names), not IPs. A hashring that lists the pods works well:
but a headless service does not work:
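The configuration snippets from the comment above were not preserved, so as an illustration only: assuming receivers run as a StatefulSet named thanos-receive with a governing headless service thanos-receive-headless in a monitoring namespace, a hashring file that lists stable per-pod DNS names might look like the following. All names, the namespace, and the port are assumptions.

```yaml
# Illustrative sketch only (the original snippets were lost): a hashring file
# that addresses each receiver pod by its stable DNS name rather than by a
# single service name that resolves to pod IPs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: thanos-receive-hashring
  namespace: monitoring
data:
  hashrings.json: |
    [
      {
        "hashring": "default",
        "endpoints": [
          "thanos-receive-0.thanos-receive-headless.monitoring.svc.cluster.local:10901",
          "thanos-receive-1.thanos-receive-headless.monitoring.svc.cluster.local:10901",
          "thanos-receive-2.thanos-receive-headless.monitoring.svc.cluster.local:10901"
        ]
      }
    ]
```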
Hey folks, wanted to chime in here since I forgot to mention something in my earlier post. If you use the thanos-receive-controller in a k8s environment, then as @gurumee92 identified, uneven updating of the hashring configmap across routers is a recipe for disappointment. Though instead of increasing the
The way we have solved this is by adding this sidecar, https://github.com/squat/configmap-to-disk, to our router pods, such that when there are any changes to the configmap, the on-disk representation is updated in near real time (a few seconds to converge) with the magic of informers. @philipgough has also forked the thanos-receive-controller and implemented an informer version here: I'd say that this has been one of the most impactful changes for remote write stability in our pipeline, which processes about 500 million active series.
Hello!
We run a handful of "edge" sites in a hub-and-spoke topology and feed metrics from remote promethei to a centralized receiver hashring. We see issues pop up quite often relating to "Out Of Bounds" metrics in the receiver. The receiver will begin to return 500s to Prometheus remote write, and receiver latency climbs, impacting other tenants. We're unsure if the metrics are too far in the future or in the past; however, we assume the past, as it's likely we experienced network or latency issues with shipment of remote metrics to the cloud. It seems this is related to TSDB and Prometheus WAL limitations? prometheus/prometheus#9607
The shipping Prometheus itself is set up with the prometheus-operator and has been configured with
disableCompaction: true
to ensure it doesn't ship old blocks to Thanos, too (even though we'd LOVE (read: need) to have them...). Worse? When Prometheus fails to remote write, it just keeps retrying, getting into a loop with the receiver until the block is discarded. This is 100% not intended, and there probably needs to be a feature/flag added on one side of the system here to help prevent it (as data caps are a real thing and this gets into 100s of GBs quickly if unchecked).
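For context, a minimal sketch of where that setting lives in a prometheus-operator managed Prometheus; the resource name, namespace, and remote write URL below are assumptions, not taken from this report.

```yaml
# Minimal sketch of a prometheus-operator Prometheus resource with local
# compaction disabled, as described above. Names and the URL are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: edge-site
  namespace: monitoring
spec:
  replicas: 1
  disableCompaction: true          # as referenced in the issue description
  remoteWrite:
    - url: https://thanos-receive.example.com/api/v1/receive   # assumed endpoint
```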
Related and similar: #4767. However, we certainly have unique labels on our items, and we rarely get 409 conflicts, but constantly 500s.
Thanos Version: 0.22.0
Prometheus Version: 2.30.3
Object Storage Provider:
GCP
What happened:
Thanos Receive will on occasion get into a fit with the samples a Prometheus sends up to it for storage, throwing Out of Bounds errors and causing cascading issues for the ingestion of metrics globally within the entire system.
What you expected to happen:
Thanos Receive accepts the samples and stores them in the bucket without issue.
How to reproduce it (as minimally and precisely as possible):
We see this most with shipments of kube-state-metrics; however, you can get any Prometheus into this state with a receiver if you wait long enough.
Full logs to relevant components:
Receiver Logs:
Prometheus Logs:
Work Around:
Kill all reporting promethei and their relevant StatefulSets; then Receive is happy. Hopefully this helps illustrate how this isn't scalable given (N) tenants :(