Thanos receiver keeps growing #5601
Hi, is this a new setup or has the receiver started to grow unexpectedly on an existing setup? I'm not very familiar with the Helm chart, but could you maybe provide all the parameters you are running your receiver with?
This is a new setup. All the parameters I deployed Helm with are already given above, so nothing special at all.
Any idea, @matej-g? Thanks!
Hello 👋 Looks like there was no activity on this issue for the last two months.
@matej-g I've got the same issue regarding the receiver: it does upload blocks to object storage but doesn't delete them from local disk. Here are the params:
As you can see, we have blocks being created every 2 hours.
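The commenter's actual flag list and directory screenshot are not reproduced above, so the following is only an illustrative sketch of how a receiver container is typically invoked; the flag names are real Thanos 0.27 `receive` flags, but all values are assumptions. The flag of interest is `--tsdb.retention`: the receiver keeps local TSDB blocks for this window (default 15d) even after they have been uploaded to object storage, so 2-hour blocks accumulating on disk for roughly two weeks is expected behaviour rather than a failure to clean up.

```yaml
# Illustrative receiver container args (values are assumptions, not the
# commenter's actual configuration).
args:
  - receive
  - --grpc-address=0.0.0.0:10901
  - --http-address=0.0.0.0:10902
  - --remote-write.address=0.0.0.0:19291
  - --tsdb.path=/var/thanos/receive
  # Local blocks are kept for this long even after a successful upload to
  # object storage; lowering it shrinks local disk usage on the receiver.
  - --tsdb.retention=15d
  - --objstore.config-file=/etc/thanos/objstore.yml
  - --label=replica="$(POD_NAME)"
```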
Thanos, Prometheus and Golang version used:
Thanos version 0.27.0
Prometheus version 2.35.0
Object Storage Provider:
MinIO (S3)
What happened:
Currently we run Thanos Receive, Query, Compactor and Store Gateway.
We have 6 Prometheus instances remote-writing to Thanos (a remote_write sketch follows this section).
But our Thanos receiver keeps growing: in a week the persistent volume has grown by 80 GB.
We see the same growth in our S3 MinIO backend.
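For context, the remote write side of such a setup is typically configured as below. The service name, namespace and filename are placeholders, not taken from the report; port 19291 and `/api/v1/receive` are the Thanos receiver's default remote-write port and endpoint.

```yaml
# prometheus.yml (sketch): each of the six Prometheus instances ships samples
# to the Thanos receiver over remote write.
remote_write:
  - url: http://thanos-receive.monitoring.svc.cluster.local:19291/api/v1/receive
```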
What you expected to happen:
The Thanos receiver's local disk usage not to keep growing like this.
How to reproduce it (as minimally and precisely as possible):
Not sure, but we installed Thanos via Helm, chart version 10.5.5.
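The chart values used for that install are not included in the report. As a sketch only, assuming the Bitnami `thanos` chart (key names are from memory and may differ between chart versions), the settings that bound local disk and bucket growth look roughly like this:

```yaml
# values.yaml sketch (assumed Bitnami thanos chart keys; verify against the
# installed chart version). receive.tsdbRetention caps how long uploaded
# blocks stay on the receiver's persistent volume; the compactor retention
# settings cap how long blocks stay in the MinIO bucket.
objstoreConfig: |-
  type: S3
  config:
    bucket: thanos
    endpoint: minio.minio.svc.cluster.local:9000
receive:
  enabled: true
  tsdbRetention: 15d
  persistence:
    size: 100Gi
compactor:
  enabled: true
  retentionResolutionRaw: 30d
  retentionResolution5m: 30d
  retentionResolution1h: 1y
storegateway:
  enabled: true
query:
  enabled: true
```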
Full logs to relevant components:
No errors are logged in the Thanos receiver.
Anything else we need to know:
We run this on a Kubernetes Rancher environment.