Thanos, Prometheus and Golang version used:
thanos version: 12.1.2
loki-stack version: 2.1.0

Object Storage Provider:
S3

What happened:
The store gateway pod keeps restarting.

What you expected to happen:
I expect it not to restart. :)

How to reproduce it (as minimally and precisely as possible):
It just started restarting.

Full logs to relevant components:
level=info ts=2023-09-11T19:54:45.893399645Z caller=bucket.go:639 msg="loaded new block" elapsed=89.663576ms id=01GVMMB8DC3RA0AW8H51YM6TZ7
level=info ts=2023-09-11T19:54:45.910767989Z caller=store.go:388 msg="bucket store ready" init_duration=4m28.844156166s
level=info ts=2023-09-11T19:54:45.917748864Z caller=intrumentation.go:56 msg="changing probe status" status=ready
level=info ts=2023-09-11T19:54:45.923462671Z caller=grpc.go:131 service=gRPC/server component=store msg="listening for serving gRPC" address=0.0.0.0:10901
level=info ts=2023-09-11T19:55:09.517720887Z caller=fetcher.go:478 component=block.BaseFetcher msg="successfully synchronized block metadata" duration=23.606090174s duration_ms=23606 cached=8646 returned=8646 partial=0
level=info ts=2023-09-11T19:56:06.121480024Z caller=main.go:172 msg="caught signal. Exiting." signal=terminated
level=warn ts=2023-09-11T19:56:06.141687405Z caller=intrumentation.go:67 msg="changing probe status" status=not-ready reason=null
level=info ts=2023-09-11T19:56:06.14581228Z caller=http.go:91 service=http/server component=store msg="internal server is shutting down" err=null
level=info ts=2023-09-11T19:56:06.164297202Z caller=http.go:110 service=http/server component=store msg="internal server is shutdown gracefully" err=null
level=info ts=2023-09-11T19:56:06.168185318Z caller=intrumentation.go:81 msg="changing probe status" status=not-healthy reason=null
level=info ts=2023-09-11T19:56:06.186385795Z caller=grpc.go:138 service=gRPC/server component=store msg="internal server is shutting down" err=null
level=info ts=2023-09-11T19:56:06.191434723Z caller=grpc.go:151 service=gRPC/server component=store msg="gracefully stopping internal server"
level=info ts=2023-09-11T19:56:06.450578316Z caller=grpc.go:164 service=gRPC/server component=store msg="internal server is shutdown gracefully" err=null
level=info ts=2023-09-11T19:56:06.629136152Z caller=main.go:164 msg=exiting
rpc error: code = NotFound desc = an error occurred when try to find container "aea3632741416df0faf19c9a7baef80f1229439145a921ef0804358d948671dc": not found
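Note that the last Thanos log entry is msg="caught signal. Exiting." signal=terminated, so the process is being told to shut down from outside rather than crashing on its own; Kubernetes is the likely source of the SIGTERM. A minimal sketch of the commands that could confirm why (the pod name thanos-storegateway-0, namespace monitoring, and container name storegateway below are placeholders, not values from this report):

# Look for OOMKilled, failed liveness/readiness probes, or eviction in the events.
kubectl -n monitoring describe pod thanos-storegateway-0

# Print the reason and exit code recorded for the previous container termination.
kubectl -n monitoring get pod thanos-storegateway-0 \
  -o jsonpath='{.status.containerStatuses[?(@.name=="storegateway")].lastState.terminated}'

# Show cluster events for the pod around the time of the restart.
kubectl -n monitoring get events --field-selector involvedObject.name=thanos-storegateway-0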
Can you update to the latest Thanos version and retry?
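For reference, the reported "thanos version: 12.1.2" looks like a Helm chart version rather than a Thanos release (Thanos releases are 0.x), so the upgrade would most likely go through Helm. A minimal sketch, assuming the Bitnami Thanos chart and a release named thanos in namespace monitoring (all of these are assumptions, not confirmed by the report):

# Refresh local chart indexes.
helm repo update

# List available chart versions and the Thanos app version each one ships.
helm search repo bitnami/thanos --versions

# Upgrade the release in place, keeping the existing values.
helm upgrade thanos bitnami/thanos -n monitoring --reuse-values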