Query: grpc compression not working #5827

Open
yutian1224 opened this issue Oct 27, 2022 · 2 comments

@yutian1224

Version:

Thanos: v0.29.0-rc.0
Image: thanosio/thanos:v0.29.0-rc.0

What happened:

I ran a gRPC compression test with the latest version of the image, trying to reduce transmission traffic.

Data flow diagram:
store(grpc) -> query1(grpc) -> query2(http) -> queryFrontend

query1 traffic map:
yellow: with --grpc-compression=snappy
green: with --grpc-compression=none
[traffic graph image]

query2 traffic map:
yellow: with --grpc-compression=snappy
green: with --grpc-compression=none
[traffic graph image]

What you expected to happen:

The bandwidth in query1 & query2 should be reduced after turning on grpc compression.

Details of the args:

store:
receive --log.level=info --log.format=logfmt --grpc-address=0.0.0.0:10901 --http-address=0.0.0.0:10902 --remote-write.address=0.0.0.0:19291 --objstore.config=$(OBJSTORE_CONFIG) --tsdb.path=/var/thanos/receive --label=thanosreplica="$(NAME)" --label=receive="true" --tsdb.retention=1d --receive.local-endpoint="$(ENDPOINT)"

query1:
query --log.level=info --log.format=logfmt --grpc-address=0.0.0.0:10901 --http-address=0.0.0.0:10902 --endpoint="$(ENDPOINT)" --query.auto-downsampling --query.default-step=30s --query.metadata.default-time-range=5m --query.max-concurrent=100 --query.max-concurrent-select=20 --grpc-compression=snappy

query2:
query --log.level=info --log.format=logfmt --grpc-address=0.0.0.0:10901 --http-address=0.0.0.0:10902 --endpoint="$(QUERY1)" --query.auto-downsampling --query.default-step=30s --query.metadata.default-time-range=5m --query.max-concurrent=100 --query.max-concurrent-select=20 --grpc-compression=snappy

frontend:
query-frontend --log.level=info --log.format=logfmt --web.disable-cors --http-address=0.0.0.0:10902 --query-frontend.compress-responses --query-frontend.downstream-url="$(QUERY2)" --query-frontend.downstream-tripper-config='{"max_idle_conns_per_host": 500, "idle_conn_timeout": "5m"}' --query-range.split-interval=3h --query-range.align-range-with-step --query-range.max-query-parallelism=56 --query-range.max-retries-per-request=5 --query-range.response-cache-max-freshness=15s --query-range.response-cache-config-file=/conf/cache/config.yml --labels.default-time-range=30m --labels.split-interval=3h --labels.partial-response --labels.response-cache-config-file=/conf/cache/config.yml --labels.max-retries-per-request=5 --labels.response-cache-max-freshness=15s

@GiedriusS
Member

GiedriusS commented Oct 31, 2022

What about the average traffic usage difference? It's hard to tell what's happening just from the usage graphs. I'm 99% sure that compression works, because with --grpc-compression=snappy my project https://github.com/GiedriusS/thanos-rust doesn't work (hyperium/tonic#282):

Error executing query: proxy Series(): rpc error: code = Aborted desc = receive series from Addr: 127.0.0.1:50051 LabelSets: {dc="hx", prometheus_node_id="5"} Mint: -9223372036854775808 Maxt: 9223372036854775807: rpc error: code = Unimplemented desc = Message compressed, compression support not enabled.

Perhaps your traffic consists of a lot of unique labels or you have lots of different, small queries hence there's no obvious effect.
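The point about unique labels can be illustrated with a quick sketch: a general-purpose compressor only helps when the payload is redundant. A minimal demo using Python's stdlib zlib as a stand-in for snappy (the exact ratios differ between the two codecs, but the trend is the same; the sample series line is made up):

```python
import os
import zlib

def ratio(payload: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(payload)) / len(payload)

# Highly repetitive series data (few unique label values) compresses very well.
repetitive = b'up{job="node",instance="10.0.0.1:9100"} 1\n' * 1000

# High-entropy data (modeling many unique label values) barely compresses,
# and can even grow slightly due to framing overhead.
unique = os.urandom(len(repetitive))

print(f"repetitive: {ratio(repetitive):.3f}")  # well under 0.1
print(f"unique:     {ratio(unique):.3f}")      # close to (or above) 1.0
```

So if the traffic is dominated by unique label values, enabling compression may shave very little off the bandwidth graphs.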

@yutian1224
Author

Perhaps your traffic consists of a lot of unique labels or you have lots of different, small queries hence there's no obvious effect.

This may be the key: we have a lot of alerting and recording rules, and queries with large time spans are also sharded by the frontend.
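The sharding effect can also be sketched: splitting one large response into many small messages means a per-message compressor cannot exploit redundancy across shards, so the total compressed size grows. Again using stdlib zlib as a stand-in for snappy, with a made-up series line:

```python
import zlib

# One synthetic "large" response: repetitive time-series lines.
line = b'http_requests_total{code="200",handler="/api"} 1027 1666900000000\n'
full = line * 2000  # one big response

# The same data split into many small shards (e.g. per split interval).
shards = [line * 20 for _ in range(100)]

whole = len(zlib.compress(full))
pieces = sum(len(zlib.compress(s)) for s in shards)

print(f"compressed whole:  {whole} bytes")
print(f"compressed shards: {pieces} bytes")  # noticeably larger in total
```

With many small, distinct queries the per-message savings shrink the same way, which would explain a flat bandwidth graph even with compression enabled.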
