CompressionLayer does not produce compressed data in streaming #292
Comments
Thanks for the report! Compressing streaming responses is hard and I don't know a lot about how to do it or whether it requires changes in async-compression. I don't have time to work on this but PRs are much appreciated!
@davidpdrsn in Nullus157/async-compression#154 we're going towards adding a way to flush the encoder. Would it be acceptable to have something else than the current `AsyncBufRead`-based interface here? I'm also wondering if this issue would also appear on the decompression side, considering Nullus157/async-compression#123.
Yeah, the whole …
In my case I only have a few endpoints where streaming is important. So I have worked around this by adding a middleware outside of the compression layer that strips the `accept-encoding` header for those endpoints:

```rust
tower::ServiceBuilder::new()
    .map_request(|mut req: axum::extract::Request| {
        // Without `accept-encoding`, CompressionLayer leaves this
        // request's response uncompressed, so it streams normally.
        if req.uri().path() == "/my-streaming-endpoint" {
            req.headers_mut().remove("accept-encoding");
        }
        req
    })
    .layer(tower_http::compression::CompressionLayer::new())
```

Unfortunately you can't do this based on the response. One option would be a marker extension on the response object; maybe that could be included in the compression layer itself.

How to do streaming compression is definitely quite hard. However, I think the goal is pretty clear: more or less, we want to flush the compression whenever the send buffer is empty. Basically it is better to send lightly-compressed data than nothing. It isn't quite that simple due to things like proxies in the middle that may be buffering for the client, so our send buffer may empty quickly even if there is a buffer somewhere between us and the client. But I suspect some basic metric like "if there is no new data for 10ms, flush" is probably good enough for 99% of cases. It will allow good compression for responses where the server is streaming data quickly, while intermittent responses will have each chunk flushed fairly quickly.

The ideal situation would also include some sort of explicit flush, so that the generator can indicate when something should urgently be seen by the end user. But I think auto-flush is still likely a good default.
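To make the "no new data for 10ms" heuristic concrete, here is a minimal sketch of a driver loop's decision step. `Event` and `next_event` are hypothetical names, not tower-http or async-compression API; it assumes only tokio, futures, and bytes:

```rust
use std::time::Duration;

use bytes::Bytes;
use futures::{Stream, StreamExt};
use tokio::time::timeout;

/// What the compression loop should do next (hypothetical helper type).
enum Event {
    Data(Bytes), // feed this chunk to the encoder
    Flush,       // input went quiet: emit whatever the encoder buffered
    Done,        // input finished: finish the encoder and write the trailer
}

/// Wait for the next body chunk, but request a flush if the stream
/// stays quiet for `idle` (e.g. 10ms): better lightly-compressed
/// data than nothing.
async fn next_event<S>(body: &mut S, idle: Duration) -> Event
where
    S: Stream<Item = Bytes> + Unpin,
{
    match timeout(idle, body.next()).await {
        Ok(Some(chunk)) => Event::Data(chunk),
        Ok(None) => Event::Done,
        Err(_elapsed) => Event::Flush,
    }
}
```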
One option would be to let users choose when to flush the compression buffer. Two heuristics that come to mind are number of bytes and number of marker characters (e.g. "flush at a newline every 1MB").
That wouldn't really be sufficient for my use cases. In my case I am doing a slow operation and occasionally outputting a status update. That can be a very small number of bytes, and I don't really want to inject a marker just to make flushing work.
Maybe there's a more advanced version, but I think having simple defaults for common cases makes sense, and they can be implementations of the trait or hook.
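As a sketch of what such a hook could look like (hypothetical names, not an existing tower-http API), combining the byte-count and marker-character heuristics from the comments above:

```rust
use bytes::Bytes;

/// Hypothetical user-supplied hook: inspect each outgoing chunk and
/// decide whether the encoder should flush after compressing it.
trait FlushPredicate {
    fn should_flush(&mut self, chunk: &Bytes) -> bool;
}

/// One reading of "flush at a newline every 1MB": flush on a chunk
/// containing a newline, but no more often than every `limit` bytes.
struct NewlineEvery {
    limit: usize,
    seen: usize,
}

impl FlushPredicate for NewlineEvery {
    fn should_flush(&mut self, chunk: &Bytes) -> bool {
        self.seen += chunk.len();
        if self.seen >= self.limit && chunk.contains(&b'\n') {
            self.seen = 0;
            true
        } else {
            false
        }
    }
}
```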
Bug Report
Version
0.3.4
Platform
Linux 5.18.10-76051810-generic, but probably not relevant
Crates
tower-http, async-compression
Description
We noticed in apollographql/router#1572 that the `CompressionLayer` waits until the entire response body is compressed before sending it to the client. This is due to the `Encoder` behaviour in async-compression: Nullus157/async-compression#154.

How we saw that result: a `Stream` of `Bytes` that is wrapped in an axum `StreamBody`.
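For context, a minimal reproduction of that setup might look like the sketch below (axum 0.5-era `StreamBody`, matching tower-http 0.3.4; the route name and chunk contents are made up):

```rust
use std::{convert::Infallible, time::Duration};

use axum::{body::StreamBody, routing::get, Router};
use bytes::Bytes;
use futures::StreamExt;
use tower_http::compression::CompressionLayer;

async fn slow_stream() -> StreamBody<impl futures::Stream<Item = Result<Bytes, Infallible>>> {
    // Emit a small chunk every second, like intermittent status updates.
    let chunks = futures::stream::iter(0..10).then(|i| async move {
        tokio::time::sleep(Duration::from_secs(1)).await;
        Ok::<_, Infallible>(Bytes::from(format!("chunk {i}\n")))
    });
    StreamBody::new(chunks)
}

fn app() -> Router {
    Router::new()
        .route("/stream", get(slow_stream))
        // With compression negotiated, no bytes reach the client until
        // the whole stream has been produced -- the bug reported here.
        .layer(CompressionLayer::new())
}
```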
The issue comes from this part:
https://github.com/Nemo157/async-compression/blob/ada65c660bcea83dc6a0c3d6149e5fbcd039f739/src/tokio/bufread/generic/encoder.rs#L63-L74
When the underlying stream returns `Poll::Pending`, `ready!` will return it directly, so no data will be produced. I patched this in our router to produce data whenever we see a `Pending`, but that's not a proper solution. Instead, the `CompressionLayer` should be able to direct the `Encoder` to produce data depending on conditions like how much data could be produced, or how long since the last chunk was sent.
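To illustrate why flushing addresses this, here is a small standalone demo using the synchronous flate2 crate (rather than async-compression, purely to keep the example short). A flush forces the encoder to emit everything written so far as valid compressed bytes, even though the stream is not finished:

```rust
use std::io::Write;

use flate2::write::GzEncoder;
use flate2::Compression;

fn main() -> std::io::Result<()> {
    let mut enc = GzEncoder::new(Vec::new(), Compression::default());

    enc.write_all(b"status: step 1 done\n")?;
    // The deflate stream is still buffered internally, so little more
    // than the gzip header has been emitted at this point.
    println!("before flush: {} bytes emitted", enc.get_ref().len());

    // A sync flush pushes the buffered input out (at a small cost in
    // compression ratio), so a streaming client would see it promptly.
    enc.flush()?;
    println!("after flush:  {} bytes emitted", enc.get_ref().len());

    // Finishing writes the final block and the gzip trailer.
    let gz = enc.finish()?;
    println!("finished:     {} bytes total", gz.len());
    Ok(())
}
```

This is essentially what an idle-timeout or explicit-flush hook would trigger inside the encoder, instead of waiting for end-of-stream.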