perf(ui): precompress javascript resources #4236
jvoisin wants to merge 1 commit into miniflux:main from
Conversation
There is no need to recompress them on the fly for every request. Moreover, doing the compression once at startup allows miniflux to use the very best compression level, instead of the medium/default one.
fguillot
left a comment
Looks like deflate support has been dropped in this PR. I don’t have stats on its usage, but I’m assuming it’s fine since brotli and gzip are the most widely used, and zstd is becoming more popular.
Is having both brotli and gzip worth this complexity? Miniflux is quite a lightweight app, so having no compression wouldn't make it slow to load. In terms of compression, I doubt that brotli vs gzip will make any significant difference for the amount of data miniflux serves. Having just gzip should be enough for a download boost. Given that resources are cached by the browser anyway, this seems way too complicated for the gain.
IMO compressing files at build time would be even better, as most (if not all) resources do not change between releases.
There is no easy way to precompress files at build time in Go, unfortunately. Compression is worth it on mobile: miniflux is serving 22 kB of javascript, 21 kB of css and 12 kB of svg, for a total of 55 kB. Compressed at the current level, it's a bit more than 10 kB. Having support for gzip already adds a bit of complexity; piling deflate and brotli on top of it doesn't add much more, does it?
There's … . Then, on the Go side there are options, but I'd go with a separate folder inside the already existing …
Is it for brotli or for gzip? I'd bet the difference is so small it's yak shaving at this point.
But now that I'm thinking about it, given that the current pipeline relies on …, I'd suggest you have … That way you could've …
// asset based on the Accept-Encoding request header value. It returns
// the bytes to write and the Content-Encoding value ("br", "gzip", or
// "" for identity).
func (a asset) Negotiate(acceptEncoding string) (body []byte, encoding string) {
That'd lead to invalid results unfortunately.
Accept-Encoding has very specific notation of content.
- Yours will accept `gobrrrrr`
- As well as `go br`
- And `pew-gzip-pew`
- And so on
Also, the quality value is omitted, so for `gzip, br;q=0` you'd still return br, although that explicitly means the client requested no brotli.
I'd not chase 100% correctness tbh. Most (if not all) browsers would send a plain list of encodings. Compared to Accept-Language (which actually uses q-values), iteration would be dead simple:
for e := range strings.SplitSeq(acceptEncoding, ",") {
	e = strings.TrimSpace(e)
	if i := strings.IndexByte(e, ';'); i > -1 {
		e = e[:i] // just in case browsers become smart. not exactly right but whatever
	}
	switch e {
	case "br": ...
	case "gzip": ...
	case "identity": // return raw
	}
}
// fallback to raw

// Best compression levels are used because this only runs once at
// startup; the resulting byte slices are served directly on every
// request, avoiding any per-request compression work.
func precompress(data []byte) (brotliData, gzipData []byte) {
With an FS per encoding, I'd maybe run that in a separate goroutine to avoid delaying startup, and at the very end just swap the whole FS with one that contains the compressed data.
This should fix #4234