Support snappy compression in query frontend cache #3207
Changes from 2 commits
@@ -70,6 +70,9 @@ func registerQueryFrontend(app *extkingpin.App) {
 
 	cfg.CachePathOrContent = *extflag.RegisterPathOrContent(cmd, "query-range.response-cache-config", "YAML file that contains response cache configuration.", false)
 
+	cmd.Flag("query-range.compression", "Use compression in results cache. Supported values are: 'snappy' and '' (disable compression).").
Reviewer: Can we elaborate when it is used? On the response, or when put in Memcached?

Reviewer: Also, maybe …

Author: Yes, it is used before storing the response to Memcached. It doesn't make sense to use it when only fifo-cache is enabled. I should document that. I am okay with either.

Reviewer: But it will work for other non-FIFO backends as well, right? 🤔

Author: Yes, it compresses the response and then sends it to the results cache backend. For fifo-cache it is not needed, because the snappy-compressed data and the fifo cache are both in memory.

Reviewer: Technically it could be needed -> it might reduce memory consumed as well (:

Author: Yeah, you are right. Sorry, my mistake. It is a tradeoff between CPU on one side and memory & network traffic on the other.

Reviewer: No need to be sorry, I think it's extremely useful. The only question is: do we need this for query range only, or some global cache option (for future labels and instant queries)? I think we need it for all, so maybe something generic:

Suggested change

Author: I think the option can be global. But considering the other caching configs, I am not sure. This is a separate problem and maybe we can open a separate issue to talk about it. The main question is: can we use different cache backends and different cache configs for each endpoint we want to cache? @bwplotka @kakkoyun For example, it is not ideal to reuse the same cache config, because response sizes differ between range queries and instant queries, so they might need different limits and different max_items configs.

Reviewer: Totally agree. To me, some things will be the same, some things different (:

(A sketch of the compress-before-store path discussed here follows the hunk below.)
+		Default("").StringVar(&cfg.CacheCompression)
+
 	cmd.Flag("query-frontend.downstream-url", "URL of downstream Prometheus Query compatible API.").
 		Default("http://localhost:9090").StringVar(&cfg.CortexFrontendConfig.DownstreamURL)
 
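The thread above boils down to this: when a compression value is set, the serialized response is compressed before it is written to the results cache backend, trading CPU for smaller cache entries and less network traffic. Below is a minimal, self-contained sketch of that idea, not the PR's actual code path (the PR simply passes the flag value through to Cortex's `queryrange.ResultsCacheConfig`); the `byteCache` interface and `mapCache` backend are hypothetical stand-ins.

```go
package main

import (
	"fmt"

	"github.com/golang/snappy"
)

// byteCache is a hypothetical stand-in for any results cache backend
// (memcached, fifo-cache, ...).
type byteCache interface {
	Store(key string, value []byte)
	Fetch(key string) ([]byte, bool)
}

// snappyCache wraps a backend and compresses values transparently,
// mirroring the compress-before-store behaviour discussed in the review.
type snappyCache struct {
	next byteCache
}

func (c *snappyCache) Store(key string, value []byte) {
	// Trade CPU for smaller payloads in cache memory and on the network.
	c.next.Store(key, snappy.Encode(nil, value))
}

func (c *snappyCache) Fetch(key string) ([]byte, bool) {
	compressed, ok := c.next.Fetch(key)
	if !ok {
		return nil, false
	}
	decoded, err := snappy.Decode(nil, compressed)
	if err != nil {
		// Treat a corrupt entry as a cache miss.
		return nil, false
	}
	return decoded, true
}

// mapCache is a trivial in-memory backend used only for this example.
type mapCache map[string][]byte

func (m mapCache) Store(key string, value []byte)  { m[key] = value }
func (m mapCache) Fetch(key string) ([]byte, bool) { v, ok := m[key]; return v, ok }

func main() {
	c := &snappyCache{next: mapCache{}}
	c.Store("range-query-response", []byte(`{"status":"success","data":"..."}`))
	if v, ok := c.Fetch("range-query-response"); ok {
		fmt.Println(string(v))
	}
}
```

As noted in the thread, wrapping an in-memory fifo-cache this way is optional: it can still shrink resident memory, at the cost of extra CPU on every read and write.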
@@ -107,14 +110,17 @@ func runQueryFrontend(
 			level.Warn(logger).Log("msg", "memcached cache valid time set to 0, so using a default of 24 hours expiration time")
 			cfg.CortexResultsCacheConfig.CacheConfig.Memcache.Expiration = 24 * time.Hour
 		}
-		cfg.CortexResultsCacheConfig = cacheConfig
+		cfg.CortexResultsCacheConfig = &queryrange.ResultsCacheConfig{
+			Compression: cfg.CacheCompression,
+			CacheConfig: *cacheConfig,
+		}
 	}
 
 	if err := cfg.Validate(); err != nil {
 		return errors.Wrap(err, "error validating the config")
 	}
 
-	fe, err := cortexfrontend.New(*cfg.CortexFrontendConfig, logger, reg)
+	fe, err := cortexfrontend.New(*cfg.CortexFrontendConfig, nil, logger, reg)
 	if err != nil {
 		return errors.Wrap(err, "setup query frontend")
 	}
Author: Thanks for the catch! Please take a look again.
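For context, a hypothetical invocation that ties the new flag to the existing ones (flag names are taken from this diff and may have changed before merge; `--query-range.response-cache-config-file` is assumed here as the file-based companion flag that `extflag.RegisterPathOrContent` conventionally registers):

```sh
thanos query-frontend \
  --query-frontend.downstream-url="http://localhost:9090" \
  --query-range.response-cache-config-file="response-cache.yaml" \
  --query-range.compression="snappy"
```

With the default empty value (`--query-range.compression=""`), responses are stored in the results cache uncompressed.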