[vcl] don't set TTL to 0 #28927
Conversation
Because of request coalescing, setting the TTL to 0s actually causes backend request queueing, meaning that for one specific object, Varnish will only allow one request at a time. For Varnish 4, set the TTL to 2 minutes, leveraging Hit-for-Pass: Varnish will transform any hit into a pass for the next 2 minutes, preventing queueing. For Varnish 5 and 6, the default mechanism is Hit-for-Miss, which works the same as HfP with the added bonus that if the response changes and we decide to cache it, this will override the HfM TTL. So we can set the TTL to one day.
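The description above can be sketched in VCL. This is a sketch, not the exact PR diff: the 120s value for Varnish 4 and the one-day value for Varnish 5/6 are the ones suggested in the description, and the status check mirrors the existing `vcl_backend_response` logic:

```vcl
sub vcl_backend_response {
    # Cache only 200s and 404s; everything else gets a positive TTL
    # plus the uncacheable flag, so Varnish remembers the object and
    # disables request coalescing for it.
    if (beresp.status != 200 && beresp.status != 404) {
        # Varnish 4 (Hit-for-Pass): keep the TTL short, because the
        # object stays a pass for the whole TTL:
        #   set beresp.ttl = 120s;

        # Varnish 5/6 (Hit-for-Miss): a long TTL is fine, because a
        # later cacheable response overrides the HfM marker.
        set beresp.ttl = 1d;
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```

Adapt the TTL line to your Varnish version; the key point is that the TTL must be positive whenever `uncacheable` is set.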
Hi @gquintard. Thank you for your contribution
❗ Automated tests can be triggered manually with an appropriate comment: `@magento run all tests`
You can find more information about the builds here. ℹ️ Please run only the needed test builds instead of all when developing. Please run all test builds before sending your PR for review. For more details, please review the Magento Contributor Guide documentation.
```diff
@@ -166,7 +166,7 @@ sub vcl_backend_response {

     # cache only successfully responses and 404s
     if (beresp.status != 200 && beresp.status != 404) {
-        set beresp.ttl = 0s;
+        set beresp.ttl = 120s;
```
What is the purpose of caching 500 / 503 / 403?
It doesn't cache them, because of the following `set beresp.uncacheable = true;`. Instead, Varnish will remember that these objects are uncacheable, and will disable request collapsing for them.
If you keep the TTL at 0, you get something like this:
- 20 users request the same object
- Varnish only goes to the backend once (request coalescing)
- Varnish realizes it cannot reuse the object, so it only serves it to one client
- we now have 19 users requesting the same object
- Varnish only goes to the backend once
- Varnish realizes ...

Rinse, repeat. Ultimately, the last request needs to wait 20 round-trips to the backend before being delivered.
If you set the TTL to some positive value AND the uncacheable bit to true:
- 20 users request the same object
- Varnish only goes to the backend once (request coalescing)
- Varnish realizes it cannot reuse the object, so it only serves it to one client, but remembers it as uncacheable
- the 19 remaining requests are woken up, and can all go to the backend in parallel
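The difference between the two scenarios comes down to one line in `vcl_backend_response`. A minimal sketch of the fixed pattern (the 120s value matches this PR's diff; the surrounding condition is assumed from the existing file):

```vcl
sub vcl_backend_response {
    if (beresp.status != 200 && beresp.status != 404) {
        # A positive TTL together with uncacheable = true creates a
        # Hit-for-Miss (or Hit-for-Pass on Varnish 4) object: waiting
        # requests are released and fetch from the backend in parallel,
        # instead of queueing one at a time behind a ttl = 0s object.
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```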
It's one of the trickiest Varnish gotchas, so please let me know if I wasn't clear.
@gquintard Your explanation sounds fine to me. ✔️
✔️ After discussing my concerns with the author.
Thank you for your expertise!
@magento run Functional Tests B2B
Hi @lbajsarowicz, thank you for the review.
Hi @lbajsarowicz. Could you put appropriate labels for test coverage on #28927, #28928, and #28944?
Automated Tests are not applicable, as the change is related to infrastructure configuration.
Dev experience is required for testing this PR. Please note that manual testing has not been performed.
This basically means never, because there will always be higher-priority PRs.
See the extra arguments in #33604 (comment) around this change. Maybe the priority should be increased here?
Should be marked as top priority; it solves a possible performance issue.
@gquintard I'm no longer a Magento Maintainer, but I'm really sorry you've been waiting for more than a year to get any feedback. As this PR became... popular (https://twitter.com/IvanChepurnyi/status/1420298850833682432), I'm pretty sure Magento Community Engineering has extra motivation to review and merge it. For those who encounter the issue - there's always a way to generate a Composer Patch: https://patch-diff.githubusercontent.com/raw/magento/magento2/pull/28927.diff and apply that to your project directly without waiting years.
If tests are required to prove the impact of this PR, here are some.
The test servers contain an artificial delay, which will help us prove our point.
In a production environment, where a lot more concurrency takes place, the performance impact will be more significant than in these isolated test cases.
Changing priority since the PR brings a performance improvement for installations using Varnish.
Hi @gquintard, thank you for your contribution!
* Updated Varnish vcl files
* Included X-Magento-Cache-Debug header in all deployment modes
* Added Web-API test
Unfortunately, it got added again in other places in 026e5b2 :(