Update varnish6.vcl #33604
Conversation
This is Thijs from Varnish Software. The `set beresp.ttl = 0s` should NEVER EVER be used. By setting the TTL to zero seconds, the object will not be stored in the *Hit-For-Miss* cache. When the next request for this object is received, Varnish will assume the content is cacheable and will put the request on the waiting list. However, those requests can never be satisfied in parallel. This means every request for a resource that has a status code other than `200` or `404` will be processed serially. If one of these requests has slow response times, it will slow the entire chain of requests down.

I understand that you want to cater for one-off HTTP 500 requests, because you're afraid they'll end up in the *Hit-For-Pass* cache for a long time. But this behavior has changed in Varnish 5: as of Varnish 5, *Hit-For-Pass* has been converted into *Hit-For-Miss*. This means that the object will be uncacheable until the *Hit-For-Miss* TTL expires, or until the next response is deemed cacheable.

My advice is to use the standard TTL as illustrated below:

```
if (beresp.status != 200 && beresp.status != 404) {
    set beresp.ttl = 120s;
    set beresp.uncacheable = true;
    return (deliver);
}
```

> Please also change this in `varnish5.vcl` and set the TTL to a lower value in `varnish4.vcl`.
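For context, here is a minimal sketch of the kind of `vcl_backend_response` block being argued against, assuming Magento's `varnish6.vcl` guards it with the same status check as the suggested replacement:

```
sub vcl_backend_response {
    if (beresp.status != 200 && beresp.status != 404) {
        # Anti-pattern: a zero-second TTL means no Hit-For-Miss object is
        # stored, so later requests for this resource queue on the waiting
        # list and are processed one by one instead of in parallel.
        set beresp.ttl = 0s;
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```

With the 120s TTL suggested above, a Hit-For-Miss marker is kept around, so subsequent requests go straight to the backend in parallel instead of serializing on the waiting list.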
Hi @ThijsFeryn. Thank you for your contribution.
❗ Automated tests can be triggered manually with an appropriate comment:
You can find more information about the builds here.

ℹ️ Please run only needed test builds instead of all when developing. Please run all test builds before sending your PR for review. For more details, please review the Magento Contributor Guide documentation.

🕙 You can find the schedule on the Magento Community Calendar page.
📞 The triage of Pull Requests happens in the queue order. If you want to speed up the delivery of your contribution, please join the Community Contributions Triage session to discuss the appropriate ticket.
🎥 You can find the recording of the previous Community Contributions Triage on the Magento Youtube Channel.
✏️ Feel free to post questions/proposals/feedback related to the Community Contributions Triage process to the corresponding Slack Channel.
Hi @ThijsFeryn! This sounds like a duplicate of #28927, but maybe you can convince people over here that this probably should be seen as a higher priority than P4? 🙂 (PS: Really loved your "Oe Magento doen affeceren" talk at PHPWVL some years ago 😉)
@hostep I looked at the full VCL file suggested by Magento, and based on what I've seen I could do a whole other presentation on how to improve Magento's standard VCL file. Reach out to me on Twitter if you're interested in better VCL files for Magento. And as far as #28927 is concerned, I noticed @gquintard was on that. He's a colleague of mine.
This seems to be a duplicate of the PR that @hostep referenced.
Hi @ThijsFeryn, thank you for your contribution!
Hi @ThijsFeryn, @sdzhepa @sivaschenko @gabrieldagama, I'm pretty sure the folks from Varnish Software know Varnish way better than most of the Magento community. Can we increase the priority of PRs from @gquintard (also from Varnish Software), test them, and deliver them? PS: last time I missed that @gquintard is from @varnish.