Summary
The first RTT for requests that can only happen one after another is not taken into account.
Hi,
Imagine this page load: the HTML page has a single module script, which renders the LCP image. But that script has a dependency on script-2 (in the form of an ESM import, `import "./script-2.js"`). script-2 imports script-3, script-3 imports script-4, and so on, until script-9 imports script-10. Only once all dependencies have been imported can the LCP image be rendered:
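A minimal sketch of such a setup (file names like `script-1.js` and `hero.jpg` are placeholders here; the actual test page may differ):

```js
// index.html contains a single module script:
//   <script type="module" src="./script-1.js"></script>

// script-1.js — each script statically imports the next, so the browser
// only discovers script-N+1 after script-N has finished downloading.
import "./script-2.js"; // script-2.js imports script-3.js, … and so on up to script-10.js

// Once the whole import chain has loaded, render the LCP image.
const img = document.createElement("img");
img.src = "./hero.jpg"; // hypothetical image name
document.body.appendChild(img);
```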
For the LCP timing I would expect it to take the browser 4-5 round trips to establish the TCP connection and fetch the initial HTML, 1 RTT per script file (the browser doesn't know it needs e.g. script-4 until it has loaded script-3, so it can't download them in parallel), and finally one last RTT for the image; plus, each time, the server processing and the actual content download, which for simplicity we can assume to be zero. That adds up to at least 15 round trips, i.e. a minimum LCP (for RTT = 150 ms) of 2,250 ms.
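For reference, the back-of-the-envelope arithmetic (the per-step round-trip counts are my assumptions, not measured values):

```js
// Rough lower bound for LCP under the scenario above (assumed numbers).
const rtt = 150;              // ms, simulated round-trip time
const connectionAndHtml = 4;  // DNS + TCP + TLS + initial HTML response
const scriptChain = 10;       // script-1 … script-10, discovered and fetched serially
const lcpImage = 1;           // final request for the LCP image
const minLcp = (connectionAndHtml + scriptChain + lcpImage) * rtt;
console.log(minLcp);          // 2250 ms
```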
But Lighthouse reports an LCP of 0.8 s for this test page (PSI result, LH 11.5.0, Chrome 122). A WPT test, for comparison, reports the LCP at around 3 s.
I think the issue is that Lantern's network throttling simulation assumes resources sharing the same connection are always requested in parallel, so they don't add any additional RTT to their TTFB (`if (this._warmed && this._h2) timeToFirstByte = 0;`, tcp-connection.js#L152). This condition might need to be extended by checking that the node's `networkRequestTime` or `rendererStartTime` was not after the `networkEndTime` of the previous node on the same TCP connection in the original run (i.e. it was requested with or during another request).
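A rough sketch of what I have in mind; the names `record`, `previous`, and `overlapped` are placeholders for illustration, not the simulator's actual internals:

```js
// Only treat the request's TTFB as "free" (no extra RTT) if it actually
// overlapped the previous request on this warm H2 connection in the
// observed run, i.e. the server could already have known about it.
const overlapped =
  previous != null &&
  Math.min(record.networkRequestTime, record.rendererStartTime) <=
    previous.networkEndTime;

if (this._warmed && this._h2 && overlapped) {
  timeToFirstByte = 0;
}
```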
The same applies to the extra capacity of the response to one request being filled up with the content of the next request (`extraBytesDownloaded = ... totalBytesDownloaded - bytesToDownload`, tcp-connection.js#L178), which is only realistic when the server is already aware of the next request, and that isn't the case here.
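Again only a sketch under the same assumptions: the leftover-capacity credit could be gated on the same overlap check as above.

```js
// Only credit leftover download capacity toward the next request if the
// two requests actually overlapped in the observed run
// (overlapped / totalBytesDownloaded / bytesToDownload as above).
const extraBytesDownloaded = overlapped
  ? Math.max(totalBytesDownloaded - bytesToDownload, 0)
  : 0;
```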
This scenario is irrelevant for most websites out there, but no-build frontends relying on native ES modules have recently been gaining popularity, partly on the strength of their good Lighthouse scores (see DHH's post, for example), which makes it important that the LH score for such setups is accurate.
If any of this makes sense, let me know and I'll try to open a pull request.
Cheers
Mehran