performance.measureMemory API #281
(I filed issues 4-6 starting at WICG/performance-measure-memory#4.)
Also ccing @amccreight @nnethercote @smaug----. I'm curious what the intention of:
means. From the discussions I'm looking at (especially the discussion of buckets), I think it's clear that this can't be implemented using per-process memory usage data; it requires splitting up the memory within processes. Given that... what does "fast" mean? Does implementing this require that all memory allocation in browsers be done in bucket-specific allocators, for whatever the buckets are? Or does "fast" still allow time for crawling object graphs?
Providing only the total memory usage, without breaking it down into buckets, is a valid implementation. There is a trade-off between the granularity of the breakdown and the performance overhead; the "Performance Considerations" and "Implementation Notes" sections list some implementation options. By "fast" I meant that the API allows a fast implementation. It is up to the implementers to decide whether to go with a fast but coarse-grained option or a slow but fine-grained one.
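To make that trade-off concrete, here is a hedged sketch of the two ends of the spectrum. The field names loosely follow the explainer's examples, but the exact result shape is UA-specific; the URLs and byte counts below are invented for illustration.

```javascript
// Illustrative only: the actual result shape is UA-specific and may differ.

// A coarse-grained implementation may report just the total:
const coarseResult = {
  bytes: 2300000,
  breakdown: [], // no per-bucket detail
};

// A fine-grained implementation may break the total into buckets:
const fineResult = {
  bytes: 2300000,
  breakdown: [
    { bytes: 2000000, attribution: ["https://example.com/"], types: ["JS"] },
    { bytes: 300000, attribution: ["https://example.com/iframe"], types: ["JS", "DOM"] },
  ],
};

// In this sketch the bucket sizes sum to the total, so both results
// describe the same page at different granularities.
const sum = fineResult.breakdown.reduce((acc, b) => acc + b.bytes, 0);
console.log(sum === fineResult.bytes); // prints true
```

A page consuming the result would need to handle an empty breakdown gracefully, since either shape is a conforming answer.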
In general, variability in what an API actually does is bad news both in terms of web compat (sites expect the behavior to match the browser that the developer develops in) and in terms of fingerprinting (although it's generally assumed that hiding engine identity is futile anyway). Why wouldn't the web compat concern be a problem here if different implementations break the results down differently?
Web compat is a valid concern here. We need to weigh the risks against the value that the API provides. Actionable memory measurement will help developers prevent memory regressions and reduce the memory usage of their websites. This is good for end users and browser vendors (see e.g. this comment from the previous discussion). IMO the API is in the sweet spot of the trade-off space. This is how the API minimizes the web compat risks:
Perhaps the spec should say that implementations MUST do that. It is quite unclear from the proposal what should be counted in "bytes". And it would surely be highly UA-dependent, so the API should say that in its name.
So I had a somewhat closer look at the explainer this time. A few thoughts:
Thanks @dbaron! 1 and 3: The API is enabled only for cross-origin isolated pages, which means that all loaded cross-origin iframes and resources have explicitly allowed the main origin to access them via CORP/CORS. (The COEP policy enforces that.) Thus a malicious web page cannot use the API to leak information from a non-cooperating cross-origin resource. For cooperating resources the main origin can obtain only the size information. All URLs reported by the API are already known to the main origin. A similar security argument is used for enabling SharedArrayBuffers. 2: All entries in
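Since availability is gated on cross-origin isolation, a page would feature-detect before calling the API. The sketch below uses a shim object standing in for the browser globals so it is self-contained; in a real page one would check the actual `crossOriginIsolated` flag on `globalThis`, and the 1000000-byte result is a made-up placeholder.

```javascript
// Shim standing in for browser globals; in a real page use the actual
// `crossOriginIsolated` flag and `performance.measureMemory`.
const pageGlobals = {
  crossOriginIsolated: true, // true only when COOP+COEP are both set
  performance: {
    measureMemory: async () => ({ bytes: 1000000, breakdown: [] }),
  },
};

// Feature-detect: the API is only exposed to cross-origin isolated pages.
async function measureIfAvailable(g) {
  if (!g.crossOriginIsolated || typeof g.performance.measureMemory !== "function") {
    return null; // not cross-origin isolated, or API unsupported
  }
  return g.performance.measureMemory();
}

measureIfAvailable(pageGlobals).then((result) => {
  console.log(result ? result.bytes : "unavailable"); // prints 1000000
});
```

The `null` fallback matters in practice: the same script served without COOP+COEP headers would see the API missing rather than throwing.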
Sorry for the delay cycling back to this. I guess I'm having trouble reconciling in my head the following three claims, all of which have been made here:
In particular, I can think of three ways one could reasonably implement this:
I don't see how any of these three designs satisfy all of the above claims. (I'm excluding redesign of the engine to do all memory allocation in appropriately-scoped arenas from being "reasonable", since it's likely a very large amount of work with potentially significant side-effects on both speed and memory usage.) I'm also still quite concerned about the lack of mitigations to reduce the risk of web compatibility problems -- mitigations that seem likely to be reasonable. (WICG/performance-measure-memory#10 and WICG/performance-measure-memory#11 were opened above for some of these.)
@dbaron thanks for the comment! I realize that I mixed the requirements for the API spec and the API implementation in the summary; sorry for the confusion. The claim about process-model independence was intended for the API spec: the API is defined using standard concepts without any assumptions about the process model. You're right that the implementation necessarily depends on the process model. A fast implementation is possible if the browser doesn't put multiple web pages into the same process. Luckily, the browser has to isolate web pages that set COOP+COEP for security reasons anyway. Since the API is only available to cross-origin isolated web pages, the precondition for the fast implementation will likely be fulfilled. I say "likely" because there are some caveats. For example, multiple instances of the same web page may share the same process. In such cases, the browser can either sacrifice the precision of the result by estimating the memory usage of a single instance or fall back to a slow traversal (or use a hybrid of the two options). I am currently working on a spec draft and plan to incorporate WICG/performance-measure-memory#10. I am not 100% sure about WICG/performance-measure-memory#11. It would make the API more verbose and remove the emphasis from
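The estimation fallback mentioned for shared processes can be illustrated with a deliberately simplified sketch. The function name and the even-split heuristic are invented for illustration; a real browser would have better per-instance signals than dividing evenly.

```javascript
// Hypothetical fallback: when N instances of the same site share one process,
// a browser could approximate each instance's usage by splitting the process
// total evenly, trading precision for speed over a slow heap traversal.
function estimatePerInstanceBytes(processBytes, instanceCount) {
  if (!Number.isInteger(instanceCount) || instanceCount <= 0) {
    throw new RangeError("instanceCount must be a positive integer");
  }
  return Math.round(processBytes / instanceCount);
}

console.log(estimatePerInstanceBytes(3000000, 3)); // prints 1000000
```

The hybrid option mentioned above would use such a cheap estimate most of the time and only occasionally pay for an exact traversal to correct drift.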
The spec draft is available at https://wicg.github.io/performance-measure-memory/
I hope this allows the review to move forward. Please let me know if there is anything blocking it.
I commented in WICG/performance-measure-memory#11 and unless I'm missing something, I think it is pretty critical.
WICG/performance-measure-memory#11 was addressed a while ago. Is there anything else blocking this review?
Request for Mozilla Position on an Emerging Web Specification
Other information
This proposal generalizes the previous performance-memory proposal, which was discussed in Issue #85 and abandoned because it could leak information about cross-origin resources.
What is different in the new API?
The previous discussion recognized the need for a memory measurement API even if it is fundamentally platform dependent. The main concerns were around implementability and process model assumptions. Here is a very brief summary (sorry if I missed some points):
The new API incorporates these suggestions except for the last one. The result breakdown provides more information and thus makes it easier for developers to see that the result is UA-specific.