Use a map to back meta's cache. #13660
Conversation
Looks good to me. For those following along at home, the motivation here is that it turns out to be bad for performance to use a lot of plain old JavaScript objects as if they were
@rwjblue I'm still making my way back from the face-to-face; it is going to be a bit before I can benchmark this.
@krisselden - No giant rush from my end. Please let me know if I can do anything to help...
@chadhietala - I believe I have incorporated all of your suggestions from #13659.
☔ The latest upstream changes (presumably #13658) made this pull request unmergeable. Please resolve the merge conflicts.
@krisselden - I backported this into 2.6 for easier performance testing. The built assets are here, and you can install by changing
@rwjblue are you guys going to publish the performance results? I would find that super interesting.
Also interested in performance metrics (across devices and in some non-trivial apps). This stuff is always tricky, but I do think this path is good to explore, and I hope it helps out.
We may not see immediate performance benefits if the inline cache budget is still being blown by other parts of the codebase. @krisselden, is there a good way to measure progress on a more inline-cache-specific metric? It would be awesome to prevent regressions in CI, so we can keep driving the usage monotonically downward. |
We need to find some criteria for merging speculative fixes (ones that require more steps to fully realize). I think your idea of checking whether we are actually reducing the pressure, coupled with ensuring minimal regressions, will do the trick.
I agree, we should definitely confirm that there are fewer IC misses on this branch than on master, and also confirm that this isn't slower than the current implementation. However, I also think that this interface (
This is a first step to making the backing store of `Meta`'s underlying data structures swappable. The first implementation uses a simple array of key/value pairs, but since we are using the `set`, `get`, `has`, `delete` interface we can change the backing data store at will (or even "upgrade" at runtime). The underlying implementation of the "lodash-stack" is mostly copied from lodash itself (with inline attribution), but in the longer term we should simply use this from lodash itself (and embed these modules from lodash-es in our build process).
* Use template strings instead of manual concat
* Swap out module scope `var`s for `const`.
The following storage strategies exist and are used from the `Stack`
implementation:
* `ListCache`
* `Hash`
* `MapCache`
`ListCache`
===========
This is the initial storage mechanism. The underlying data format
is as follows:
```js
[
  [key, value],
  [key, value],
  [key, value],
  [key, value]
]
```
`Hash`
======
The underlying data format is as follows:
```js
{
[key]: value
}
```
The base object is an `EmptyObject` (roughly `Object.create(null)`).
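A `Hash`-style store over a null-prototype object can be sketched as below. This is a hedged sketch, not the vendored code: the `HASH_UNDEFINED` sentinel (a trick lodash uses so `has` can distinguish a stored `undefined` from an absent key) and all names are illustrative:

```javascript
// Minimal sketch of a Hash-style store: a prototype-less object, so
// property lookups never traverse a prototype chain.
const HASH_UNDEFINED = '__hash_undefined__'; // sentinel for stored `undefined`

class Hash {
  constructor() {
    this.__data__ = Object.create(null); // the "EmptyObject" base
    this.size = 0;
  }

  get(key) {
    const v = this.__data__[key];
    return v === HASH_UNDEFINED ? undefined : v;
  }

  has(key) {
    // Absent keys read as `undefined`; stored `undefined` was replaced
    // by the sentinel, so this check is unambiguous.
    return this.__data__[key] !== undefined;
  }

  set(key, value) {
    if (!this.has(key)) this.size++;
    this.__data__[key] = value === undefined ? HASH_UNDEFINED : value;
    return this;
  }

  delete(key) {
    const existed = this.has(key);
    if (existed) {
      delete this.__data__[key];
      this.size--;
    }
    return existed;
  }
}
```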
`MapCache`
==========
`MapCache` defers to different underlying storage mechanisms based on
the key being stored:
* `string` -- Utilizes a `Hash` dedicated to string keys.
* `number` / `symbol` / `boolean` -- Utilizes a dedicated `Hash` for
these types (all are stored in the same `Hash`).
* Everything else falls back to using `ListCache`.
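The dispatch above can be sketched as follows. To keep the sketch self-contained, a single `SimpleStore` backed by a native `Map` stands in for both the `Hash` and `ListCache` structures described earlier; the shape of `_storeFor` is the point, and its name is an assumption:

```javascript
// Stand-in for Hash / ListCache, used only to keep this sketch runnable.
class SimpleStore {
  constructor() { this.m = new Map(); }
  get(key) { return this.m.get(key); }
  has(key) { return this.m.has(key); }
  set(key, value) { this.m.set(key, value); return this; }
  delete(key) { return this.m.delete(key); }
}

// Sketch of MapCache's key-based dispatch: pick an internal store
// based on the type of the key being stored.
class MapCache {
  constructor() {
    this.__data__ = {
      string: new SimpleStore(), // dedicated store for string keys
      hash: new SimpleStore(),   // numbers, symbols, booleans share this one
      list: new SimpleStore(),   // everything else (objects, functions, ...)
    };
  }

  _storeFor(key) {
    const t = typeof key;
    if (t === 'string') return this.__data__.string;
    if (t === 'number' || t === 'symbol' || t === 'boolean') {
      return this.__data__.hash;
    }
    return this.__data__.list; // fallback, analogous to ListCache
  }

  get(key) { return this._storeFor(key).get(key); }
  has(key) { return this._storeFor(key).has(key); }
  set(key, value) { this._storeFor(key).set(key, value); return this; }
  delete(key) { return this._storeFor(key).delete(key); }
}
```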
---
The overall strategy is to store items in the `ListCache` until it has
200 items in it, then we translate from `ListCache` to `MapCache` (which
actually uses 3 different structures internally depending on which
object type is being used as a key).
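The promotion step can be sketched like this. A native `Map` stands in for `MapCache` to keep the example self-contained, and the 200-entry threshold mirrors lodash's `LARGE_ARRAY_SIZE`; everything else (class and field names) is illustrative:

```javascript
const LARGE_ARRAY_SIZE = 200; // lodash's promotion threshold

// Sketch of the Stack strategy: start with a ListCache-style pair array,
// then upgrade the backing store once it grows past the threshold.
class Stack {
  constructor() {
    this.pairs = []; // ListCache-style storage while small
    this.map = null; // becomes the map-backed store after promotion
  }

  set(key, value) {
    if (this.map) {
      this.map.set(key, value);
      return this;
    }
    const i = this.pairs.findIndex(([k]) => k === key);
    if (i > -1) {
      this.pairs[i][1] = value;
      return this;
    }
    this.pairs.push([key, value]);
    if (this.pairs.length > LARGE_ARRAY_SIZE) {
      // One-time upgrade: Map's constructor accepts the existing
      // iterable of [key, value] pairs directly.
      this.map = new Map(this.pairs);
      this.pairs = null;
    }
    return this;
  }

  get(key) {
    if (this.map) return this.map.get(key);
    const pair = this.pairs.find(([k]) => k === key);
    return pair && pair[1];
  }
}
```

The `set`/`get` interface is identical before and after promotion, which is exactly what lets the backing store be swapped "at will" as the commit message describes.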
All credit for this mechanism goes to lodash; they did incredible work here, and we are simply utilizing their approach. Once the performance of this within Ember is validated, we will migrate away from this locally vendored version of the file and simply use a stripped-down version of `lodash` itself from the build.
I did some benchmarking of this using travis-web. Detailed gist is here. @krisselden - Have you had a chance to test this?
Those five outlier tests are concerning. The worst case is worse, the median case is better. Consistency is what Safari optimizes for inside of JSC, and Intel on their SSDs: best-case/worst-case boundaries to prevent jank. If run again, I'd be curious to see whether it produces the same number of outliers, and why those outliers were that much slower when everything else was in a tight cluster.
☔ The latest upstream changes (presumably #13858) made this pull request unmergeable. Please resolve the merge conflicts.

Sort of by definition, we should ignore the outliers, unless the test isn't evenly distributing the work.
@stefanpenner It's 5 "outliers" in 30 runs, so 1/6th of the total number of runs are 2-3x slower than the median case.
Superseded by #16222.
/cc @krisselden @ef4