Performance gets worse fast when there are more nodes in the DOM #5

Open · tomaskikutis opened this issue Jan 16, 2016 · 4 comments

@tomaskikutis

Firstly, I want to thank you for your talk (Pocket-sized JS) and the inspiration for less code.

I wanted to check out how this POC performs when there are more nodes in the DOM. The result I came up with is that every ~1500 DOM nodes add ~100ms of lag between clicking the button and seeing the DOM updated. That's on my old laptop (1.60GHz, 4GB RAM). On my Samsung Galaxy S3 Neo it's ~300 extra DOM nodes per +100ms of lag.

You can check it out yourself: https://github.com/tomaskikutis/feather-app | http://heavy-feather.surge.sh

I agree that we should try to keep the count of DOM nodes to a minimum, but at least 2000 nodes shouldn't hurt. Keep in mind the example is just a plain hello world; there is not much going on.

What optimizations can you see?

@HenrikJoreteg (Owner)

Well, so, I agree: it's not ideal for the case where you're adding/removing massive amounts of DOM. But to be fair, neither is React :)

One potentially significant optimization may be to add a unique key attribute to the items in the list.
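
Something like this sketch, assuming the virtual-dom `h` helper this demo is built on (`item.id` and `item.label` are made-up field names):

```js
var h = require('virtual-dom/h')

function renderList (items) {
  return h('ul', items.map(function (item) {
    // a stable key (an id, not the array index) lets the differ match
    // moved nodes instead of tearing them down and re-creating them
    return h('li', { key: item.id }, item.label)
  }))
}
```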

Anyway, I've done some similar experiments and walked away with this:

  1. Creating the new virtual DOM (not diffing it) is actually one of the slowest parts of the whole process.
  2. Diffing is the second slowest; patching seemed reasonably fast. (A rough way to measure this is sketched after this list.)
  3. The cool thing is, even when there's a delay, it doesn't really bother the main thread very much, because the crunching is happening in the worker.
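
A rough sketch of how those phases can be timed separately, assuming the virtual-dom diff/patch split (`render`, `state`, `currentTree`, and `rootNode` are placeholders for the demo's own wiring):

```js
var diff = require('virtual-dom/diff')
var patch = require('virtual-dom/patch')

console.time('create')
var newTree = render(state)              // 1. build the new virtual DOM
console.timeEnd('create')

console.time('diff')
var patches = diff(currentTree, newTree) // 2. compute the differences
console.timeEnd('diff')

console.time('patch')
rootNode = patch(rootNode, patches)      // 3. apply them to the real DOM
console.timeEnd('patch')

currentTree = newTree
```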

Anyway, thanks for reporting back, it's interesting to hear.

@tomaskikutis (Author)

I was not adding/removing massive amounts of DOM at once. The numbers I mentioned were for modifying a single DOM node: incrementing or decrementing the counter while there were already x elements in the DOM.

It's nice that the web worker is taking the load and the UI stays responsive, but from a UX standpoint it's still slow when you have to wait half a second for your action to take effect, especially when all you are modifying is a single node.

I'm wondering if it's possible to use immutable objects in order to skip creating and diffing the parts of the vDOM where we know for sure the data hasn't changed.

It would be great to write code as if it were rerendering on every change, but in reality only diff the parts that actually changed.
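
For what it's worth, a sketch of the idea (illustrative names, not from this repo): an immutable update copies only the path that changed and carries every other branch over by reference, so "has this changed?" becomes a cheap `===` check.

```js
var prev = { counter: 0, bigList: ['a', 'b', 'c'] }

function increment (state) {
  // copy only the changed field; everything else is carried over by reference
  return Object.assign({}, state, { counter: state.counter + 1 })
}

var next = increment(prev)
console.log(next.bigList === prev.bigList) // true: that subtree is provably unchanged
```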

@timsim00

Some ideas here on how to speed this up.

The first suggestion was to use transferable objects instead of JSON data to pass messages. The DOM-manipulation instructions I was passing between the worker and the UI thread did not have a fixed structure, so I would have to implement a custom binary protocol to make this work.
The second suggestion was to simply use JSON.stringify when passing messages. I guess this is similar to transferable objects, except that here it is one big blob of 8-bit characters. There is also a comment about this by one of the IndexedDB authors.
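
A sketch of what the JSON.stringify variant could look like on both sides (`ops`, the message shape, and `applyOps` are made up for illustration):

```js
// worker side: pre-serialize the instructions into one string so the
// browser passes a single blob instead of cloning an object graph
var ops = [/* DOM manipulation instructions */]
self.postMessage(JSON.stringify({ type: 'patch', ops: ops }))

// UI-thread side: parse the blob and apply the instructions
worker.onmessage = function (e) {
  var msg = JSON.parse(e.data)
  if (msg.type === 'patch') applyOps(msg.ops) // applyOps is hypothetical
}
```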

@HenrikJoreteg (Owner)

I think we could get really far here with some simple memoizing of the functions that return new vdom. I've done this a bit in an experimental v2 of this approach and it seems to work quite well, but I haven't really pushed it perf-wise.
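
Roughly this kind of thing (a sketch, not the actual v2 code): if the arguments are reference-equal to last time, hand back the old subtree as-is. As far as I can tell, virtual-dom's diff walk also bails out early on reference-identical nodes, so the skipped subtree costs almost nothing.

```js
var h = require('virtual-dom/h')

// last-call memoizer: re-run a vdom-producing function only when one
// of its arguments changed (by reference)
function memoizeLast (fn) {
  var lastArgs, lastResult
  return function () {
    var args = arguments
    var same = !!lastArgs && lastArgs.length === args.length
    for (var i = 0; same && i < args.length; i++) {
      if (args[i] !== lastArgs[i]) same = false
    }
    if (same) return lastResult // unchanged inputs: reuse the old subtree
    lastArgs = args
    lastResult = fn.apply(null, args)
    return lastResult
  }
}

// usage: wrap each vdom-returning function once
var renderHeader = memoizeLast(function (title) {
  return h('header', [h('h1', title)])
})
```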
