Description
Writing a reduce function like this looks really concise and clean:
```js
arr.reduce(
  (acc, { id, value }) => ({
    ...acc,
    [id]: value,
  }),
  {}
)
```
The alternative looks a little less pretty:
```js
arr.reduce(
  (acc, { id, value }) => {
    acc[id] = value
    return acc
  },
  {}
)
```
However, the first version makes a shallow copy of the accumulator object on every iteration (copying every key collected so far), so I expected it to be a bit slower.
I was interested in how much slower this would actually be, so I made a jsperf yesterday to find out. I was pretty shocked by the results: it's not just a little bit slower, it's a HUGE difference...
Because the differences were so extreme, I initially didn't trust the results. So I also created a simple HTML page with a button that runs the same test, and it confirmed the results. I also discussed this with @amcgee, @Birkbjo and @Mohammer5, and they couldn't find anything wrong with my setup.
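For reference, this is roughly the kind of comparison the test runs (the array size and the use of console.time here are just a sketch, not the exact jsperf/HTML page setup):

```js
// Build a sample array of { id, value } objects (size is an assumption).
const arr = Array.from({ length: 10000 }, (_, i) => ({ id: `key${i}`, value: i }));

// Version 1: spreads the accumulator into a new object on every iteration.
function withSpread() {
  return arr.reduce((acc, { id, value }) => ({ ...acc, [id]: value }), {});
}

// Version 2: mutates the same accumulator object in place.
function withMutation() {
  return arr.reduce((acc, { id, value }) => {
    acc[id] = value;
    return acc;
  }, {});
}

console.time('spread');
withSpread();
console.timeEnd('spread');

console.time('mutation');
withMutation();
console.timeEnd('mutation');
```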
@amcgee also prepared a jsperf to compare object-rest-spread to parameter assignment in isolation.
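I don't have that jsperf inlined here, but the operations being compared in isolation boil down to something like this (the object shape is just an example):

```js
const base = { a: 1, b: 2, c: 3 };

// Object spread: allocates a brand new object and copies every existing key.
const viaSpread = { ...base, d: 4 };

// Object.assign behaves similarly: it copies all keys onto the target object.
const viaAssign = Object.assign({}, base, { d: 4 });

// Plain property assignment: writes a single key on the existing object,
// no copying involved.
base.d = 4;
```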
Basically, everything points to the object spread operator and Object.assign being way slower than parameter assignment. So my advice would be: don't use this pattern in iterations. Especially in a reduce function there is no need for it: if you just make sure that the initialValue of the accumulator is "a fresh object", you really don't have to worry about doing mutations.
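To illustrate what I mean by "a fresh object" (the existingLookup name below is just for illustration):

```js
// Safe: the {} literal is created for this reduce call only, so mutating
// acc inside the callback cannot affect anything outside the reduce.
const fresh = arr.reduce((acc, { id, value }) => {
  acc[id] = value;
  return acc;
}, {});

// Risky: passing in an object that already lives elsewhere means the
// reduce now mutates shared state.
const existingLookup = { otherId: 'otherValue' };
const merged = arr.reduce((acc, { id, value }) => {
  acc[id] = value;
  return acc;
}, existingLookup); // existingLookup itself has been modified
```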
Let me know if you agree.