This repository was archived by the owner on Nov 28, 2020. It is now read-only.

Conversation

@ThePrimeagen
Member

I am rather shocked at the local test results for this, but `let` has a significant performance impact.

I am seeing locally that `let` cuts the performance of Lomuto's partition roughly in half, while Hoare's remains about the same. I think that is because Hoare's does significantly less looping than Lomuto's.

@gareth-ellis @mhdawson please let me know if I have set up the JSON files correctly. I am not sure how the graphing runs.
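The benchmark source itself isn't shown in this thread, but the two partition schemes under discussion can be sketched as follows. These are illustrative implementations, not the PR's code; they show why Hoare's scheme tends to do fewer loop iterations and swaps than Lomuto's:

```javascript
// Lomuto partition: single left-to-right scan, swapping on every
// element smaller than the pivot -- more swaps on average.
function lomutoPartition(arr, lo, hi) {
  const pivot = arr[hi];
  let i = lo - 1; // a `let` binding like this is what the benchmark compares against `var`
  for (let j = lo; j < hi; j++) {
    if (arr[j] < pivot) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]];
  return i + 1;
}

// Hoare partition: two indices converging from both ends; typically
// fewer swaps and iterations than Lomuto.
function hoarePartition(arr, lo, hi) {
  const pivot = arr[Math.floor((lo + hi) / 2)];
  let i = lo - 1;
  let j = hi + 1;
  while (true) {
    do { i++; } while (arr[i] < pivot);
    do { j--; } while (arr[j] > pivot);
    if (i >= j) return j;
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
}

function quicksortLomuto(arr, lo = 0, hi = arr.length - 1) {
  if (lo < hi) {
    const p = lomutoPartition(arr, lo, hi);
    quicksortLomuto(arr, lo, p - 1);
    quicksortLomuto(arr, p + 1, hi);
  }
  return arr;
}

function quicksortHoare(arr, lo = 0, hi = arr.length - 1) {
  if (lo < hi) {
    const p = hoarePartition(arr, lo, hi);
    quicksortHoare(arr, lo, p); // note: Hoare recurses on [lo, p], not [lo, p-1]
    quicksortHoare(arr, p + 1, hi);
  }
  return arr;
}
```

Because Lomuto's inner loop body runs once per element on every partition call, a per-iteration cost (such as a slower `let` binding in older V8) hits it harder than Hoare's tighter converging scan.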

@ThePrimeagen
Member Author

This addresses #60 and nodejs/node#8637

@ThePrimeagen
Member Author

@nodejs/benchmarking thoughts? comments?

I am rather shocked at the local test for this, but let has a significant performance impact.
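As context for the `let` vs `var` comparison, a minimal timing harness might look like the sketch below. This is hypothetical, not the PR's benchmark; it uses `process.hrtime.bigint()` (Node 10.7+), and the actual numbers depend heavily on the V8 version:

```javascript
// Same loop body declared with `var` vs `let`, to illustrate the kind
// of comparison being discussed. Results vary by V8 version.
function sumWithVar(n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

function sumWithLet(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

// Time a function call in milliseconds using the monotonic clock.
function time(fn, n) {
  const start = process.hrtime.bigint();
  const result = fn(n);
  const ns = process.hrtime.bigint() - start;
  return { result, ms: Number(ns) / 1e6 };
}

// Warm up so V8 optimizes both functions before measuring.
time(sumWithVar, 1e6);
time(sumWithLet, 1e6);
console.log('var:', time(sumWithVar, 1e7).ms.toFixed(2), 'ms');
console.log('let:', time(sumWithLet, 1e7).ms.toFixed(2), 'ms');
```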
@gareth-ellis
Member

Looks like a good idea.

I had a PR open earlier in the year to address a similar change in buffer - nodejs/node#5819

At the time this was attributed to a bug in Crankshaft that would be resolved once TurboFan becomes the default optimiser. I seem to remember there being an option to enable TurboFan, but I can't find it right now.

As far as the graphing goes, @mhdawson would be the one to comment. My understanding is that if everything is there then it should be pushed into the database.

@mhdawson
Member

In addition to the charts side of things, we also have to add this to the benchmark job itself. I'll try to look through the PR soon. I am traveling to a conference next week and then another one the week after, and I have to prepare multiple presentations, so my time is a bit limited, but I will try to review as soon as I can.

@mhdawson
Member

Some initial comments:

  1. You should use the next consecutive benchmark ids and also update: https://github.com/nodejs/benchmarking/blob/master/benchmarks/BenchmarkIDs.md

  2. We should explain how these additions fit into our coverage of the key things we've decided we should cover: https://github.com/nodejs/benchmarking/blob/master/docs/case_coverage.md. If it's something different, we should discuss and figure out if we need a new category in our use cases/key runtime attributes. (Although if you agree with what I've said in point 3, maybe they don't fit there, so it's just something we should handle a bit differently.)

  3. I'm not quite sure this fits in with the same pattern as the other benchmarks we publish daily. The data is interesting in that once we see that performance reaches parity for let we may want to switch over, but I'm not sure we need that on the main benchmarking.nodejs.org page. I see that as a place people should go to check the effect of their PR and be able to see gains across node versions.

This seems more like something we'd want to track in the benchmarking team so that we can let people know when they can switch over to let, but it won't help catch regressions, right? We still may want charts, but maybe not on the front page. One option might be a sub-directory that you need to know the sublink to get at.

@ThePrimeagen
Member Author

@mhdawson we should discuss this at the next meeting. Do we have a time/date for this quarter's meeting?

@mhdawson
Member

mhdawson commented Nov 8, 2016

@michaelbpaulson sorry, I was out the last few weeks; I have set up an issue to arrange a time for the next meeting here: #69

@mhdawson
Member

@michaelbpaulson are you planning to make it to an upcoming WG meeting so that we can discuss? Otherwise we may just close this PR for now.

@mhdawson
Member

mhdawson commented Aug 1, 2017

Closing for now based on discussion in the last meeting. Will reopen if/when Michael P. jumps back into the discussion.

@mhdawson mhdawson closed this Aug 1, 2017
