Evaluation Environment Update #253
gunnarmorling announced in Announcements
-
If #182 is done right, it may support builds using Docker and therefore non-Java submissions.
-
Quick update to the update: working with @rschwietzke on setting up a new evaluation environment, using an AX161 from Hetzner, i.e. a dedicated server with an AMD EPYC™ 7502P. The aforementioned move of the evaluation process to hyperfine is also progressing nicely (thanks a lot to @hundredwatt), so we should be able to continue with evaluations soon.
-
Hey all,
I'm blown away by the massive interest in the 1BRC challenge, it's so great to see how the community is coming together to learn and inspire. The number of submissions is way beyond what I had expected, which, again, is great to see! Keep them coming. I'm planning to work through the evaluation backlog step by step.
One unforeseen challenge (sic!) arose from the realization that the evaluation environment I had chosen (a dedicated cloud environment, as described in the README) isn't quite as stable as I was assuming. Specifically, its performance increased substantially today, for reasons still unknown to me. While that would be great if I were running an actual production workload there, it throws a bit of a wrench into the works for this challenge: any new results wouldn't be comparable with the results obtained so far.
I am therefore doing what I probably should have done from the get-go and am looking to set up a dedicated box for running 1BRC, which shouldn't be subject to these kinds of changes. For those interested, the details are discussed here. At the same time, we are looking to run all contenders via hyperfine, which simplifies the process quite a bit and will allow us to re-run all the entries in one go, when and where needed. I am planning to do so once we have a dedicated evaluation environment and will also update the leaderboard accordingly after that.
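For reference, here is a minimal sketch of how a single entry might be timed with hyperfine. The script name, warmup count, run count, and output file below are illustrative assumptions, not the actual evaluation configuration:

```shell
# Sketch of a per-entry hyperfine invocation; all names here are
# illustrative assumptions, not the real evaluation setup.
ENTRY="./calculate_average_example.sh"   # hypothetical submission script

# hyperfine runs the command repeatedly and reports mean/min/max times;
# --warmup discards initial runs, --export-json keeps machine-readable
# results that could feed the leaderboard.
CMD="hyperfine --warmup 1 --runs 5 --export-json timing.json $ENTRY"
echo "$CMD"
```

Running every entry through one and the same invocation like this is what makes the results comparable and lets the whole set be re-benchmarked in one go whenever the environment changes.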
This means that evaluation will probably be a bit slower for the next few days, but on the upside, the process should be faster and more reliable afterwards. Thanks a lot for your patience and, of course, for participating in 1BRC. I'm really curious to see how far we'll get in the course of this challenge!
Best,
--Gunnar