Add benchmarks for the explicit-state model checker TLC #207
Thanks @lemmy for the suggestion! One constraint we have for a new benchmark is that it must always deterministically compute the same thing, so we should avoid randomness and side effects. Basically, two subsequent calls to run the workload should perform exactly the same work so that the runs stay comparable. If you are willing to open a PR, adding a new benchmark (or set of benchmarks) to renaissance is as simple as creating a subdirectory in the benchmarks directory.
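As a hedged illustration of the determinism constraint, here is a minimal sketch of what such a benchmark stub might look like, assuming the Renaissance `Benchmark` interface with `@Name`/`@Summary` annotations and a per-repetition `runIteration` method; the class, package, helper, and validator names are assumptions, not the suite's confirmed API:

```java
package org.renaissance.tlc;

import org.renaissance.Benchmark;
import org.renaissance.Benchmark.Name;
import org.renaissance.Benchmark.Summary;
import org.renaissance.BenchmarkContext;
import org.renaissance.BenchmarkResult;

// Hypothetical stub: a deterministic TLC workload. No randomness and no
// side effects, so every call to runIteration performs exactly the same
// work and repetitions stay comparable.
@Name("tlc-throughput")
@Summary("Model-checks a fixed TLA+ specification with TLC.")
public final class TlcThroughput implements Benchmark {

  // A fixed spec yields the same state space on every run, so the
  // expected number of distinct states is a constant.
  private static final long EXPECTED_STATES = 1_000_000L;

  @Override
  public BenchmarkResult runIteration(BenchmarkContext context) {
    long distinctStates = runTlcOnFixedSpec(); // see the runner sketch below
    // Assumed validation API: compare the observed state count against
    // the expected one so that a broken run fails loudly.
    return BenchmarkResult.Validators.simple(
        "distinct states", EXPECTED_STATES, distinctStates);
  }

  private long runTlcOnFixedSpec() {
    // Placeholder for the actual TLC invocation.
    return EXPECTED_STATES;
  }
}
```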
Thanks for the suggestion - this looks like an interesting workload.
Is there an example that shows how to add a benchmark that comes as a self-contained jar file?
All the jar dependencies we have come from Maven Central, and they are defined in the build definition. Wouldn't it be possible to have the required dependencies there and the thin benchmark layer as sources here?
That could be done, but the jar would be tailored to the benchmark. Not sure we want to "pollute" Maven Central with such a jar. By the way, the jar can be pack200ed down to a few hundred KBs.
I see. And how about the second option? If it's a matter of the work that has to be done to turn the sample application into a benchmark, we could do that part.
It is probably best to ditch the idea of a self-contained jar and instead use the vanilla jar from Maven Central and check the specs (input) into this repository. Is the benchmark layer going to fork a VM?
By default, the harness does not fork a new VM.
That would be great indeed, and it matches how the other benchmarks are built. For instance, we have several benchmarks that include their input as resource files. Alternatively, if the input is small, it can also be inlined into the benchmark definition.
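A hedged sketch of the resource-based approach: the spec files would be checked into the repository and copied out of the benchmark's classpath at setup time. The resource names and the helper itself are illustrative, not existing code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative helper: copy a TLA+ spec and its model configuration from
// the benchmark's bundled resources into a scratch directory so that TLC
// can read them from disk.
final class SpecLoader {

  static Path materialize(Path scratchDir, String... resourceNames) throws IOException {
    Files.createDirectories(scratchDir);
    for (String name : resourceNames) {
      try (InputStream in = SpecLoader.class.getResourceAsStream("/" + name)) {
        if (in == null) {
          throw new IOException("missing bundled resource: " + name);
        }
        Files.copy(in, scratchDir.resolve(name), StandardCopyOption.REPLACE_EXISTING);
      }
    }
    return scratchDir;
  }
}
```

In the benchmark's setup phase this could then be called as, e.g., `SpecLoader.materialize(scratchDir, "Grid5k.tla", "Grid5k.cfg")`.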
Two questions: …
You can use the … It would be good to keep that for dev only, though, to avoid potentially changing dependencies in the future.
We have users who prefer using JMH builds for convenience because they have tools to run such jars. However, the renaissance harness is the default way of running benchmarks, because our goal is to augment it with tools that analyze the workloads automatically.
This is not a problem if your two workloads are distinct benchmarks, because renaissance benchmarks are meant to be executed individually anyway.
In addition to what Francois said: you should be aware that the Renaissance harness will call the benchmark's run method multiple times within the same VM, so the workload has to tolerate repeated (warmup) runs.
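A sketch of the lifecycle this implies, under the same API assumptions as the stub above; the harness, not benchmark code, drives these calls, and setUpBeforeAll/tearDownAfterAll are assumed hook names:

```java
// Illustrative only: roughly what the harness does with a benchmark
// instance. Because all repetitions run in one VM, any state TLC keeps
// between runs (caches, fingerprint sets, on-disk state files) must be
// reset inside runIteration for the repetitions to stay comparable.
static void simulateHarness(Benchmark benchmark, BenchmarkContext context, int repetitions) {
  benchmark.setUpBeforeAll(context);    // once per VM
  for (int i = 0; i < repetitions; i++) {
    benchmark.runIteration(context);    // warmup runs first, then measured runs
  }
  benchmark.tearDownAfterAll(context);  // once per VM
}
```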
@axel22 If that's a strict requirement, I don't see a way to move forward. I don't have the bandwidth to change TLC to support warmup runs (I once hacked something together based on OSGi). Initially, I assumed renaissance would accept a macro-benchmark that runs for a few minutes and thus needs no warmup. TLC fits this type of benchmark very well.
@lemmy It's still possible to have single runs, but it would require setting the default number of repetitions to 1.
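If the suite supports a per-benchmark default repetition count (sketched here as a @Repetitions annotation, which is an assumption about the harness API rather than a confirmed feature), the single-run, macro-benchmark setup could look like:

```java
import org.renaissance.Benchmark;
import org.renaissance.Benchmark.Name;
import org.renaissance.Benchmark.Repetitions;
import org.renaissance.Benchmark.Summary;
import org.renaissance.BenchmarkContext;
import org.renaissance.BenchmarkResult;

// Assumed API: @Repetitions sets the default number of in-process runs.
// With a value of 1, each harness invocation performs a single,
// multi-minute TLC run and no separate warmup iterations.
@Name("tlc-grid5k")
@Summary("Macro-benchmark: one full TLC run over the Grid5k spec.")
@Repetitions(1)
public final class TlcGrid5k implements Benchmark {

  @Override
  public BenchmarkResult runIteration(BenchmarkContext context) {
    // One complete model-checking run of the checked-in spec.
    throw new UnsupportedOperationException("sketch only");
  }
}
```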
https://github.com/tlaplus/tlaplus/tree/master/general/performance/Grid5k would be the spec that easily simulates two workloads: scalability and computation.
The model checker (https://github.com/tlaplus/tlaplus) is optimized for throughput and has a single hot path. I can come up with benchmarks for a) scalability and b) throughput: one is a spec where evaluation of the next-state relation is cheap, the other has a costly next-state relation. Adjusting specs to the available hardware is just a matter of choosing a few parameters. Benchmarks can be distributed as two separate, self-contained jars.
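For context, a hedged sketch of what driving TLC in-process might look like. The handleParameters and process calls mirror TLC's tlc2.TLC entry point and should be treated as an assumption about its API, while -workers, -deadlock, and -metadir are standard TLC command-line options:

```java
import tlc2.TLC;

// Sketch of running TLC inside the benchmark VM. Workload size is
// controlled purely by the spec's constants, so a cheap and a costly
// next-state relation become two distinct, deterministic benchmarks.
public final class TlcRunner {

  static void check(String specDir, String specName, int workers) {
    TLC tlc = new TLC();
    String[] args = {
        "-deadlock",                          // Grid5k-style specs terminate
        "-workers", Integer.toString(workers), // scalability knob
        "-metadir", specDir + "/states",       // keep TLC's state files in scratch space
        specDir + "/" + specName
    };
    if (tlc.handleParameters(args)) {
      tlc.process();
    }
  }
}
```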