Add Wiki documentation for TLC options #297
I fixed the broken link and added a new link to https://github.com/tlaplus/tlaplus/blob/master/general/docs/current-tools.md#command-line-options. This current-tools.md document covers various aspects, including the […]
Thank you @lemmy - I didn't find the command-line options MD file in my searches earlier this week... much better than just the code comment!
I spent considerable time figuring out how to use all of the RAM and CPUs available on my machine, which was unexpected. TLC should automatically scale up to all available resources unless constrained through configuration; failing that, the documentation should explain how this can be achieved. Also, I had to spend some time figuring out what "Please allocate memory for the TLC process via the JVM mechanisms" really means, since I'm not familiar with JVM memory limits. To make things more confusing, the official Oracle JVM documentation (https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionX.html) suggests that Java processes should not be limited by -Xmx:
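For readers who hit the same wall: the "JVM mechanisms" are flags passed to the `java` launcher itself (not to TLC). A hedged sketch of sizing the heap from physical RAM on Linux; the 80% ratio and the `tla2tools.jar` path are my own illustrative assumptions, not anything the message or docs prescribe:

```shell
# Read total physical memory in kB (Linux-specific; uses /proc/meminfo).
MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
# Give TLC ~80% of RAM (the ratio is an arbitrary illustration).
HEAP_MB=$(( MEM_KB * 80 / 100 / 1024 ))
# -Xmx caps the JVM's maximum heap; the jar path is a placeholder.
echo "java -Xmx${HEAP_MB}m -jar tla2tools.jar MySpec.tla"
```

On other platforms you would substitute the memory query (e.g. `sysctl hw.memsize` on macOS); the `-Xmx` flag itself is the same everywhere.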
@CIPop The TLA+ Toolbox implements what you described here, except that it uses an opt-out mechanism instead of an opt-in one. One option is to configure a TLC run in the Toolbox and then reuse its generated VM and TLC parameters in VSCode, until someone implements better support for large-scale model checking in VSCode.
Are you sure the JRockit documentation applies to your local VM? I believe JRockit was discontinued a while ago and partially merged into HotSpot. However, I haven't kept up with JVM development in recent years; it's possible that the art of VM tuning has changed completely.
(I don't know how to send a PR towards the Wiki portion.)
I found the following two resources useful for better utilizing the resources available on the machine running my model:

- TLC option `-workers <cpu_count>` (defaults to 1). I think it would be interesting to add a link to all TLC options to https://github.com/tlaplus/vscode-tlaplus/wiki/Settings: https://github.com/tlaplus/tlaplus/blob/master/tlatools/org.lamport.tlatools/src/tlc2/TLC.java#L231
- JVM option `-Xmx30g` (the default is ~4 GB) to increase TLC's maximum memory usage. I think an example could be added to https://github.com/tlaplus/vscode-tlaplus/wiki/Java-Options

Also, there is a broken link in the Wiki at https://github.com/tlaplus/vscode-tlaplus/wiki/Settings: "TLC Model Checker: Statistics Sharing allows to opt-in / opt-out sharing of TLC usage statistics. More details." The "More details" link should be: https://github.com/tlaplus/tlaplus/blob/master/tlatools/org.lamport.tlatools/src/util/ExecutionStatisticsCollector.md
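Putting the two options together, a hedged end-to-end sketch (the jar path, spec name, and 8 GB heap are placeholders I chose for illustration; check current-tools.md for the authoritative option list):

```shell
# Count the CPUs available to this process (GNU coreutils nproc).
WORKERS=$(nproc)
# -Xmx goes to the JVM, -workers goes to TLC after the jar name.
echo "java -Xmx8g -jar tla2tools.jar -workers ${WORKERS} MySpec.tla"
```

This matches the split the thread describes: memory is a JVM-level setting, while worker count is a TLC-level argument, so the two must be passed on different sides of `-jar tla2tools.jar`.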