Performance Comparison Details #2
Comments
That's a fair question; I haven't tried to measure that. Based on my experience, the short articles are difficult to measure, but Rust will likely win out just because there is less file/process overhead (the Rust verifier can chew through the first 20 articles in the time it takes the vanilla verifier to do its usual progress reporting for one), and there seem to be some patterns that take a lot longer in vanilla. When 5x is the baseline, it's hard to imagine an example where that drops to 1x. Re: the parser, I'm kind of hoping that this verifier gets some traction as an open-source project within the Mizar community, so that it's not just me doing the work. It's also the least important component re: proof export compared to accom + analyzer + checker, so it will likely be left for last if I'm the one doing it.
Okay, I had some time to sit down and do some per-file measurements.

BCIIDEAL and LATSUM_1 are the only two data points below the … Ignoring SUBSET and TARSKI, which have absurd speedup numbers because … Data set: obtained by running …
I will link to this post from the README for people interested in a more detailed performance comparison.
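The per-file comparison above boils down to timing each verifier on the same article and dividing wall-clock times. A minimal sketch of that methodology is below; the binary names (`verifier`, `mizar-rs`) and arguments are assumptions for illustration, not either project's actual CLI.

```python
"""Sketch: per-article wall-clock comparison of two Mizar verifiers.
Command names and flags are placeholders; substitute the real invocations."""
import subprocess
import time


def time_cmd(cmd):
    """Return wall-clock seconds for one run of `cmd`; output is discarded."""
    start = time.perf_counter()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start


def speedup(t_vanilla, t_rust):
    """Per-article speedup: how many times faster the Rust verifier ran."""
    return t_vanilla / t_rust


# Hypothetical usage -- replace with the actual commands and article list:
# for art in ["bciideal", "latsum_1"]:
#     s = speedup(time_cmd(["verifier", art]), time_cmd(["mizar-rs", art]))
#     print(f"{art}: {s:.1f}x")
```

Note that a single run per article is noisy, especially for short articles where process startup dominates; averaging several runs per file would make the low-speedup data points more trustworthy.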
Hello,
This isn't an "issue", per se, but I was wondering if there were any files where the "vanilla verifier" from Mizar outperforms mizar-rs? Or is the Rust version better across the board?

Best,
Alex
P.S. Will you continue and implement the parser, or is that "an exercise for the reader"?