Conversation
Created comprehensive benchmark script to test different OrdinaryDiffEq.jl solver methods for DCON integration with singularities. Tests 7 solvers (Tsit5, AutoTsit5, Vern6/7/8, DP5, BS5) across 5 tolerance levels [1e-3, 1e-4, 1e-5, 1e-6, 1e-8]. Generates plots and performance data. Also includes a quick test version for validation before running the full benchmark suite. Co-authored-by: Nikolas Logan <logan-nc@users.noreply.github.com>
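The actual script is written in Julia against OrdinaryDiffEq.jl and is not reproduced here. Purely as an illustration of the measurement pattern described above (loop over solvers and tolerances, record step counts and wall times), here is a minimal analogous sketch in Python using SciPy; the solver names, the toy ODE, and the tolerance list are stand-ins, not the DCON system or the Julia solvers.

```python
# Hypothetical mini-benchmark sketching the pattern of the Julia script:
# sweep (solver, tolerance) pairs, recording step count and wall time.
import time
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Stand-in ODE with a mildly stiff coefficient; NOT the DCON equations.
    return [-y[0] / (t + 0.1)]

results = {}
for method in ["RK45", "DOP853"]:        # stand-ins for Tsit5, Vern8, etc.
    for tol in [1e-3, 1e-6]:             # subset of the tolerance scan
        t0 = time.perf_counter()
        sol = solve_ivp(rhs, (0.0, 1.0), [1.0], method=method,
                        rtol=tol, atol=tol)
        wall = time.perf_counter() - t0
        # Accepted steps = number of output points minus the initial condition.
        results[(method, tol)] = (len(sol.t) - 1, wall, sol.success)

for (method, tol), (steps, wall, ok) in sorted(results.items()):
    print(f"{method:8s} tol={tol:.0e} steps={steps:4d} wall={wall:.4f}s ok={ok}")
```

The real script would additionally tabulate where the steps cluster (e.g. near rational surfaces) and emit the comparison plots.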
…actually works out of the box and makes good plots
…ark insights from addressing issue #115
@jhalpern30 feel free to hit merge if you approve. Close #115 if you do.
@logan-nc looks good. My main question is: do we want the benchmarking script committed? While useful, I could see it going out of date or breaking, since it is a rather large script that copies a decent amount of the source code into it. At least in the short term, I think having a script like this in the repo will be helpful, especially considering there's still work left to be done in testing this stuff, but I just wanted to raise the question.
Yeah, this one is tricky. I tried to get this to work in the initial PR but ended up ditching it since it was a time sink and we just wanted to get the code to run through - this algorithm has some requirements on the format of the solution that aren't compatible with the existing code. If I remember correctly, it hangs because 1) it requires a vectorized matrix and 2) can't handle the
…ning nothing from DCON. Now just checks that we get through the main executable without erroring out
I wanted the script pushed just in case anyone wanted to try it (maybe the results are different elsewhere? My laptop was running other stuff while this was going, after all) or maybe even try other methods. You're probably right that it will quickly get out of sync, so maybe the right strategy for these things in general is to commit the benchmark, post the results, then commit a deletion of the benchmark before merging. That way folks can go back and cherry-pick the benchmark from the old commit recorded in the PR if they ever want to return to it. What do you think @jhalpern30? If you agree, go ahead and delete then merge.
Yeah, I like that. That way we'll have documentation of how the plots on the PR were generated without polluting the main repo. I'll delete and merge.
Based on discussion with Nik, this script was used to generate the plots in the PR, so we add it and then delete it so that this PR documents how the plots were made.
This branch addresses #115 by adding an ODE solver benchmark that compares the step counts and wall times of different solvers for the DIII-D-like example case.
Interestingly, all the solvers at all the tolerances scanned produce a pretty similar distribution of integration points, e.g. in the fraction of steps devoted to near-rational regions.
The number of steps changes a lot with tolerance, but Vern8 is the only solver that stands out as taking significantly fewer steps across the board:
Unfortunately, fewer steps don't necessarily translate to faster wall times.
The default Tsit5 solver seems to be competitive in speed at reasonable tolerances of ~1e-6 or above. The only one consistently faster on my laptop was BS5, which also took fewer steps. According to the ODE package docs, …

In the future, I'd like to see someone try to get AutoTsit5 or some other stiff solver working properly to see if it can do better approaching the rationals. I was running into errors with autodiff for complex numbers and don't think I fully explored all the possible options for these. I leave this for future work.