IterativeRefinementSolver performance #603
Replies: 3 comments 6 replies
If this makes the calls to the ConvexSolver after the first one faster, it should be tested. I just need to read the article again; I guess the change is very small. I have another experimental idea for this routine: I believe the Quadruple precision type could be replaced by new matrix-vector methods that achieve the same precision using Kahan summation (or a variant of it). That might lead to much less memory being allocated. This will require more coding at a later time.
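For illustration, a minimal sketch of the kind of compensated matrix-vector arithmetic that could replace the Quadruple accumulations, using Kahan summation in plain double arithmetic (the class and method names here are made up, not existing ojAlgo API):

```java
// Illustrative sketch only. Kahan summation compensates the rounding
// errors of the additions; the products are still rounded, so whether
// this matches Quadruple in practice is exactly what needs testing.
public final class CompensatedDot {

    /** Dot product of two equal-length arrays using Kahan summation. */
    public static double dot(final double[] x, final double[] y) {
        double sum = 0.0;
        double c = 0.0; // running compensation for lost low-order bits
        for (int i = 0; i < x.length; i++) {
            final double term = x[i] * y[i] - c;
            final double t = sum + term;
            c = (t - sum) - term; // the rounding error of sum + term
            sum = t;
        }
        return sum;
    }
}
```

A residual r = b - Ax is then one such compensated dot product per row, with no Quadruple instances allocated along the way.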
The key performance gain, I assume, will come from not creating a completely new sub-problem at each master iteration. It doesn't matter whether you use Quadruple or something else. Having one common scaling factor is required to avoid recalculating the Q-inverse. (As for deriving that scaling factor, my first attempt would be to look only at the primal infeasibilities. Or why not just scale by something like 1_000 every time?)
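To make the mechanics concrete, here is a self-contained toy (a 2x2 system in plain doubles, not ojAlgo code): the factorisation step is replaced by an explicit inverse computed once, and one common scaling factor is grown by a fixed 1_000 per refinement, as suggested above. In the real solver the point of the scaling is that the sub-solver sees residuals in its normal working range; the toy only demonstrates that a common factor leaves Q untouched, so the Q-inverse is reused:

```java
// Toy demonstration only: one common scaling factor means the scaled
// subproblem keeps the same Q, so Qinv is computed once and reused.
public final class ScaledRefinementSketch {

    public static void main(final String[] args) {

        final double[][] Q = { { 4.0, 1.0 }, { 1.0, 3.0 } };
        final double[] b = { 1.0, 2.0 };

        final double[][] Qinv = invert2x2(Q); // the expensive step, done ONCE

        double[] x = multiply(Qinv, b); // initial solve
        double scale = 1.0;

        for (int iter = 0; iter < 5; iter++) {
            final double[] r = residual(Q, x, b); // r = b - Q x
            scale *= 1_000.0; // fixed common growth factor, as suggested above
            final double[] d = multiply(Qinv, scaled(r, scale)); // reuse Qinv
            x = add(x, scaled(d, 1.0 / scale)); // x <- x + d / scale
        }

        System.out.println(x[0] + " " + x[1]);
    }

    private static double[][] invert2x2(final double[][] m) {
        final double det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
        return new double[][] { { m[1][1] / det, -m[0][1] / det }, { -m[1][0] / det, m[0][0] / det } };
    }

    private static double[] multiply(final double[][] m, final double[] v) {
        return new double[] { m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1] };
    }

    private static double[] residual(final double[][] m, final double[] x, final double[] b) {
        final double[] mx = multiply(m, x);
        return new double[] { b[0] - mx[0], b[1] - mx[1] };
    }

    private static double[] scaled(final double[] v, final double s) {
        return new double[] { v[0] * s, v[1] * s };
    }

    private static double[] add(final double[] u, final double[] v) {
        return new double[] { u[0] + v[0], u[1] + v[1] };
    }
}
```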
@Programmer-Magnus With your recent changes to IterativeRefinementSolver and IterativeRefinementSolverDouble, or my attempt at IterativeRefinementSolver2, are there any performance gains? I mean actually faster solve times on your real-life problems.
@Programmer-Magnus
In the IterativeRefinementSolver the primal and dual scaling factors are always treated as independent and different from each other.
Reference: "Solving quadratic programs to high precision using scaled iterative refinement".
I think whether they are treated as two different factors or as one common factor should be configurable, and one common factor should be the default. Provided you fully exploit the fact that they are common, this should be a huge performance gain.
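A sketch of what that option could look like (hypothetical names, not existing ojAlgo API). If I read the paper correctly, the Hessian of the refined subproblem is Q times the ratio of the dual and primal factors, so with one common factor it is exactly Q, which is what makes reusing the factorisation possible:

```java
// Hypothetical option, not existing ojAlgo API.
public final class RefinementOptions {

    public enum Scaling {
        /**
         * One common factor for the primal and dual residuals. The scaled
         * subproblem then has the same Q as the original, so its (inverse)
         * factorisation can be computed once and reused at every refinement.
         */
        COMMON,
        /**
         * Independent primal and dual factors, as the solver does today.
         * Each refinement may need fewer iterations, but the Q-related
         * work has to be redone whenever the ratio of the factors changes.
         */
        INDEPENDENT
    }

    private Scaling scaling = Scaling.COMMON; // common as the default

    public Scaling scaling() {
        return scaling;
    }

    public RefinementOptions scaling(final Scaling strategy) {
        this.scaling = strategy;
        return this;
    }
}
```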