Opened on Jan 17, 2020
Hi,
As suggested in math #303, I am reposting this here:
How feasible would it be to integrate quad-double precision? It has 212 bits in the significand and is faster than MPFR (a minimal sketch of the underlying technique follows the list below):
- source code, released under a BSD license.
- article/documentation
- quad double precision homepage
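For anyone unfamiliar with the approach: quad-double represents a value as an unevaluated sum of four doubles (4 × 53 ≈ 212 significand bits) and relies on error-free transformations of ordinary double arithmetic, which is why it is so much faster than a general bignum library like MPFR. The snippet below is only a sketch of the double-double building block (not the QD library's actual code); `two_sum`, `two_prod`, and `dd_add` are illustrative names.

```cpp
#include <cmath>
#include <cstdio>

// Knuth's TwoSum: s + err == a + b exactly, using only double arithmetic.
static void two_sum(double a, double b, double &s, double &err) {
    s = a + b;
    double bb = s - a;
    err = (a - (s - bb)) + (b - bb);
}

// Error-free product via fused multiply-add: p + err == a * b exactly.
static void two_prod(double a, double b, double &p, double &err) {
    p = a * b;
    err = std::fma(a, b, -p);
}

// Add two double-double numbers (hi1,lo1) + (hi2,lo2) -> (hi,lo), ~106 bits.
// QD extends the same idea to four components for ~212 bits.
static void dd_add(double hi1, double lo1, double hi2, double lo2,
                   double &hi, double &lo) {
    double s, e;
    two_sum(hi1, hi2, s, e);
    e += lo1 + lo2;
    two_sum(s, e, hi, lo);  // renormalize so |lo| <= 0.5 ulp(hi)
}

int main() {
    double hi, lo;
    dd_add(1.0, 0.0, 1e-30, 0.0, hi, lo);
    // In plain double arithmetic 1.0 + 1e-30 == 1.0; here the small term survives in lo.
    std::printf("hi = %.17g, lo = %.17g\n", hi, lo);
}
```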
I have just recently integrated Boost Multiprecision with YADE, a fairly large numerical-computation package.
Float128 isn't enough for my needs, and YADE would greatly benefit from quad-double (since it is much faster than MPFR). YADE can also serve as a really large testing ground: I have implemented CGAL numerical traits and Eigen numerical traits on top of multiprecision, and all of them are tested daily in our pipeline; see an example run here, or a more specific example for float128: test (you need to scroll up) and check.
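To illustrate where a quad-double backend would slot in, here is a minimal sketch (not YADE's actual configuration; the `USE_MPFR` macro and the 64-digit setting are just illustrative) of switching the real type between existing Boost Multiprecision backends:

```cpp
// Build, e.g.:  g++ -std=c++14 demo.cpp -lquadmath                  (float128 path)
//          or:  g++ -std=c++14 -DUSE_MPFR demo.cpp -lmpfr -lgmp     (MPFR path)
#include <iostream>
#include <limits>

#ifdef USE_MPFR
#  include <boost/multiprecision/mpfr.hpp>
// ~64 decimal digits, i.e. slightly more than the 212 bits quad-double offers.
using Real = boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<64>>;
#else
#  include <boost/multiprecision/float128.hpp>
// 113-bit significand (libquadmath); the type that is currently not enough.
using Real = boost::multiprecision::float128;
#endif

int main() {
    const Real eps = std::numeric_limits<Real>::epsilon();
    std::cout << "epsilon          = " << eps << '\n';
    std::cout << "1 + eps/2 == 1 ?   " << ((Real(1) + eps / 2) == Real(1)) << '\n';
}
```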