Description
Many of our tests use baseline files that contain serialized numbers. We've repeatedly had issues comparing these baseline files because of minor changes in how floating-point values are written to the output file (for example, changes in which .NET runtime we're running on, or Windows vs. Linux, can cause the values to differ slightly).
One cause of these issues is that we write `float` values (32-bit floating point) to the file, but parse them back as `double` (64-bit floating point).
At times, we've lowered `digitsOfPrecision` enough to tolerate these differences. However, there are cases where `digitsOfPrecision` isn't enough, specifically when large values differ by a digit in their exponential form, for example:
`3.40282347E+38` vs. `3.4028235E+38`
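
Both strings appear to be renderings of the same 32-bit value (`float.MaxValue`): older runtimes emit nine significant digits, newer ones emit the shortest string that still round-trips. A minimal sketch of the mismatch (the value choice is just for illustration):

```csharp
using System;
using System.Globalization;

class BaselineNumberDemo
{
    static void Main()
    {
        // Two serializations of the same 32-bit value (float.MaxValue).
        string oldBaseline = "3.40282347E+38";
        string newOutput   = "3.4028235E+38";

        // Parsed as 64-bit doubles, the strings are different numbers,
        // so the baseline comparison sees a mismatch.
        double d1 = double.Parse(oldBaseline, CultureInfo.InvariantCulture);
        double d2 = double.Parse(newOutput, CultureInfo.InvariantCulture);
        Console.WriteLine(d1 == d2); // False

        // Parsed as 32-bit floats, both strings round-trip to the same value.
        float f1 = float.Parse(oldBaseline, CultureInfo.InvariantCulture);
        float f2 = float.Parse(newOutput, CultureInfo.InvariantCulture);
        Console.WriteLine(f1 == f2); // True
    }
}
```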
To solve this issue, I've added a new option to parse the numbers using `float.Parse` instead of `double.Parse`.
See the solution added in #3532.
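
As a rough sketch of the idea (the enum and method names below are illustrative stand-ins, not the actual test-utility API from #3532, and the existing `digitsOfPrecision` tolerance logic is omitted for brevity), the baseline comparer picks the parser based on the option:

```csharp
using System.Globalization;

// Illustrative names only; the real option lives in the test utilities added in #3532.
public enum NumberParseOption
{
    UseDouble, // current behavior: parse serialized numbers as 64-bit doubles
    UseSingle  // new behavior: parse them as the 32-bit floats they were written as
}

public static class BaselineNumberComparer
{
    // Compares two serialized numbers from baseline files under the chosen option.
    public static bool AreEqual(string expected, string actual, NumberParseOption option)
    {
        if (option == NumberParseOption.UseSingle)
        {
            // Values that were written as floats are parsed back as floats, so
            // different-but-equivalent renderings collapse to the same value.
            return float.Parse(expected, CultureInfo.InvariantCulture)
                == float.Parse(actual, CultureInfo.InvariantCulture);
        }

        return double.Parse(expected, CultureInfo.InvariantCulture)
            == double.Parse(actual, CultureInfo.InvariantCulture);
    }
}
```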
We should go through the tests where we use a lowered `digitsOfPrecision` and see if using `float.Parse` fixes the test on all platforms. This may allow us to remove the `digitsOfPrecision` parameter altogether if all these places can be converted to `UseSingle`.