Description
Currently the `Hash` implementation just assumes that the `FloatCore` type has a mantissa/exponent width like `f64`'s. This is not necessarily true: one could have a valid custom `FloatCore` type that does not fit these constraints.
Furthermore, the hash implementation is rather slow, because it reconstructs the value from `integer_decode` (https://rust.godbolt.org/z/hTTqnaso6). That's 10-15 cycles per float for what should be a no-op; this can be more expensive than the hash function itself!
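For the standard types the round-trip is avoidable entirely: `to_bits` already yields a hashable integer with no arithmetic. A minimal std-only illustration:

```rust
fn main() {
    let x = -3.75f64;
    // `to_bits` is a bitwise reinterpretation of the float; hashing this
    // u64 directly involves no decode/re-encode work at all.
    let bits = x.to_bits();
    assert_eq!(f64::from_bits(bits), x); // lossless round-trip
}
```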
I propose we add a `StdFloat` sealed trait that is implemented only for `f32` and `f64`, and use that as the bound for the `Hash` implementation. Then the hash implementation can use `.to_bits()`.
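A sketch of the shape this could take; the names (`StdFloat`, `canonical_bits`) follow this proposal rather than any published API, and it assumes `OrderedFloat` treats `-0.0 == 0.0` and all NaNs as equal:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// The sealed supertrait prevents downstream crates from implementing
// StdFloat, so the f64-width assumptions can never be violated.
mod private {
    pub trait Sealed {}
    impl Sealed for f32 {}
    impl Sealed for f64 {}
}

pub trait StdFloat: private::Sealed + Copy {
    type Bits: Hash;
    // Raw bits with -0.0 and NaN canonicalized, so Eq/Hash parity holds.
    fn canonical_bits(self) -> Self::Bits;
}

impl StdFloat for f32 {
    type Bits = u32;
    fn canonical_bits(self) -> u32 {
        if self.is_nan() { f32::NAN.to_bits() }       // collapse all NaNs
        else if self == 0.0 { 0 }                     // -0.0 hashes as +0.0
        else { self.to_bits() }
    }
}

impl StdFloat for f64 {
    type Bits = u64;
    fn canonical_bits(self) -> u64 {
        if self.is_nan() { f64::NAN.to_bits() }
        else if self == 0.0 { 0 }
        else { self.to_bits() }
    }
}

struct OrderedFloat<T>(T);

impl<T: StdFloat> Hash for OrderedFloat<T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // One integer hash per float, no decode/re-encode round-trip.
        self.0.canonical_bits().hash(state);
    }
}

fn hash_of<T: StdFloat>(x: OrderedFloat<T>) -> u64 {
    let mut h = DefaultHasher::new();
    x.hash(&mut h);
    h.finish()
}

fn main() {
    assert_eq!(hash_of(OrderedFloat(0.0f64)), hash_of(OrderedFloat(-0.0f64)));
    assert_eq!(hash_of(OrderedFloat(f32::NAN)), hash_of(OrderedFloat(-f32::NAN)));
}
```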
Unfortunately, this is a breaking change. However, I must reiterate that the current implementation is invalid for the `FloatCore` trait bound: it's possible to write a perfectly sane type implementing `FloatCore` that would violate the `Eq`/`Hash` parity when used in `OrderedFloat`, through no fault of the original type.
An alternative solution is to feed the sign, mantissa, and exponent returned by `f.integer_decode()` into the hasher separately. However, this would make the hash implementation slower still.
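A sketch of what that alternative would look like; the `integer_decode` body below mirrors num-traits' `f64` implementation so the example is self-contained, and `hash_parts` is a hypothetical name, not crate code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Decompose an f64 into (mantissa, exponent, sign), as
// num_traits::float::FloatCore::integer_decode does.
fn integer_decode(f: f64) -> (u64, i16, i8) {
    let bits = f.to_bits();
    let sign: i8 = if bits >> 63 == 0 { 1 } else { -1 };
    let mut exponent = ((bits >> 52) & 0x7ff) as i16;
    let mantissa = if exponent == 0 {
        (bits & 0xf_ffff_ffff_ffff) << 1                 // subnormal
    } else {
        (bits & 0xf_ffff_ffff_ffff) | 0x10_0000_0000_0000 // implicit 1 bit
    };
    exponent -= 1023 + 52;
    (mantissa, exponent, sign)
}

fn hash_parts<H: Hasher>(f: f64, state: &mut H) {
    let (m, e, s) = integer_decode(f);
    // Three hasher rounds instead of one: width-agnostic and therefore
    // valid for any FloatCore type, but slower than one `to_bits` word.
    m.hash(state);
    e.hash(state);
    s.hash(state);
}

fn main() {
    let mut h1 = DefaultHasher::new();
    let mut h2 = DefaultHasher::new();
    hash_parts(1.5, &mut h1);
    hash_parts(1.5, &mut h2);
    assert_eq!(h1.finish(), h2.finish()); // equal inputs, equal hashes
}
```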