Description
I'm working on stabilizing a crate for constraints and total orderings of floating-point types. My crate integrates with num-traits and has introduced subsets of the `Float` trait: `Encoding`, `Infinite`, `Nan`, and `Real`. The `0.2.*` series of num-traits introduces a `Real` trait of its own, which I would prefer to use instead, but the blanket `impl<T> Real for T where T: Float` makes this difficult. See this issue.
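For reference, here is the shape of that blanket impl, reduced to a toy example. These single-method traits stand in for num-traits' `Float` and `Real`, which are of course much larger, but the relationship is the same:

```rust
// Toy traits standing in for num-traits' Real and Float, each reduced to a
// single method.
trait Real {
    fn recip(self) -> Self;
}

trait Float {
    fn recip(self) -> Self;
}

// The blanket impl: anything that implements Float is automatically Real,
// with every Real method forwarded to its Float counterpart.
impl<T: Float> Real for T {
    fn recip(self) -> Self {
        Float::recip(self)
    }
}

// A primitive only needs to implement Float; it gets Real for free.
impl Float for f64 {
    fn recip(self) -> Self {
        1.0 / self
    }
}
```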
I know this is a long shot, since this would likely involve breaking changes, but is there any chance this `impl` and the relationship between `Real` and `Float` could be reconsidered? Just a few thoughts:
- Today, `Real` does not really behave as a subset given the blanket implementation. Users implement `Real` xor `Float`, rather than implementing `Real` and any additional traits to satisfy `Float`. That relationship seems a bit backwards to me.
- The blanket implementation makes it difficult to generically implement `Real` and `Float`. In my crate, this is a problem, because I implement both based on constraints on a wrapped type `T`. Attempting an implementation of `Float` will always clash with an implementation of `Real`, regardless of the constraints (see the sketch after this list). This is the difficulty I'm referencing in the title of this issue.
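To make the clash concrete, here is a minimal reproduction, continuing the toy traits above. `Proxy` is a hypothetical stand-in for my crate's wrapper types, which implement these traits based on constraints on the wrapped type `T`:

```rust
// A wrapper that wants both impls, guarded by different constraints on the
// inner type.
struct Proxy<T>(T);

// Implement Real whenever the inner type is Real...
impl<T: Real> Real for Proxy<T> {
    fn recip(self) -> Self {
        Proxy(Real::recip(self.0))
    }
}

// ...and Float whenever the inner type is Float. This does not compile:
// because this impl makes `Proxy<T>: Float` possible, the blanket
// `impl<T: Float> Real for T` also applies to Proxy, and rustc rejects the
// Real impl above with E0119 (conflicting implementations of trait `Real`
// for type `Proxy<_>`). No choice of constraints avoids the overlap.
impl<T: Float> Float for Proxy<T> {
    fn recip(self) -> Self {
        Proxy(Float::recip(self.0))
    }
}
```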
My guess is that the second point will almost always be a problem for parameterized types (i.e., wrappers) that may want to implement `Float` and/or `Real`.
The most obvious workaround for my crate is to remove the parameterized `impl`s for `Real` and `Float` and duplicate a lot of code to implement them for the concrete type definitions that the crate re-exports. I can reduce the duplication with a macro, but that makes the code more fragile: type definitions that should pick up these implementations based on their input types alone would no longer get them implicitly.
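For completeness, a sketch of what that macro workaround looks like, again using the toy `Real` trait from above. `NotNan` and `Finite` are hypothetical stand-ins for the concrete type definitions my crate re-exports:

```rust
// Concrete type definitions that would otherwise be covered by a single
// parameterized impl over the wrapped type.
struct NotNan(f64);
struct Finite(f64);

// One macro invocation per concrete type, instead of one generic impl.
macro_rules! impl_real {
    ($t:ty) => {
        impl Real for $t {
            fn recip(self) -> Self {
                Self(self.0.recip())
            }
        }
    };
}

impl_real!(NotNan);
impl_real!(Finite);
// The fragility: a new type definition no longer picks this up implicitly;
// it silently lacks the impl until someone remembers to add an invocation.
```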
Thanks for taking a look at this. Thoughts?