special case complex division for zero imaginary part #48460
Conversation
What about […]? But fixing that is likely more involved and maybe should be a different PR. Discrepancies like that are at least slightly more defensible than with literal zeros.
Co-authored-by: Oscar Smith <oscardssmith@gmail.com>
The slightly more general case is […]. This works because in this case […].
Sorry for not taking a deeper look at this the first time. This doesn't seem like the right place for a fix. This change seems worse than doing nothing, since […]. It appears that the fastpath […].
We can add a special case for zero real as well ;)
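For readers skimming the thread, here is a minimal sketch of the kind of special case being discussed, including the zero-real variant mentioned above. The function name and exact structure are my own illustration, not the PR's actual diff:

```julia
# Illustrative sketch only (not the change in this PR): special-case divisors
# with a zero imaginary (or zero real) part so the quotient is computed with
# plain real divisions instead of the general complex-division formula.
function div_specialcase(z::Complex{Float64}, w::Complex{Float64})
    a, b = reim(z)
    c, d = reim(w)
    if iszero(d)          # w = c + 0im:  (a + b*im)/c == a/c + (b/c)*im
        return complex(a / c, b / c)
    elseif iszero(c)      # w = 0 + d*im: (a + b*im)/(d*im) == b/d - (a/d)*im
        return complex(b / d, -a / d)
    else
        return z / w      # fall back to the existing general division
    end
end
```

With this, `div_specialcase(1.0 + 0.0im, 0.2 + 0.0im)` gives `5.0 + 0.0im`, the same answer as the real division `1.0 / 0.2`.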
IMO the fast path is worth it. It is occasionally slightly inaccurate, but it is 2x faster and is always pretty accurate.
Alright. I'm having trouble imagining other inputs where the fastpath result would be egregious. Just make sure that the extra checks aren't eroding all the benefit over a pure-slowpath version (don't need to post benchmarks, just know so).
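Not from the thread, but a sketch of the kind of sanity check being asked for, assuming BenchmarkTools and the illustrative `div_specialcase` from the sketch above: compare divisors that do and don't hit the special case, so the cost of the extra branches on the general path is visible.

```julia
using BenchmarkTools

z = 1.0 + 2.0im
w_real = 0.2 + 0.0im   # hits the special case
w_full = 0.2 + 0.3im   # general path, now behind the extra checks

@btime $z / $w_real;
@btime div_specialcase($z, $w_real);
@btime $z / $w_full;
@btime div_specialcase($z, $w_full);
```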
Just to get a second data point I tried Rust (which is not really known for its numerics, but still):

```rust
fn main() {
    println!("Complex float: {}", 1.0 / num::complex::Complex::new(0.2, 0.0));
    println!("Float: {}", 1.0 / 0.2);
}
```

This prints:

```
Complex float: 4.999999999999999+0i
Float: 5
```
For the record, this case is only off by 0.3 ULP (although there are probably cases where it's off by more; my guess is up to ~1.5, but I haven't done the math).
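As an aside, a figure like that can be checked by comparing against a BigFloat reference for the same Float64 input; the snippet below is one way to do it, not code from the thread.

```julia
# Compare the Float64 complex-division result against a high-precision
# reference for the same Float64 input, and express the gap in ULPs of
# the computed result.
fast = real(1 / (0.2 + 0.0im))   # result of the complex code path
ref  = 1 / big(0.2)              # big(0.2) is the exact value of the Float64 literal
ulps = Float64(abs(big(fast) - ref)) / eps(fast)
```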
Does Rust have an […]? In Julia, I would accept […].

Another thought: our automatic differentiation packages have explicit definitions of these computations, right? They don't just derive the result via these implementations? These sorts of special cases ruin derivatives.
Good point. That would be another point in favor of the generalized version I gave.
I think the resulting derivatives would still be incorrect based on […].
I've done the math (i.e. typed it into Wolfram Alpha) and the derivatives work out correct to within epsilon also.
These are IMO bugs in the AD libraries. For example, ForwardDiff handles these types of problems on its master branch (by not considering Dual numbers with non-zero dual part to be equal to a real number).
```julia
julia> using ForwardDiff

julia> ForwardDiff.jacobian(((c,d),)->[c,-d]/(c^2+d^2),[0.2,0.0])
2×2 Matrix{Float64}:
 -25.0    0.0
   0.0  -25.0

julia> ForwardDiff.jacobian(((c,d),)->[inv(c),-d/c],[0.2,0.0])
2×2 Matrix{Float64}:
 -25.0  -0.0
   0.0  -5.0
```

EDIT: The equation used here was wrong and is corrected in a later post. Also, all autodiff is irrelevant here because the input type is […]. If […].

Sorry, this might be an unnecessary diversion. I'll let the issue go unless someone wants to engage further.
Yeah, that's because I typoed. It was supposed to be […].
Thanks. I should have caught that. The point on higher derivatives stands:

```julia
julia> ForwardDiff.hessian(((c,d),)->inv(c),[0.2,0.0])
2×2 Matrix{Float64}:
 250.0  0.0
   0.0  0.0

julia> ForwardDiff.hessian(((c,d),)->c/(c^2+d^2),[0.2,0.0])
2×2 Matrix{Float64}:
 250.0     0.0
   0.0  -250.0
```

Of course, the nicer version of this (that doesn't fall into our aforementioned […]) […].

EDIT: This method is only used for […].
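For anyone checking the numbers, the nonzero (2,2) entry of the second Hessian is what the correct formula predicts. Working directly from the real part of 1/z as a function of (c, d) (my own arithmetic, not from the thread):

```math
\operatorname{Re}\!\left(\frac{1}{c+di}\right) = \frac{c}{c^2+d^2},
\qquad
\left.\frac{\partial^2}{\partial d^2}\,\frac{c}{c^2+d^2}\right|_{d=0}
  = \left.\frac{2c\,(3d^2-c^2)}{(c^2+d^2)^3}\right|_{d=0}
  = -\frac{2}{c^3}
```

At c = 0.2 this is -2/0.008 = -250, matching the -250.0 above, whereas the special-cased `inv(c)` has no d-dependence at all, so its Hessian is zero in that direction.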
Is this ready to merge?
I fixed the test upstream, so I vote for not changing anything until someone else complains about it.
Fixes #48457