Updated the allclose function to have less precision according to the dtype of input DNDarrays. #1177
Conversation
Thank you for the PR!
I think we should discuss the addressed issues (in principle only regarding the choice of some default values) in the PR talk next week. Apart from that, your changes look fine 👍
- x: DNDarray, y: DNDarray, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False
+ x: DNDarray,
+ y: DNDarray,
+ rtol: Optional[float] = None,
I think this needs to be discussed (e.g. in the PR talk): should we keep the NumPy/PyTorch standard and choose some fixed float limits as default (but adapted to Heat's float32 standard datatype), or do we want to proceed differently and let the tolerances be chosen automatically by default, depending on the input datatypes?
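For concreteness, here is a minimal sketch of what the "automatic" alternative could look like (not Heat's API and not anything agreed on here; the helper name and the eps multiples are assumptions for illustration):

```python
# Sketch only: derive default tolerances from the machine epsilon of the input
# dtype instead of hard-coding float64-calibrated constants.
import torch

def auto_tolerances(dtype: torch.dtype) -> tuple[float, float]:
    eps = torch.finfo(dtype).eps       # ~2.2e-16 for float64, ~1.2e-7 for float32
    return 100.0 * eps, 10.0 * eps     # (rtol, atol) as small multiples of eps

print(auto_tolerances(torch.float64))  # roughly (2.2e-14, 2.2e-15)
print(auto_tolerances(torch.float32))  # roughly (1.2e-05, 1.2e-06)
```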
if dtype_precision != 64:
    adjustment_factor = 64 / dtype_precision

if rtol is None:
If I understand this correctly, the default atol here is 1e-8 for float64 (as in NumPy). Surprisingly, the same default atol is chosen in PyTorch as well, where the default datatype is float32, so I am not sure whether the PyTorch default itself is a bit inconsistent. Maybe this needs to be discussed together with the issue mentioned above.
Nevertheless, I think that adjustment_factor needs to be chosen in another way: as a rule of thumb, machine precision is roughly of size 1e-16 for float64 and 1e-8 for float32; therefore, I think that just doubling the tolerance when switching from float64 to float32 will not be sufficient.
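One way to read this suggestion (an illustrative sketch only; the function name and scaling are assumptions, not the PR's code): adjust the exponent of the tolerance rather than multiplying the tolerance by 64/32:

```python
# float32 machine precision (~1e-8 as a rule of thumb) is roughly the square root
# of float64 precision (~1e-16), so a float64-calibrated tolerance should shrink
# in its exponent, not by a constant factor of 2.
def scaled_tolerance(tol_float64: float, dtype_precision: int) -> float:
    exponent_factor = dtype_precision / 64   # 0.5 for float32
    return tol_float64 ** exponent_factor

print(scaled_tolerance(1e-8, 64))  # 1e-08, unchanged for float64
print(scaled_tolerance(1e-8, 32))  # 1e-04
print(1e-8 * (64 / 32))            # 2e-08 -- the "just doubling" warned about above
```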
Sure, I agree. I think adjustment_factor can be changed depending on the choice of default values we settle on. Please tell me after you have discussed this and come to a conclusion.
What's the status of this PR? I don't remember why it was assigned to me 🙈
Dear @Sai-Suraj-27, after many internal discussions we have decided not to merge this PR and not to address the original issue #889. The background is this:
Thanks for your work @Sai-Suraj-27, and for helping us close that old issue.
Description
Updated the allclose function (in heat/core/logical.py) to use less strict tolerances depending on the dtype of the input DNDarrays.

Issue/s resolved: #889

Changes proposed:
ht.allclose will factor in the dtype of the inputs when determining the limits.

Type of change
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
Yes: ht.allclose
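As a hedged illustration of the behaviour being discussed (plain NumPy with made-up values, not output of this PR): float64-calibrated defaults can already reject discrepancies that are at float32 rounding scale near zero.

```python
import numpy as np

a = np.float32(0.0)
b = np.float32(1e-7)                  # discrepancy at float32 resolution
print(np.allclose(a, b))              # False with the default atol=1e-8
print(np.allclose(a, b, atol=1e-6))   # True once atol reflects float32 precision
```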