
Updated the allclose function to have less precision according to the dtype of input DNDarrays. #1177

Closed

Conversation

@Sai-Suraj-27 (Collaborator) commented Jul 8, 2023

Description

Updated the allclose function (in heat/core/logical.py) to have less precision according to the dtype of the input DNDarrays.

Issue/s resolved: #889

Changes proposed:

  • Adjusting the tolerance values allows for a looser comparison of float32 arrays.
  • Now ht.allclose will factor in the dtype of the inputs when determining the limits (see the sketch below).
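
For illustration, a minimal sketch of what the change would mean for callers (the arrays, the 1.00005 factor, and the explicit tolerances below are made up for this example; only ht.allclose and its rtol/atol parameters come from the PR):

```python
import heat as ht

# two float32 arrays whose entries differ by a relative error of about 5e-5
a = ht.arange(1, 101, dtype=ht.float32)
b = a * 1.00005

# Under the proposal, calling ht.allclose without rtol/atol would pick limits
# based on the (float32) dtype of the inputs instead of the fixed NumPy defaults.
print(ht.allclose(a, b))
# Explicit tolerances keep overriding any default, dtype-aware or not.
print(ht.allclose(a, b, rtol=1e-4, atol=1e-8))
```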

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Due Diligence

  • All split configurations tested
  • Multiple dtypes tested in relevant functions
  • Title of PR is suitable for corresponding CHANGELOG entry

Does this change modify the behaviour of other functions? If so, which?

Yes.
ht.allclose

@github-actions (bot) commented

Thank you for the PR!

@mrfh92 (Collaborator) left a comment

I think we should discuss the addressed issues (in principle, only regarding the choice of some default values) in the PR talk next week. Apart from that, your changes look fine 👍

-    x: DNDarray, y: DNDarray, rtol: float = 1e-05, atol: float = 1e-08, equal_nan: bool = False
+    x: DNDarray,
+    y: DNDarray,
+    rtol: Optional[float] = None,
Collaborator comment on the hunk above:
I think this needs to be discussed (e.g. in the PR talk): should we keep the NumPy/PyTorch standard and choose some fixed float limits as defaults (but adapted to Heat's float32 default datatype), or do we want to proceed differently and let the tolerances be chosen automatically, by default, depending on the input datatypes?
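
To make the two options concrete, a rough sketch (the helper name default_tolerances and the scaling factors are illustrative assumptions, not Heat code):

```python
import numpy as np

def default_tolerances(dtype, fixed=True):
    """Illustrative helper: return (rtol, atol) for a floating-point dtype."""
    if fixed:
        # option (a): NumPy/PyTorch-style fixed defaults, independent of dtype
        return 1e-5, 1e-8
    # option (b): scale with the machine epsilon of the input dtype
    # (the factors 100 and 10 are arbitrary choices for this sketch)
    eps = float(np.finfo(dtype).eps)   # ~2.2e-16 for float64, ~1.2e-7 for float32
    return 100.0 * eps, 10.0 * eps

print(default_tolerances(np.float64, fixed=False))  # tight limits for double precision
print(default_tolerances(np.float32, fixed=False))  # much looser limits for single precision
```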

+    if dtype_precision != 64:
+        adjustment_factor = 64 / dtype_precision
+
+    if rtol is None:
Collaborator comment on the hunk above:
If I understand this correctly, the default atol here is 1e-8 for float64 (as in NumPy). Surprisingly, the same default atol is chosen in PyTorch as well, where the default datatype is float32, so I am not even sure whether the PyTorch default itself is a bit inconsistent. Maybe this needs to be discussed together with the issue mentioned above.

Nevertheless, I think that adjustment_factor needs to be chosen in another way: as a rule of thumb, machine precision is roughly 1e-16 for float64 and roughly 1e-7 for float32; therefore, just doubling the tolerance when switching from float64 to float32 will not be sufficient.
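
For reference, a quick numerical check of that argument (the 64 / dtype_precision rule is taken from the diff above; the epsilon values are standard IEEE 754 figures):

```python
import numpy as np

rtol_default, atol_default = 1e-5, 1e-8   # the float64 / NumPy defaults
adjustment_factor = 64 / 32               # the diff's rule, evaluated for float32 inputs
print(rtol_default * adjustment_factor, atol_default * adjustment_factor)  # 2e-05 2e-08

# Machine epsilon, by contrast, differs by a factor of roughly 5e8 between the dtypes:
print(np.finfo(np.float64).eps)   # ~2.2e-16
print(np.finfo(np.float32).eps)   # ~1.2e-07
# The adjusted atol of 2e-08 is still below float32 machine epsilon, which is why
# a plain factor of two is unlikely to cover ordinary float32 rounding error.
```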

Reply from the Collaborator Author (@Sai-Suraj-27):

Sure, I agree. I think adjustment_factor can be changed depending on the default values we decide on. Please let me know once you have discussed this and come to a conclusion.

@ClaudiaComito self-assigned this Jul 24, 2023

@ClaudiaComito (Contributor) commented

What's the status of this PR? I don't remember why it was assigned to me 🙈

@ClaudiaComito removed their assignment Sep 4, 2023
@ClaudiaComito self-requested a review Sep 4, 2023
@mtar added and removed the PR talk label Sep 11, 2023
@ClaudiaComito (Contributor) commented Sep 18, 2023

Dear @Sai-Suraj-27, after many internal discussions we have decided not to merge this PR and not to address the original issue #889. The background is this: atol and rtol would have to be set manually for many operations anyway, the NumPy defaults aren't particularly related to 64-bit precision (otherwise they would be much lower), and the NumPy defaults are identical to the torch defaults.

ht.allclose will stick to the NumPy defaults; deviations will have to be addressed in individual test functions, as we have been doing so far.

Thanks for your work, @Sai-Suraj-27, and for helping us close that old issue.
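
In practice, addressing deviations in individual test functions means passing explicit tolerances to ht.allclose where float32 rounding is expected; a minimal sketch (array contents and tolerance values are made up for illustration):

```python
import heat as ht

expected = ht.full((100,), 1000.0, dtype=ht.float32)
computed = expected + 0.05   # stand-in for a float32 result carrying some rounding noise

# ht.allclose keeps the NumPy defaults (rtol=1e-5, atol=1e-8), which reject this
# difference; the individual test loosens the tolerances explicitly instead.
assert not ht.allclose(expected, computed)
assert ht.allclose(expected, computed, rtol=1e-4, atol=1e-6)
```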

@mtar removed the PR talk label Nov 9, 2023
Linked issue #889: ht.allclose should factor in the dtype of the inputs when determining the limits