[numpy] Add torch.nan_to_num #44592
Conversation
💊 CI failures summary and remediations (as of commit 6141370; more details on the Dr. CI page):
🚧 1 fixed upstream failure: this was probably caused by an upstream breakage that was already fixed.
Please rebase on the
Codecov Report

@@            Coverage Diff             @@
##           master   #44592      +/-   ##
==========================================
- Coverage   68.15%   68.07%    -0.09%
==========================================
  Files         396      393        -3
  Lines       51133    50990      -143
==========================================
- Hits        34851    34709      -142
+ Misses      16282    16281        -1

Continue to review the full report at Codecov.
@mruberry Please take a look :) I have made an independent test, as I don't think that
Should we just manually add
Hi, for the formula issue, it is an oversight on our side, sorry. Also, for the backward function, I think you can use
* update derivative not required ops.
* add gradgradcheck to test.
* use isinf.
Thanks, that worked!
Ah. Right! Thanks!
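For readers following the discussion above, here is a rough Python-level sketch of the masking idea (illustrative names only, not the PR's actual derivative definition): gradient flows only through entries that were finite in the forward input, since NaN/inf entries are replaced by constants.

```
import torch

# Hypothetical helper: zero the incoming gradient wherever the forward input
# was NaN or +/-inf; finite entries pass the gradient through unchanged.
def nan_to_num_backward_sketch(grad_output: torch.Tensor, inp: torch.Tensor) -> torch.Tensor:
    nonfinite = torch.isnan(inp) | torch.isinf(inp)
    return torch.where(nonfinite, torch.zeros_like(grad_output), grad_output)

x = torch.tensor([1.0, float('nan'), float('inf'), float('-inf')])
g = torch.ones_like(x)
print(nan_to_num_backward_sketch(g, x))  # tensor([1., 0., 0., 0.])
```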
The autograd part looks good to me. Just a small comment on the test.
I'll let @mruberry review the rest.
On CPU, comparing

cpu_kernel(iter, [=](scalar_t a) -> scalar_t {
  return (
      at::_isnan(a)
          ? nan_replacement
          : (a == std::numeric_limits<scalar_t>::infinity()
                 ? pos_inf_replacement
                 : (a == -std::numeric_limits<scalar_t>::infinity()
                        ? neg_inf_replacement
                        : a)));
});

with

cpu_kernel(iter, [=](scalar_t a) -> scalar_t {
  return at::_isfinite(a)
      ? a
      : (at::_isnan(a)
             ? nan_replacement
             : (a == std::numeric_limits<scalar_t>::infinity()
                    ? pos_inf_replacement
                    : neg_inf_replacement));
});
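For anyone untangling the nested ternaries above, here is a rough Python restatement of the replacement logic, assuming NumPy-style defaults (0 for NaN, the dtype's finite extremes for +/-inf). It is a sketch of the idea only, not the kernel itself.

```
import torch

def nan_to_num_reference(a, nan=0.0, posinf=None, neginf=None):
    # Defaults mirror np.nan_to_num: +inf -> dtype max, -inf -> dtype min.
    finfo = torch.finfo(a.dtype)
    posinf = finfo.max if posinf is None else posinf
    neginf = finfo.min if neginf is None else neginf
    out = torch.where(torch.isnan(a), torch.full_like(a, nan), a)
    out = torch.where(out == float('inf'), torch.full_like(a, posinf), out)
    out = torch.where(out == float('-inf'), torch.full_like(a, neginf), out)
    return out

x = torch.tensor([float('nan'), float('inf'), float('-inf'), 1.5])
print(nan_to_num_reference(x))
```

The second kernel variant checks at::_isfinite first, so the common case (a finite value) needs only one test instead of walking the full chain of comparisons.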
@mruberry PTAL :) I have addressed the comments and updated the kernel. The errors are not related to this PR. I will rebase after addressing any new comments. Thanks!
test/test_unary_ufuncs.py (outdated)

for x in generate_numeric_tensors(dtype=dtype, device=device):
    if not contiguous:
        x = x.T
    self.compare_with_numpy(torch.nan_to_num, np.nan_to_num, x)
Will this and the next comparison be covered by the existing unary ufunc tests?
That is, I'm wondering if only the tests where the extra args are set are needed in this test?
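As a concrete illustration of the "extra args" case being discussed, the dedicated test would focus on comparisons like the following, where explicit replacement values are passed. This is a hypothetical standalone snippet, not the PR's actual test code.

```
import numpy as np
import torch

x = torch.tensor([float('nan'), float('inf'), float('-inf'), 2.0])

# Explicit replacement values; the default-argument path is assumed to already
# be exercised by the generic unary-ufunc (OpInfo) tests.
expected = np.nan_to_num(x.numpy(), nan=1.0, posinf=10.0, neginf=-10.0)
actual = torch.nan_to_num(x, nan=1.0, posinf=10.0, neginf=-10.0)
assert np.allclose(actual.numpy(), expected)
```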
I am not sure about the method variant, but torch.nan_to_num is verified. Do you know where the method variants are tested?
Variants are tested for consistency here:
pytorch/test/test_unary_ufuncs.py, line 200 in 7cde662:
def test_variant_consistency(self, device, dtype, op):
It's not the most thorough test, but we didn't want to run the tests again for method variants because it would greatly increase test time (and cost).
Ah. Will remove those two then. Thanks!
test/test_unary_ufuncs.py (outdated)

@dtypes(*(torch.testing.get_all_int_dtypes() + torch.testing.get_all_fp_dtypes(include_bfloat16=False)))
def test_nan_to_num(self, device, dtype):
    for contiguous in [False, True]:
        for x in generate_numeric_tensors(dtype=dtype, device=device):
I wonder if this test should use a different generator with more non-finite values, since the numeric tensors will be tested elsewhere?
I think this should be fine, as the generator already produces tensors with extremal values; otherwise we would be creating an almost identical function that does the same thing (takes dtype and device) and generates tensors. Let me know what you think.
I think you can just call make_tensor:
pytorch/torch/testing/_internal/common_utils.py, line 1432 in 7cde662:
def make_tensor(size, device: torch.device, dtype: torch.dtype, *,
And add the extremal values if it's float. You could create a smaller 64x64 tensor, for example. generate_numeric_tensors will create a very large tensor.
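A possible sketch of that suggestion (hypothetical helper name; it assumes make_tensor at that commit accepts size, device, and dtype positionally, as in the signature quoted above):

```
import torch
from torch.testing._internal.common_utils import make_tensor

def small_tensor_with_extremals(device, dtype):
    # A small 64x64 tensor instead of the very large generate_numeric_tensors output.
    t = make_tensor((64, 64), device, dtype)
    if dtype.is_floating_point:
        # Mix in the non-finite values that nan_to_num actually has to replace.
        extremals = torch.tensor(
            [float('nan'), float('inf'), float('-inf')], device=device, dtype=dtype)
        t.view(-1)[:3] = extremals
    return t
```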
Oh. Sure. Thanks!
Thanks for the benchmark. The PR continues to look good, of course. I just have a couple of questions about refining the test in test_unary_ufunc.py to ensure it's not redundant with the other OpInfo tests. Maybe just testing some smaller tensors with more non-finite values would be best?
Gentle Ping :)
Awesome! Exciting to have torch.nan_to_num!
Gentle Ping :)
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Thanks for the ping, and sorry for the delay. It's been a very busy week with the branch cut. Things should get back to normal soon.
Congrats @kshitij12345! This is one of the first, if not the first, new functions to be added to PyTorch 1.8!
Summary: Related to #44592. This PR fixes the deprecation warnings for the nan_to_num function. Below is the warning message when building the latest code.

```
../aten/src/ATen/native/UnaryOps.cpp: In function ‘at::Tensor& at::native::nan_to_num_out(at::Tensor&, const at::Tensor&, c10::optional<double>, c10::optional<double>, c10::optional<double>)’:
../aten/src/ATen/native/UnaryOps.cpp:397:45: warning: ‘bool c10::isIntegralType(c10::ScalarType)’ is deprecated: isIntegralType is deprecated. Please use the overload with 'includeBool' parameter instead. [-Wdeprecated-declarations]
 if (c10::isIntegralType(self.scalar_type())) {
```

The deprecation warning is defined in `ScalarType.h`:
https://github.com/pytorch/pytorch/blob/d790ec6de01a61fe81733c41a64b6092bacfb7bd/c10/core/ScalarType.h#L255-L260

Pull Request resolved: #46309
Reviewed By: mrshenli
Differential Revision: D24310248
Pulled By: heitorschueroff
fbshipit-source-id: 0f9f2ad304eb5a2da9d2b415343f2fc9029037af
Reference #42515
TODO: