Fix upsample bicubic2d batching handling on CPU. (pytorch#52389)
Summary:
Pull Request resolved: pytorch#52389

Fixes: pytorch#49159

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D26496319

Pulled By: gchanan

fbshipit-source-id: d385cd683ef09e0596a9875ce84d03e6e77acc93
gchanan authored and facebook-github-bot committed Feb 18, 2021
1 parent c7b0005 commit f72b4b8
Showing 2 changed files with 2 additions and 2 deletions.
aten/src/ATen/native/UpSampleBicubic2d.cpp (2 changes: 1 addition & 1 deletion)
@@ -26,7 +26,7 @@ static void upsample_bicubic2d_out_frame(
         const scalar_t* in = &idata[output_y * input_width + output_x];
         scalar_t* out = &odata[output_y * output_width + output_x];

-        for (int64_t c = 0; c < channels; ++c) {
+        for (int64_t c = 0; c < channels * nbatch; ++c) {
           out[0] = in[0];
           in += input_width * input_height;
           out += output_width * output_height;
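The loop above sits in what appears to be the kernel's same-size copy path, which walks contiguous (batch, channel) planes of an NCHW tensor; bounding it by channels alone meant only the first sample's planes were ever written. A minimal reproduction sketch (illustrative only, not part of the commit; it borrows the batched 2x2x2x2 shape from the test below but uses random values so the copy is observable):

import torch
import torch.nn.functional as F

# Batched NCHW input; scale_factor=1 keeps the spatial size unchanged, so the
# CPU kernel should hit the plain per-plane copy loop patched above.
x = torch.rand(2, 2, 2, 2)
y = F.interpolate(x, scale_factor=1, mode='bicubic', align_corners=False)

# With the channels * nbatch bound every sample round-trips intact; before the
# fix, planes beyond the first batch could come back unwritten on CPU.
assert y.shape == x.shape
assert torch.allclose(y, x)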
test/test_nn.py (2 changes: 1 addition & 1 deletion)
@@ -8714,7 +8714,7 @@ def test_upsamplingBicubic2d(self):
             kwargs = dict(mode='bicubic', align_corners=align_corners)
             # test float scale factor up & downsampling
             for device in device_list:
-                for scale_factor in [0.5, 1.5, 2]:
+                for scale_factor in [0.5, 1, 1.5, 2]:
                     in_t = torch.ones(2, 2, 2, 2).to(device)
                     out_t = F.interpolate(in_t, scale_factor=scale_factor, **kwargs)
                     out_size = int(math.floor(in_t.shape[-1] * scale_factor))
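Adding 1 to the scale-factor list is what reaches the fixed path: for the 2x2 spatial input the computed out_size equals the input size, so interpolation reduces to the per-plane copy shown in UpSampleBicubic2d.cpp. A small worked check of that arithmetic (illustrative only):

import math

in_size = 2  # spatial size of the 2x2x2x2 test tensor
for scale_factor in [0.5, 1, 1.5, 2]:
    out_size = int(math.floor(in_size * scale_factor))
    print(scale_factor, out_size)  # 0.5 -> 1, 1 -> 2, 1.5 -> 3, 2 -> 4
# Only scale_factor == 1 leaves the size unchanged, which is the case that
# exercises the batched copy loop fixed above.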
