Conversation

@vfdev-5 (Collaborator) commented on May 14, 2021

Description:

  • Added CUDA support for interpolation AA bilinear/bicubic
  • Updated tests

Comment on lines 161 to 165
// Setup local buffers
scalar_t wx[256];
scalar_t wy[256];
scalar_t buffer1[256];
scalar_t buffer2[256];
@vfdev-5 (Collaborator, Author) commented on May 14, 2021

This assumes that the resize scale is less than 127.
Otherwise, we could specify a dynamic shared memory size before launching the CUDA kernel; however, we would then have to make sure the device has enough shared memory...
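
For context, here is a minimal sketch of the dynamic shared memory alternative mentioned above; the kernel name, launcher, and sizing formula are illustrative assumptions, not code from this PR.

// Hypothetical sketch: size the weight buffers at launch time instead of the
// fixed scalar_t wx[256] / wy[256] local arrays.
// Needs <cuda_runtime.h> and <c10/util/Exception.h> (for TORCH_CHECK).
template <typename scalar_t>
__global__ void upsample_aa_dynamic_kernel(/* real arguments omitted */) {
  // One dynamically sized block-level buffer; each thread would carve out
  // its own slice for wx, wy and the intermediate sums.
  extern __shared__ unsigned char smem[];
  scalar_t* weights = reinterpret_cast<scalar_t*>(smem);
  // ... compute antialiasing weights into `weights` and accumulate ...
}

// Hypothetical launcher: request enough shared memory for the scale factor
// and verify the device can actually provide it before launching.
void launch_upsample_aa(int blocks, int interp_size, float rheight) {
  const int threads = 256;
  const int weights_per_thread = interp_size * (static_cast<int>(rheight) + 1);
  const size_t shmem =
      static_cast<size_t>(threads) * weights_per_thread * sizeof(float);

  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, /*device=*/0);
  TORCH_CHECK(shmem <= static_cast<size_t>(prop.sharedMemPerBlock),
      "Not enough shared memory for this scale factor");

  upsample_aa_dynamic_kernel<float><<<blocks, threads, shmem>>>(/* ... */);
}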

@fmassa (Member) replied:

This is fine for now as long as we raise an error if this configuration is not satisfied, which is what we currently do.

We will need to fix this for PyTorch, though.

@vfdev-5 requested a review from fmassa on May 14, 2021, 22:43
@fmassa (Member) left a comment:

Thanks for the quick PR, Victor!

The PR looks good to merge; I just have one question first. Can you have a look and let me know what you think?

Comment on lines 271 to 272
TORCH_CHECK(rheight < 127, "Max supported scale factor is 127");
TORCH_CHECK(rwidth < 127, "Max supported scale factor is 127");
@fmassa (Member):

Is the maximum supported scale factor different if we use bicubic?

@vfdev-5 (Collaborator, Author) replied on May 18, 2021:

Thanks for catching that. Yes, for bicubic the max scale should be at most 63. In general the check should be:

TORCH_CHECK(rheight < 255 / interp_size, "Max supported scale factor is 127 (bilinear), 63 (bicubic)");
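
For reference, a sketch of where these numbers come from (the helper name and constant below are assumptions, not this PR's identifiers): the fixed local buffers above hold 256 weights, and an antialiased output pixel needs roughly interp_size * scale of them, where interp_size is 2 for bilinear and 4 for bicubic, so the scale has to stay below 255 / interp_size, i.e. 127 for bilinear and 63 for bicubic.

// Hypothetical helper showing the generalized bound for both dimensions.
// TORCH_CHECK comes from <c10/util/Exception.h>.
constexpr int kMaxWeights = 256;  // matches scalar_t wx[256] / wy[256]

void check_aa_scale_limits(float rheight, float rwidth, int interp_size) {
  // interp_size == 2 (bilinear) gives a limit of 127; 4 (bicubic) gives 63.
  const int max_scale = (kMaxWeights - 1) / interp_size;
  TORCH_CHECK(rheight < max_scale,
      "Max supported scale factor is 127 (bilinear), 63 (bicubic)");
  TORCH_CHECK(rwidth < max_scale,
      "Max supported scale factor is 127 (bilinear), 63 (bicubic)");
}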

@vfdev-5 force-pushed the vfdev-5/interpolate-aa-cuda branch from c76e8fa to 4e43b5c on May 19, 2021, 10:38
@fmassa (Member) left a comment:

LGTM, thanks!

Can you rebase the PR so that we can proceed with merging it?

@fmassa merged commit 3c47bfd into pytorch:master on May 19, 2021
@vfdev-5 deleted the vfdev-5/interpolate-aa-cuda branch on May 19, 2021, 14:02
facebook-github-bot pushed a commit that referenced this pull request on May 19, 2021

Summary:
* Added CUDA support for interpolation AA bilinear/bicubic

* Fixed code formatting error

* Fixed cuda tests on cpu-only builds

* Fixed cuda internal torch check for bicubic mode

Reviewed By: cpuhrsch

Differential Revision: D28538753

fbshipit-source-id: 8b6c16db07a05fdba4b35cb8f5ec077b50691fb7