112 combining spatial transforms #131
Conversation
We do need tests for this transform and example of use. We could also add transforms for mirroring in XYZ dimensions and transposing two dimensions.
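A mirroring transform along chosen spatial dimensions could look roughly like the following (a minimal sketch in NumPy; the class name `Flip` and its `spatial_axis` parameter are assumptions for illustration, not the merged API):

```python
import numpy as np

class Flip:
    """Sketch of a mirroring transform; hypothetical, not the merged MONAI API."""

    def __init__(self, spatial_axis):
        self.spatial_axis = spatial_axis  # spatial axes to mirror (0-based)

    def __call__(self, img):
        # img is channel-first; offset each axis by 1 to skip the channel dim
        axes = tuple(ax + 1 for ax in self.spatial_axis)
        return np.flip(img, axis=axes)

img = np.arange(6).reshape(1, 2, 3)   # 1 channel, 2x3 image
mirrored = Flip(spatial_axis=[1])(img)  # mirror left-right
```

A transposing transform would follow the same pattern with `np.transpose` over two chosen spatial axes.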
(branch updated from a208fe2 to 4d23cf1)
This is ready from my side, could you review? @ericspod @atbenmurray @Nic-Ma
Added several comments inline.
As this PR is very big, I need to double-check the math computations carefully later.
Thanks.
Thanks! Addressed most of the comments; the PR is too large to include a solution for the ordering of the operations.
Your transforms can support both CPU and GPU, but all the unit tests only cover `device=None` or CPU.
Could you please run all the unit tests locally on GPU to double-check?
Everything else looks good to me now.
As this PR is very big, I suggest waiting one more day for others to review.
Thanks.
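One way to cover both CPU and GPU in the same unit tests is to parameterize over the devices available at runtime (a sketch using plain `unittest`; the transform call is a stand-in, not this PR's actual tests):

```python
import unittest
import torch

# build the device list once; GPU is exercised only when actually present
DEVICES = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])

class TestTransformOnDevices(unittest.TestCase):
    def test_runs_on_all_devices(self):
        for device in DEVICES:
            with self.subTest(device=device):
                img = torch.zeros(1, 8, 8, device=device)
                out = img.flip(-1)  # stand-in for a spatial transform call
                # the transform should keep the data on the requested device
                self.assertEqual(out.device.type, device)
```

Running the suite on a GPU machine then covers `cuda` automatically, while CI without a GPU still passes on CPU.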
Fixes #111, fixes #112.
Description
Implemented the combined spatial transforms in PyTorch:
This PR includes a demo and a speed test.
Observations:
call `torch.multiprocessing.set_start_method('spawn')` beforehand if both transforms and networks request GPU processing
Status
Ready
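The observation above can be sketched as follows (a minimal illustration; whether `'spawn'` is required depends on the transforms actually allocating CUDA tensors inside worker processes):

```python
import torch.multiprocessing as mp

# CUDA cannot be re-initialized in a forked subprocess, so when both the
# transforms and the network touch the GPU, worker processes (e.g. DataLoader
# workers) must be started with the 'spawn' method before any worker exists.
if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    # ... build the Dataset/DataLoader with num_workers > 0 and train as usual
```

`torch.multiprocessing` re-exports the standard-library `multiprocessing` API, so `set_start_method` behaves as documented there.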
Types of changes