Tags: StevenYangCC/pytorch
[1.7] Remove torch.vmap (pytorch#45571) torch.vmap is a prototype feature and should not be in the stable binary. This PR: - Removes the `torch.vmap` API - Removes the documentation entry for `torch.vmap` - Changes the vmap tests to use an internal API instead of `torch.vmap`. Test Plan: - Tested locally (test_torch, test_type_hints, test_vmap), but also wait for CI.
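The semantics of the removed prototype can be illustrated with a pure-Python sketch, using lists in place of tensors. This loop-based `vmap` is only a model of the API's behavior (map a function over a leading batch dimension), not PyTorch's vectorized implementation:

```python
def vmap(fn):
    """Return a function that maps `fn` over the leading (batch) dimension.

    Pure-Python sketch of vmap's semantics; the real prototype
    vectorized the computation instead of looping.
    """
    def batched(*batched_args):
        batch_size = len(batched_args[0])
        return [fn(*(arg[i] for arg in batched_args))
                for i in range(batch_size)]
    return batched

# Map a scalar dot product over a batch of vector pairs.
dot = lambda x, y: sum(a * b for a, b in zip(x, y))
batched_dot = vmap(dot)
```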
Revert "[release/1.6] .circleci: Don't use SCCACHE for windows release builds (pytorch#42024)" This reverts commit 994b37b.
[release/1.6] [JIT] Don't include view ops in autodiff graphs (pytorch#42029) * Don't include view ops in autodiff graphs * skip view ops in autodiff testing * two more tests * appease clang-format * Pacify clang-format Co-authored-by: eellison <eellison@fb.com> Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
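The policy in this change can be sketched as a partitioning step: view ops are left out of the differentiable subgraph rather than fused into it. The op names and string-based node representation below are illustrative; the real pass operates on JIT IR nodes:

```python
# Hypothetical op-name set; the actual list lives in the JIT autodiff pass.
VIEW_OPS = {"aten::view", "aten::reshape", "aten::expand", "aten::permute"}

def partition_for_autodiff(nodes):
    """Split op names into (autodiff_graph, skipped_view_ops).

    Sketch of the commit's policy: view ops are excluded from the
    differentiable subgraph instead of being included in it.
    """
    autodiff, skipped = [], []
    for op in nodes:
        (skipped if op in VIEW_OPS else autodiff).append(op)
    return autodiff, skipped
```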
[jit] fix tuple alias analysis (pytorch#41992) Previously, when analyzing a TupleConstruct, we ignored the aliasing information of the inputs and simply marked all elements of the returned tuple as wildcards. But since we can fully reason about the contents of a tuple statically, we should be able to assign them aliasing information. This analysis was not only incomplete but produced incorrect results: if `a` is not a wildcard, then `a noalias wildcard`. So if we looked at `tuple(a)` and reported the aliasing info as `tuple(wildcard)`, we would conclude `tuple[0] noalias a`, which is wrong.
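The before/after behavior can be modeled with alias sets as frozensets of value names, with "*" standing for the wildcard. This is a toy model of the analysis, not the JIT's actual alias-database code:

```python
WILDCARD = frozenset({"*"})

def analyze_tuple_construct(element_alias_sets, precise=True):
    """Alias info for the elements of tuple(a, b, ...).

    With precise=True (the fix), each tuple element keeps the alias set
    of the corresponding input, since tuple contents are statically
    known. precise=False reproduces the old behavior of marking every
    element a wildcard, which wrongly implied `tuple[0] noalias a`.
    """
    if precise:
        return [frozenset(s) for s in element_alias_sets]
    return [WILDCARD] * len(element_alias_sets)
```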
scatter/gather - check that inputs are of the same dimensionality (pytorch#41890) Co-authored-by: Nikita Vedeneev <nik@quansight.com>
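The shape check can be sketched as follows, operating on the number of dimensions of each tensor rather than on real tensors. The function name and error wording are illustrative, not PyTorch's exact messages:

```python
def check_scatter_gather_dims(self_dim, index_dim, src_dim=None):
    """Raise if scatter/gather inputs differ in dimensionality.

    Sketch of the validation added in this commit: `index` (and `src`,
    for scatter) must have the same number of dimensions as `self`.
    """
    if index_dim != self_dim:
        raise RuntimeError(
            f"Index tensor must have the same number of dimensions as "
            f"self tensor ({self_dim}), got {index_dim}")
    if src_dim is not None and src_dim != self_dim:
        raise RuntimeError(
            f"Source tensor must have the same number of dimensions as "
            f"self tensor ({self_dim}), got {src_dim}")
```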
Update pthreadpool to pthreadpool:029c88620802e1361ccf41d1970bd5b07fd6b7bb. (pytorch#40524) (pytorch#41190) Summary: Pull Request resolved: pytorch#40524 Reviewed By: ezyang Differential Revision: D22215742 Pulled By: AshkanAliabadi fbshipit-source-id: ef594e0901337a92b21ddd44e554da66c723eb7c
Release GIL during DDP construction. (pytorch#40877) Summary: Pull Request resolved: pytorch#40495 As part of debugging flaky ddp_under_dist_autograd tests, I realized we were running into the following deadlock. 1) Rank 0 would go into DDP construction, hold GIL and wait for broadcast in DDP construction. 2) Rank 3 is a little slower and performs an RRef fetch call before the DDP construction. 3) The RRef fetch call is done on Rank 0 and tries to acquire GIL. 4) We now have a deadlock since Rank 0 is waiting for Rank 3 to enter the collective and Rank 3 is waiting for Rank 0 to release GIL. ghstack-source-id: 106534442 Test Plan: 1) Ran ddp_under_dist_autograd 500 times. 2) waitforbuildbot Differential Revision: D22205180 fbshipit-source-id: 6afd55342e801b9edb9591ff25158a244a8ea66a Co-authored-by: Pritam Damania <pritam.damania@fb.com>
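The deadlock pattern and its fix can be modeled with plain Python threads, using a `Lock` as a stand-in for the GIL and a `Barrier` as a stand-in for the broadcast collective. This is an analogy, not DDP code; the actual fix releases the GIL in C++ around the blocking broadcast:

```python
import threading

gil = threading.Lock()             # stands in for the Python GIL
collective = threading.Barrier(2)  # stands in for DDP's broadcast

def rank0(log):
    # The deadlocking variant entered the collective while still
    # holding `gil`. The fix, mirrored here, drops it before blocking.
    with gil:
        pass                       # build DDP state under the GIL
    collective.wait(timeout=5)     # ...then block without holding it
    log.append("rank0 done")

def rank3(log):
    with gil:                      # the RRef fetch needs the GIL first
        log.append("rank3 rref fetch")
    collective.wait(timeout=5)     # only then join the collective
    log.append("rank3 done")

log = []
threads = [threading.Thread(target=f, args=(log,)) for f in (rank0, rank3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the old ordering, rank 0 would sit in `collective.wait` holding `gil` while rank 3 blocked trying to acquire it, so neither side could make progress.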
.circleci: Fix upload to backup directory Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
[ONNX] Fix pow op export [1.5.1] (pytorch#39791) * [ONNX] Fix pow op export (pytorch#38065) Summary: Fix pow type cast for opset 9 and update opset 12 Pull Request resolved: pytorch#38065 Differential Revision: D21485353 Pulled By: malfet fbshipit-source-id: 3993e835ffad07b2e6585eb5cf1cb7c8474de2ec * Update ort-nightly version as suggested in pytorch#39685 (comment) * Apply changes from pytorch#37846 to `test_topk_smallest_unsorted` Co-authored-by: neginraoof <neginmr@utexas.edu>
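The kind of type-cast logic involved can be sketched as dtype promotion for an opset-9 `Pow`, whose inputs must be matching floating-point types. The promotion rules and dtype names below are an assumption for illustration, not the exporter's actual code:

```python
# Hypothetical dtype names; the real exporter works on JIT/ONNX types.
FLOAT_ORDER = ["float16", "float32", "float64"]
INTEGRAL = {"int32", "int64"}

def pow_export_dtype(base_dtype, exp_dtype):
    """Choose a common computation dtype for Pow in opset 9 (sketch).

    Assumed policy: integral operands are promoted to float32, mixed
    float widths are promoted to the wider one, and the result is cast
    back to the base operand's original dtype.
    """
    def as_float(d):
        return "float32" if d in INTEGRAL else d
    b, e = as_float(base_dtype), as_float(exp_dtype)
    common = max(b, e, key=FLOAT_ORDER.index)
    return common, base_dtype  # (compute dtype, dtype of final result)
```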