Skipping two c10d tests only if there are multi-GPUs (pytorch#14860)
Summary:
Otherwise, these tests fail on single-GPU machines, even though they were never meant to run there.
Pull Request resolved: pytorch#14860

Differential Revision: D13369060

Pulled By: teng-li

fbshipit-source-id: 8a637a6d57335491ba8602cd09927700b2bbf8a0
teng-li authored and facebook-github-bot committed Dec 7, 2018
1 parent ada8f82 commit bfa666e
Showing 1 changed file with 2 additions and 0 deletions.
test/test_c10d.py (2 additions, 0 deletions)
@@ -1565,6 +1565,7 @@ def test_fp16(self):
         )

     @skip_if_not_nccl
+    @skip_if_not_multigpu
     def test_queue_reduction(self):
         # Set up process group.
         store = c10d.FileStore(self.file.name, self.world_size)
@@ -1592,6 +1593,7 @@ def test_queue_reduction(self):
             torch.ones(10) * (self.world_size + 1) * len(devices) / 2.0)

     @skip_if_not_nccl
+    @skip_if_not_multigpu
     def test_sync_reduction(self):
         # Set up process group.
         store = c10d.FileStore(self.file.name, self.world_size)
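For context, a decorator like `@skip_if_not_multigpu` typically checks the visible GPU count before the test body runs and raises `unittest.SkipTest` when fewer than two GPUs are available. A minimal sketch of that pattern (the helper `_visible_gpu_count` and the `DemoTest` class are illustrative stand-ins, not PyTorch code; the real decorator lives in PyTorch's c10d test utilities and would consult `torch.cuda.device_count()`):

```python
import unittest
from functools import wraps


def _visible_gpu_count():
    # Illustrative stub: the real decorator would query
    # torch.cuda.device_count() here. We pretend this is
    # a single-GPU machine so the skip path is exercised.
    return 1


def skip_if_not_multigpu(fn):
    """Skip the wrapped test unless at least 2 GPUs are visible."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        if _visible_gpu_count() < 2:
            raise unittest.SkipTest("multi-GPU test: requires >= 2 GPUs")
        return fn(*args, **kwargs)
    return wrapper


class DemoTest(unittest.TestCase):
    @skip_if_not_multigpu
    def test_needs_two_gpus(self):
        # On a single-GPU machine this body never runs;
        # the test is reported as skipped, not failed.
        self.assertTrue(True)
```

Raising `SkipTest` inside the wrapper (rather than failing) is what lets single-GPU CI machines report these tests as skipped, which is the behavior this commit restores for `test_queue_reduction` and `test_sync_reduction`.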
