Lazily initialize CUDA devices#610

Merged
soumith merged 1 commit into torch:master from colesbury:lazy
Nov 23, 2016

Conversation

@colesbury
Contributor

Previously, cutorch would initialize every CUDA device and enable P2P
access between all pairs. This slows down start-up, especially with 8
devices. Now, THCudaInit does not initialize any devices and P2P access
is enabled lazily. Setting the random number generator seed also does
not initialize the device until random numbers are actually used.

@soumith soumith merged commit f46ca39 into torch:master Nov 23, 2016
@soumith
Member

soumith commented Nov 23, 2016

Thanks!

