Especially w.r.t. data loading, the data should automatically be transferred to the correct device.
Right now the batch is moved to the first GPU, and users have to move the inputs to the correct device themselves before inference. This works, but it is awkward and may limit multi-GPU usage, since the first GPU always gets filled first.
Thanks a lot! I managed to get it working. The key point is that the model is forked onto each device, and model.devices() returns a Vec<Device> whose first element is the device the model is currently on.
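The pattern described above can be sketched as follows. This is a minimal, self-contained illustration using stub types, not the real library's API: the `Device`, `Tensor`, and `Model` definitions here are assumptions made for the example. The idea is that the caller asks the model replica for its devices and moves the batch to the first one before inference, instead of always filling GPU 0.

```rust
// Hypothetical stub types illustrating the device-placement pattern;
// the real library's Device/Tensor/Model types differ.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Device {
    Cpu,
    Cuda(usize), // GPU index
}

struct Tensor {
    device: Device,
}

impl Tensor {
    // Return a copy of the tensor placed on the given device.
    fn to_device(&self, device: Device) -> Tensor {
        Tensor { device }
    }
}

struct Model {
    devices: Vec<Device>,
}

impl Model {
    // Mirrors the described model.devices(): the first element is the
    // device this model replica currently lives on.
    fn devices(&self) -> Vec<Device> {
        self.devices.clone()
    }

    // Stand-in for inference: just reports where the input arrived.
    fn forward(&self, input: &Tensor) -> Device {
        input.device
    }
}

fn main() {
    // Imagine this replica was forked onto GPU 1.
    let model = Model { devices: vec![Device::Cuda(1), Device::Cuda(0)] };
    let batch = Tensor { device: Device::Cpu };

    // Move the batch to the model's current device before inference,
    // rather than unconditionally sending it to the first GPU.
    let target = model.devices()[0];
    let on_device = batch.to_device(target);
    assert_eq!(model.forward(&on_device), Device::Cuda(1));
    println!("ran on {:?}", target);
}
```

With this, each forked replica receives batches on its own device, so work spreads across GPUs instead of piling up on the first one.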