The `inference/distributed` directory houses examples of running distributed inference with `accelerate`. The strategy followed there is to load an entire model onto each GPU and send chunks of a batch through each GPU's model copy at a time. Synthetic data generation has become an essential tool for every ML engineer, so it would be beneficial to extend these examples to cover more use cases:
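In `accelerate` this pattern is typically expressed with `PartialState.split_between_processes`, which hands each process its slice of the inputs. The underlying chunking can be sketched in plain Python (the function name and the four-GPU setup below are illustrative, not from the examples themselves):

```python
def split_between_ranks(batch, num_ranks, rank):
    """Give each rank (GPU) a contiguous chunk of the batch, mirroring the
    one-model-copy-per-GPU strategy: every GPU holds the full model and
    only sees its own slice of the inputs."""
    chunk = -(-len(batch) // num_ranks)  # ceiling division
    return batch[rank * chunk : (rank + 1) * chunk]

# Hypothetical batch of prompts split across 4 GPUs:
prompts = [f"prompt-{i}" for i in range(10)]
per_gpu = [split_between_ranks(prompts, 4, r) for r in range(4)]
# Every prompt lands on exactly one GPU; the last rank may get fewer items.
```

With `accelerate` itself, the same idea is roughly `with PartialState().split_between_processes(prompts) as subset: ...`, run under `accelerate launch`.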
Some nice to haves:

- Keep the artifact serialization code in a separate thread so that it does not block GPU execution
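One way to sketch that nice-to-have: a writer thread drains a queue of generated samples and serializes them, so the main thread driving the GPU never waits on disk I/O. The file name, record schema, and sentinel convention here are assumptions for illustration:

```python
import json
import queue
import threading

def writer(path, q):
    """Drain generated samples from the queue and append them as JSON lines,
    off the main (GPU-driving) thread."""
    with open(path, "w") as f:
        while True:
            item = q.get()
            if item is None:  # sentinel: generation is finished
                break
            f.write(json.dumps(item) + "\n")

q = queue.Queue()
t = threading.Thread(target=writer, args=("samples.jsonl", q), daemon=True)
t.start()

# Stand-in for the generation loop: the GPU keeps producing while the
# writer thread handles serialization in the background.
for i in range(3):
    q.put({"id": i, "text": f"sample {i}"})

q.put(None)  # signal the writer to finish
t.join()
```

A bounded `queue.Queue(maxsize=...)` would additionally apply back-pressure if generation outpaces the disk.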
How can you help?
You could contribute an example for any of the above-mentioned use cases, or come up with your own 🤗 Help us make the art of synthetic data generation scalable, easy, and accessible.