Provide option to reduce CPU RAM usage in Group Offload #11106
Conversation
a-r-r-o-w
left a comment
Awesome, changes look good! Maybe we could refactor to something like this (not necessary though, as it only removes one level of indentation):
context = nullcontext() if self.stream is None else torch.cuda.stream(self.stream)
pinned_context = nullcontext() if self.stream is None else self._pinned_memory_tensors()
with context, pinned_context as pinned_memory:
    ...

I haven't dug in much deeper but I suspect pinning and moving a larger number of tensors to GPU is slower than pinning and moving tensors individually with prefetching?
I don't know the exact reason either, but my understanding is: pinning tensors requires allocating new memory on the CPU and introduces a synchronization each time it is done. What we want is to pay this overhead cost upfront (which is what we do when low_cpu_mem_usage=False) so that weight transfers can overlap with computation.
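As a rough illustration of that upfront-vs-deferred trade-off (not code from this PR; the shapes and loop structure are made up):

```python
import torch

stream = torch.cuda.Stream()
weights = [torch.randn(4096, 4096) for _ in range(8)]

# low_cpu_mem_usage=False: pay the pinning cost once, at setup time.
pinned = [w.pin_memory() for w in weights]  # CPU allocation + copy happens here
for w in pinned:
    with torch.cuda.stream(stream):
        w_gpu = w.to("cuda", non_blocking=True)  # async H2D copy can overlap compute

# low_cpu_mem_usage=True: pinning happens inside the hot loop, so the
# allocation/sync overhead is paid on every onload instead of once.
for w in weights:
    with torch.cuda.stream(stream):
        w_gpu = w.pin_memory().to("cuda", non_blocking=True)
```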
In this case, it seems like leaf_level low_cpu_mem_usage=True is able to hide some of this pinning and sync cost (behind the computation) because it operates at a more granular level, with fewer tensors per group than block_level low_cpu_mem_usage=True. With the latter, it looks like we are basically doing a large number of syncs alongside every computation: the computation finishes very quickly, and this cycle of slow pinning + fast computation repeats. Maybe we'll have to profile this and look at the traces to see what's really going on.
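If it helps, a minimal torch.profiler sketch for inspecting those traces (the stand-in model below is just a placeholder for a module with group offloading applied):

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Sequential(*[torch.nn.Linear(4096, 4096) for _ in range(8)]).cuda()
x = torch.randn(1, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        for _ in range(5):
            model(x)

# Check whether pin_memory/H2D copies overlap compute kernels or serialize against them.
prof.export_chrome_trace("group_offload_trace.json")
print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=20))
```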
Btw, even with the current overhead in the block_level low_cpu_mem_usage=True case, we should still ship this because of the benefit in the leaf_level case, and investigate more later. We could also add a test for numerical correctness in the low_cpu_mem_usage=False vs True case.
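Something like this could work as the numerical-correctness test; the import path, the exact apply_group_offloading signature, and the tiny stand-in model are assumptions on my part, not taken from the PR diff:

```python
import torch
from diffusers.hooks import apply_group_offloading  # assumed import path

def run(low_cpu_mem_usage: bool) -> torch.Tensor:
    torch.manual_seed(0)
    # Small stand-in model; a real test would use an actual transformer module.
    model = torch.nn.Sequential(*[torch.nn.Linear(256, 256) for _ in range(4)])
    apply_group_offloading(
        model,
        onload_device=torch.device("cuda"),
        offload_device=torch.device("cpu"),
        offload_type="leaf_level",
        use_stream=True,
        low_cpu_mem_usage=low_cpu_mem_usage,  # the flag added in this PR
    )
    x = torch.randn(2, 256)
    with torch.no_grad():
        return model(x.to("cuda")).cpu()

# Only where/when pinning happens changes, not the math, so outputs should match.
assert torch.allclose(run(False), run(True))
```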
What does this PR do?
Add a low_cpu_mem_usage option to group offloading so that pinning to CPU memory happens when a group is onloaded. The CPU RAM usage should be similar to sequential offloading. Benchmarked by running 5 forward passes through the Flux Transformer. However, I am observing that using this approach with blocks increases the inference time much more significantly than when using it with leaf-level offloading. I haven't dug in much deeper, but I suspect pinning and moving a larger number of tensors to GPU is slower than pinning and moving tensors individually with prefetching?
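For context, a sketch of how the new flag would be enabled in the benchmark setup above (the model-loading details and the exact apply_group_offloading signature are assumptions, not confirmed by this thread):

```python
import torch
from diffusers import FluxTransformer2DModel
from diffusers.hooks import apply_group_offloading  # assumed import path

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)

# With low_cpu_mem_usage=True, tensors are pinned only when their group is onloaded,
# so peak CPU RAM should stay close to what sequential offloading uses.
apply_group_offloading(
    transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=1,
    use_stream=True,
    low_cpu_mem_usage=True,
)
```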
Results:
Fixes # (issue)
Before submitting
Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.