[torch.compile] Hide KV cache behind torch.compile boundary #11677
Merged
Commits (22)

176dc6d  hide kv cache behind torch.compile  (heheda12345)
c5a5155  support pp & non-attn layers  (heheda12345)
de8324b  format  (heheda12345)
fa9b0bb  update cpu engine  (heheda12345)
bfa7c71  Merge branch 'main' of github.com:vllm-project/vllm into hide_kv_cache  (heheda12345)
3bb7d6d  support encoder-decoder and move kv_cache to Attention  (heheda12345)
2eeab7b  Merge branch 'main' of github.com:vllm-project/vllm into hide_kv_cache  (heheda12345)
7a3c154  fix bug  (heheda12345)
9f47942  update comment  (heheda12345)
4418608  update format  (heheda12345)
3590e55  remove unused code  (heheda12345)
beb0dee  layers->attn_layers  (heheda12345)
ffe8cdd  update test  (heheda12345)
2cb84f2  support pp virtual engine  (heheda12345)
76712f8  fix  (heheda12345)
bab3bea  revert unrelated change  (heheda12345)
10f5353  fix test  (heheda12345)
745e196  Merge branch 'main' of github.com:vllm-project/vllm into hide_kv_cache  (heheda12345)
dcded5b  fork new process for v1 test  (heheda12345)
1173d20  Merge branch 'main' of github.com:vllm-project/vllm into hide_kv_cache  (heheda12345)
616b36c  fix bug in cpu test  (heheda12345)
ff088e1  fix bug in cpu test  (heheda12345)
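For orientation, here is a minimal sketch of the idea in the PR title, inferred from the title and the "move kv_cache to Attention" commit: the KV cache is owned by the attention layer as a module attribute rather than passed through `forward()`, so the cache tensor stays behind the `torch.compile` boundary instead of appearing as a graph input. All names below (`SimpleAttention`, `max_len`, `pos`) are illustrative, not vLLM's actual API.

```python
import torch
from torch import nn


class SimpleAttention(nn.Module):
    """Toy single-head attention layer that owns its KV cache."""

    def __init__(self, max_len: int, head_dim: int):
        super().__init__()
        # Allocated once and mutated in place. Because these are module
        # attributes rather than forward() arguments, a compiled graph
        # closes over them; they never cross the graph boundary.
        self.k_cache = torch.zeros(max_len, head_dim)
        self.v_cache = torch.zeros(max_len, head_dim)

    def forward(self, q: torch.Tensor, k: torch.Tensor,
                v: torch.Tensor, pos: int) -> torch.Tensor:
        # Write the new key/value into the hidden cache in place.
        self.k_cache[pos] = k
        self.v_cache[pos] = v
        keys = self.k_cache[: pos + 1]                 # (pos+1, head_dim)
        vals = self.v_cache[: pos + 1]
        scores = (keys @ q) / (q.shape[-1] ** 0.5)     # (pos+1,)
        return torch.softmax(scores, dim=0) @ vals     # (head_dim,)


layer = torch.compile(SimpleAttention(max_len=16, head_dim=8))
q = k = v = torch.randn(8)
out = layer(q, k, v, pos=0)
```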
Conversations
With fork_new_process_for_each_test, os.fork() results in RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method.
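A minimal sketch of this failure mode and the workaround the error message suggests, outside vLLM's test helper and assuming a CUDA device is available: once the parent process has initialized CUDA, a fork-started child cannot use CUDA, while a spawn-started child runs a fresh interpreter and can.

```python
import multiprocessing as mp

import torch


def worker() -> None:
    # Any CUDA call in the child initializes a fresh CUDA context.
    print(torch.ones(1, device="cuda"))


if __name__ == "__main__":
    torch.cuda.init()  # parent touches CUDA first
    # With the "fork" start method this child would fail with
    # "RuntimeError: Cannot re-initialize CUDA in forked subprocess";
    # "spawn" starts a fresh interpreter and avoids the problem.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=worker)
    p.start()
    p.join()
```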
@zkf-qwj sorry, how is it related to this PR?
@youkaichao fork_new_process_for_each_test() uses os.fork() to create the process, and os.fork() raises the CUDA initialization error. Thanks.
We use fork_new_process_for_each_test before initializing CUDA.
@youkaichao CUDA here is lazily initialized.
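The point of contention, sketched: because PyTorch initializes CUDA lazily, a fork performed before the first CUDA call is safe; only a fork after initialization leaves the child in a "bad fork" state. A POSIX-only illustration, assuming a CUDA device:

```python
import os

import torch

# Fork BEFORE any CUDA call: the parent holds no CUDA context yet,
# so the child may lazily initialize CUDA on its own.
pid = os.fork()
if pid == 0:
    print(torch.zeros(1, device="cuda"))  # works: fresh lazy init
    os._exit(0)
os.waitpid(pid, 0)
```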

Are you using non-NVML devices like Jetson? The common code path is vllm/vllm/platforms/cuda.py, line 318 (at e584b85).
In PyTorch:

torch/cuda/__init__.py:

```python
def _lazy_init():
    global _initialized, _queued_calls
    if is_initialized() or hasattr(_tls, "is_initializing"):
        return

def is_initialized():
    r"""Returns whether PyTorch's CUDA state has been initialized."""
    return _initialized and not _is_in_bad_fork()

_is_in_bad_fork = getattr(torch._C, "_cuda_isInBadFork", lambda: False)
```

torch/csrc/cuda/Module.cpp:

```cpp
static void forked_child() {
  in_bad_fork = true;
  torch::utils::set_requires_cuda_init(true);
}
```
os.fork() triggers forked_child(), so in_bad_fork becomes true and any subsequent CUDA use in the child raises the RuntimeError above.
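A small demonstration of that mechanism, POSIX-only and assuming a CUDA device: after initializing CUDA in the parent and forking, the child reports is_initialized() as False because _is_in_bad_fork() is now true, and any CUDA call fails.

```python
import os

import torch

torch.cuda.init()  # eagerly initialize CUDA in the parent
pid = os.fork()
if pid == 0:
    # forked_child() has run in the child: _is_in_bad_fork() is true,
    # so is_initialized() is False despite the parent's initialization.
    print(torch.cuda.is_initialized())  # False
    try:
        torch.zeros(1, device="cuda")
    except RuntimeError as e:
        print(e)  # "Cannot re-initialize CUDA in forked subprocess..."
    os._exit(0)
os.waitpid(pid, 0)
```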