[RFC]: Automate Speculative Decoding #4565
Comments
As much as I would love to take credit for bringing Speculative Decoding to vLLM, I'm relatively certain the praise belongs to @cadedaniel. 😁
It's indeed a good idea to make the speculative system smarter, so that it can automatically adjust to the serving load and serving data. Along the same direction, there is one more thing worth doing that is not mentioned here: dynamic candidate tree topology. This is a generalization of the dynamic speculation length mentioned above, and it will become possible once tree-based speculative decoding is enabled in vLLM. We are actively exploring this direction. Another good thing about it is that it is orthogonal to the roadmaps above and thus compatible with them, as you mentioned. It also falls under the title of this RFC, dynamic speculative decoding, so I mention it here to bring it to the community's attention, and I hope this implementation can become a contribution to vLLM in a next step. More specifically, in 1D sequential spec decoding, the spec_length can be set dynamically according to the predicted acceptance rate. In tree-style spec decoding, which is a generalization of the 1D case, the tree topology (including the tree size) can be set dynamically according to a vector of acceptance rates, and further speedups can then be expected.
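For reference, a rough way to quantify the 1D trade-off, under the common simplifying assumption of an i.i.d. per-token acceptance rate $\alpha$: the expected number of tokens emitted per verification step with speculation length $k$ is

$$\mathbb{E}[\text{tokens per step}] = \frac{1 - \alpha^{k+1}}{1 - \alpha},$$

so a dynamic controller would pick the $k$ (or, in the tree case, the topology) that best trades this quantity off against the cost of proposing and verifying the candidates.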
@LiuXiaoxuanPKU
Yes! In our research we also explored the idea of dynamically adjusting top-k for tree-style speculation. Our preliminary results are promising, but they are based on simulation. Once we have tree-style speculative decoding in vLLM, we can add that as well.
Thanks for the interest!
@LiuXiaoxuanPKU Thanks for your response, and best wishes to you as well!
@LiuXiaoxuanPKU It's great to know that vLLM is looking into tree-style 2D speculation. I'm actually developing an implementation of this tree-style 2D speculation that works on any tree topology, and similar to what you mentioned, my estimates also suggest the results will be promising. We can expect a further boost in this direction. When I finish the implementation, I would like to create a PR and integrate it into vLLM's speculation. Some updates: I just noticed that in #4669 there are now Medusa/Eagle/Hydra implementations. Tree-style speculation will be a good match for them.
We recently showed that even a relatively simple speculation lookahead controller can speed up the decoding.
Yes, speculative decoding leads to slowdowns if the accuracy is too low. Our proposed alternative (the DSI algorithm) is always faster than speculative decoding and never slower than traditional autoregression (nonspeculative). We proved it mathematically and provided supporting experiments. I'm open to collaborations to push it forward.
@LiuXiaoxuanPKU Thank you (and other co-workers) for the great work!
@LiuXiaoxuanPKU what is the status of this issue?
Amazing work! I would like to know whether lookahead decoding is on vLLM's roadmap, because it can speed up inference without the need for an additional draft model or any additional training.
@brotherchen more info here on lookahead decoding in vLLM. currently not being worked on as far as I know. |
@LiuXiaoxuanPKU Very interesting work! Our recent work on speculative decoding reveals that executing the draft model and the target model in parallel can achieve an adaptive draft length, which can significantly improve the speculative decoding performance. I would like to know whether the draft worker and the target worker can execute in parallel within vLLM?
Currently no. The draft model and target model are executed sequentially. I imagine asynchronous execution would be a big change for the current vLLM architecture, but any discussion and contribution is welcomed!
This hf blog looks great, can it be easily integrated into vllm? |
@LiuXiaoxuanPKU Amazing work! I have 2 questions regarding the implementation for
Motivation.
Speculative Decoding is a crucial feature for reducing latency, currently supported by vLLM (credit to @cadedaniel !). However, when deploying Speculative Decoding in real online LLM serving systems that use continuous batching, improvements are not always observed. Paradoxically, under conditions of high request rates or low speculation accuracy, latency may actually increase.
We propose to address these issues by intelligently determining the optimal speculation length for each request, ranging from zero (no speculation) to multiple tokens. This determination is based on the concept of goodput, which reflects the current observed load across the entire system, thus allowing for the most effective speculative execution.
The method is designed for versatility, compatible with various speculative decoding styles, from traditional, model-based approaches to model-free methods such as prompt lookup and tree-style decoding. This innovation builds on recent research by the vLLM team. We plan to release the detailed paper shortly.
Proposed Change.
Milestone 1: Implement a mechanism to disable speculative decoding (proposed length = verified length = 0), allowing users to manually decide when to cease speculative decoding. Based on prior empirical studies, we can initiate this process by monitoring the running_queue size. Speculative decoding will be suspended for incoming requests once the running_queue exceeds a predefined threshold. Cody will assist with this implementation, thanks @comaniac!
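A minimal sketch of what such a gate could look like (the threshold value, function name, and arguments here are illustrative assumptions, not actual vLLM internals):

```python
# Illustrative sketch for Milestone 1; names and wiring are assumptions,
# not the actual vLLM implementation.
RUNNING_QUEUE_THRESHOLD = 64  # empirically tuned cutoff

def proposal_len_for_step(num_running_requests: int, default_proposal_len: int) -> int:
    """Disable speculation (proposed length = 0) when the system is congested."""
    if num_running_requests >= RUNNING_QUEUE_THRESHOLD:
        return 0  # fall back to plain decoding for this step
    return default_proposal_len
```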
Milestone 2: Dynamically determine the proposed length for speculative decoding. We will utilize runtime information, such as batch size, in conjunction with profiled parameters like token acceptance rate and the comparative costs of running the draft versus the target model. This approach allows us to adjust the proposed length in real-time, optimizing performance based on current system conditions.
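As a rough illustration of the kind of calculation involved, here is a minimal sketch that assumes an i.i.d. per-token acceptance rate and offline-profiled per-step costs; the function and parameter names are illustrative rather than vLLM code, and batching effects on verification cost are ignored for brevity:

```python
def expected_generated_tokens(acceptance_rate: float, k: int) -> float:
    # Expected tokens emitted per verification step with proposal length k
    # (including the bonus token), under an i.i.d. acceptance model.
    if acceptance_rate >= 1.0:
        return float(k + 1)
    return (1.0 - acceptance_rate ** (k + 1)) / (1.0 - acceptance_rate)

def choose_proposal_len(acceptance_rate: float,
                        draft_cost_per_token: float,
                        target_cost_per_step: float,
                        max_k: int) -> int:
    # Pick the k that maximizes goodput = expected tokens / expected step time.
    best_k, best_goodput = 0, 0.0
    for k in range(max_k + 1):
        step_time = k * draft_cost_per_token + target_cost_per_step
        goodput = expected_generated_tokens(acceptance_rate, k) / step_time
        if goodput > best_goodput:
            best_k, best_goodput = k, goodput
    return best_k
```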
Milestone 3: Eliminate reliance on pre-profiled parameters and gather the necessary information directly from runtime. We will collect data such as the token acceptance rate and the execution times for both the draft and target models from previous steps. This data will then be integrated into the goodput calculation, allowing for a more dynamic and responsive system configuration.
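One possible shape for the runtime statistics, shown as a sketch using an exponential moving average; all field names, initial values, and the smoothing scheme are assumptions rather than the planned implementation:

```python
class SpecDecodeStats:
    """Online estimates of acceptance rate and step times via an EMA (sketch)."""

    def __init__(self, smoothing: float = 0.1):
        self.smoothing = smoothing
        self.acceptance_rate = 0.7        # prior until real observations arrive
        self.draft_time_per_token = 0.0
        self.target_time_per_step = 0.0

    def _ema(self, old: float, new: float) -> float:
        return (1.0 - self.smoothing) * old + self.smoothing * new

    def update(self, num_accepted: int, num_proposed: int,
               draft_time: float, target_time: float) -> None:
        # Fold the latest step's observations into the running estimates,
        # which would then feed the goodput calculation from Milestone 2.
        if num_proposed > 0:
            self.acceptance_rate = self._ema(
                self.acceptance_rate, num_accepted / num_proposed)
            self.draft_time_per_token = self._ema(
                self.draft_time_per_token, draft_time / num_proposed)
        self.target_time_per_step = self._ema(
            self.target_time_per_step, target_time)
```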
Feedback Period.
No response
CC List.
No response
Any Other Things.