[Feature] Speculative Decoding #1738
In fact, we have already implemented the Medusa TreeMask version in LMDeploy. At batch size 1, the acceleration ratio and RPS improvement relative to the main branch are consistent with those reported in the blog. However, as the batch size increases, the overhead of the Medusa prefill becomes greater than the benefit of generating multiple tokens per iteration. We are currently working on solving this problem. Please stay tuned.
We also plan to open-source EAGLE support in the future.
@zhyncs I implemented EAGLE in vLLM and hit the same problem when the batch size increases. Meituan's solution introduces a novel sampling mechanism that leverages Thompson Sampling to regulate the generation process. Someone else used a trained control module (I forgot the source). Or, similar to vLLM's current approach, we can simply skip speculative decoding when the batch size exceeds a certain threshold (as sketched below). It's simple and effective, and the extra condition leaves room for future enhancements.
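For reference, a minimal sketch of that batch-size gate; the function names and the default threshold are placeholders for illustration, not actual vLLM or LMDeploy APIs:

```python
from typing import Callable, List, Sequence


def schedule_step(
    running_requests: Sequence[object],
    step_with_draft: Callable[[Sequence[object]], List[object]],
    step_without_draft: Callable[[Sequence[object]], List[object]],
    spec_batch_threshold: int = 8,  # assumed value; would need tuning per workload
) -> List[object]:
    """Run one engine step, disabling speculation for large batches."""
    if len(running_requests) > spec_batch_threshold:
        # At large batch sizes the draft/verify overhead outweighs the gain,
        # so fall back to ordinary one-token-per-request decoding.
        return step_without_draft(running_requests)
    return step_with_draft(running_requests)
```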
Thank you for sharing. In fact, this is how we currently do it internally as well, but the approach is still a bit rough. If we want speculative decoding to be enabled by default without users having to think about it, we also need to dynamically adjust the threshold based on the actual workload, which introduces a certain level of complexity. In actual usage, the acceptance rate of EAGLE is slightly higher than that of Medusa. The Thompson Sampling control mechanism has not yet been implemented in our production environment.
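As an illustration of that kind of dynamic adjustment, here is a toy controller that keeps a moving average of the measured benefit of speculation and disables it when it stops paying off. All names and the 1.0x break-even constant are assumptions, not a description of any existing implementation:

```python
import collections


class DynamicSpecGate:
    """Toy controller: track estimated speedup of speculative steps and
    turn speculation off when the moving average drops below break-even."""

    def __init__(self, window: int = 50, min_speedup: float = 1.0):
        self.samples = collections.deque(maxlen=window)
        self.min_speedup = min_speedup

    def record(self, accepted_tokens: int, spec_step_ms: float, plain_step_ms: float):
        # Estimated speedup of one speculative step versus emitting the same
        # number of tokens with ordinary decoding (plain_step_ms per token).
        self.samples.append(accepted_tokens * plain_step_ms / spec_step_ms)

    def use_speculation(self) -> bool:
        if not self.samples:
            return True  # no measurements yet: default to speculating
        return sum(self.samples) / len(self.samples) >= self.min_speedup
```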
Could you share the schedule? Or share the development branch so we can work on it together. Thanks!
@zhyncs Hey, could you let me know how things are going right now? Maybe there's something I can do to lend a hand? Appreciate it.
I will split the internal implementation of the TreeMask version into multiple PRs and then submit them. |
Thank you. Could you share the methods for solving the performance degradation when the batch size increases?
The overall design and detailed implementation were discussed with @lzhangzz earlier. There was an improvement at small batch sizes, but it didn't work well at large batch sizes. As far as I know, the performance achieved on vLLM is also similar.
EAGLE has a higher computational load than Medusa, but also a higher acceptance rate, so it performs better than Medusa at large batch sizes. However, this is still only a temporary solution. Trading more computation for reduced latency works because, at small batch sizes, computational resources are not fully utilized.
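As a rough back-of-envelope for that trade-off, the expected number of tokens emitted per verification step can be estimated from the acceptance rate. This is a sketch assuming an i.i.d. per-token acceptance probability and a chain-shaped draft (the formula from the original speculative decoding paper); tree drafts like Medusa/EAGLE differ in detail but follow the same intuition:

```python
def expected_tokens_per_step(alpha: float, gamma: int) -> float:
    """Expected tokens emitted per target-model step for a chain draft of
    length gamma with i.i.d. per-token acceptance probability alpha."""
    if alpha >= 1.0:
        return gamma + 1.0
    return (1.0 - alpha ** (gamma + 1)) / (1.0 - alpha)


# e.g. alpha=0.8, gamma=4 -> about 3.36 tokens per step.  The larger verify
# batch only pays off while the GPU still has idle FLOPs to absorb it, which
# is why the benefit shrinks as the serving batch size grows.
```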
How is the attention kernel chosen during the verification stage? As mentioned in the FlashInfer blog, the computational intensity of the append/verification stage sits between that of decode and prefill, so it doesn't seem optimal to use either the decode or the prefill kernel from the LMDeploy engine directly. In particular, when Q is relatively long, using the current prefill kernel for verification might not be the optimal approach.
@snippetzero The current implementation uses the prefill kernel in TurboMind. cc @lzhangzz
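For illustration only, a hypothetical dispatch along the lines discussed above; the kernel names and the query-length threshold are placeholders, not real TurboMind or FlashInfer symbols:

```python
def pick_attention_kernel(query_len: int) -> str:
    """Hypothetical kernel choice based on per-sequence query length."""
    if query_len == 1:
        return "decode_kernel"   # one query token per sequence: memory-bound
    if query_len <= 32:
        return "append_kernel"   # short draft/verify chunks: in-between intensity
    return "prefill_kernel"      # long query chunks: compute-bound
```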
Hi, it seems that I cannot find any code related to speculative decoding in LMDeploy. Has it not been pushed to the repository yet? If it has, could you provide the commit id or simply some keywords?
I don't think it's available yet, and I'm not sure if this can be prioritised right now. Maybe @lvhan028 can comment? It certainly seems very exciting based on vLLM's findings:
and also based on together.ai's findings:
No, it hasn't.
I know there are a few people monitoring this, so I just want to make sure that lvhan028's response is not interpreted as lack of interest in this feature. The LMDeploy team is interested in implementing speculative decoding! Do not lose faith.
Very exciting! Especially if compatible with the other key features (AWQ, prefix cache, quantized KV cache). I will be patient.
Some more/newer references relevant to this feature request:
- EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees
- DISCO: DynamIc SpeCulation lookahead Optimization
Note: I haven't read into the details, and I suspect these reported speedup ratios are measured under very ideal circumstances in terms of compute density / model size relative to GPU hardware, among other factors. But even if it only gave a 1.2x speedup (for example) in real-world circumstances, that would still be very useful!
Awesome
Motivation
Speculative decoding can speed up generation by more than 2x. This degree of speedup is an important feature for a production-grade LM deployment library, and the methods seem to be maturing enough to make their way into frameworks like TGI and vLLM, so it might be a good time for LMDeploy to consider adding support for a popular/established speculative decoding method.
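For readers new to the idea, here is a minimal, framework-agnostic sketch of the draft-and-verify loop. The `target_next_token` and `draft_next_token` callables are placeholders; real engines verify all draft positions in a single batched forward pass (which is where the speedup comes from) and use rejection sampling for non-greedy decoding:

```python
from typing import Callable, List


def speculative_decode_greedy(
    target_next_token: Callable[[List[int]], int],  # large target model (greedy)
    draft_next_token: Callable[[List[int]], int],   # small draft model (greedy)
    prompt: List[int],
    max_new_tokens: int = 64,
    draft_len: int = 4,
) -> List[int]:
    """Greedy draft-and-verify loop; output matches plain greedy decoding."""
    tokens = list(prompt)
    target_len = len(prompt) + max_new_tokens
    while len(tokens) < target_len:
        # 1) Propose a short continuation with the cheap draft model.
        draft: List[int] = []
        for _ in range(draft_len):
            draft.append(draft_next_token(tokens + draft))
        # 2) Verify against the target model: keep the longest matching prefix;
        #    at the first mismatch keep the target's own token instead.
        for tok in draft:
            target_tok = target_next_token(tokens)
            tokens.append(target_tok)
            if target_tok != tok or len(tokens) >= target_len:
                break
        else:
            # Every draft token matched, so the target also yields a bonus token.
            tokens.append(target_next_token(tokens))
    return tokens[:target_len]
```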
Related resources
Below is a copy-paste from a neat project called Spec-Bench. The ranking when running 33B models is similar. Please see the linked repo for the latest data.
Note that MLPSpeculator is not included in the benchmark since it is newer. Another new method that isn't included in Spec-Bench as of writing: