
[Performance]: vllm Eagle performance is worse than expected #9565

Open
@LiuXiaoxuanPKU

Description

Proposal to improve performance

The speculative decoding performance of EAGLE is worse than expected, as shown below:

Model: meta-llama/Meta-Llama-3.1-70B-Instruct
Draft model: yuhuili/EAGLE-LLaMA3-Instruct-70B
Hardware: 4xH100
Target model TP=4
Dataset: ShareGPT
vllm version: v0.6.1.post2
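
For reference, a setup like the one above can presumably be reproduced with something along these lines. The exact flags are assumptions based on vLLM's speculative-decoding options around v0.6.x and may differ between versions:

```shell
# Launch the target model with the EAGLE draft head (flags assumed from
# vLLM ~v0.6.x speculative decoding options; adjust to your version).
vllm serve meta-llama/Meta-Llama-3.1-70B-Instruct \
    --tensor-parallel-size 4 \
    --speculative-model yuhuili/EAGLE-LLaMA3-Instruct-70B \
    --num-speculative-tokens 5 \
    --use-v2-block-manager

# Then drive load at a fixed QPS with the bundled benchmark script:
python benchmarks/benchmark_serving.py \
    --model meta-llama/Meta-Llama-3.1-70B-Instruct \
    --dataset-name sharegpt \
    --request-rate 4
```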

[Benchmark plot (screenshot, 2024-10-21): latency vs. QPS for the configurations above]

Even at low QPS, the performance is far from the 2x speedup reported in the original EAGLE paper (the light blue line is the original baseline without SD; the solid lines are with SD). We need to understand the performance gap here. Possible reasons include, but are not limited to:

  1. Missing tree verification kernel: for each position, we choose the token only from the top-1 candidate instead of the top-k candidates, because the tree verification kernel has not been integrated yet.
  2. System overhead: unnecessary GPU/CPU communication somewhere.
  3. We are testing on the ShareGPT dataset, while the EAGLE heads were not finetuned on it.
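
A crude toy model of reason 1: with chain verification, a draft token at position i only counts if all earlier positions were accepted, so the expected number of accepted tokens per step decays geometrically with the per-position acceptance probability. The probabilities below are illustrative assumptions, not measured numbers; a top-k tree is modeled here simply as a higher per-position hit rate:

```python
def expected_accepted(p_accept, depth):
    """Expected accepted draft tokens per verification step when each
    position is accepted independently with probability p_accept and
    acceptance stops at the first rejection (chain verification)."""
    return sum(p_accept ** i for i in range(1, depth + 1))

# Illustrative, assumed probabilities (not measured): a top-1 chain
# proposal has a lower per-position hit rate than a top-k tree, which
# covers several candidate continuations at each position.
chain = expected_accepted(0.6, 5)  # top-1 chain only
tree = expected_accepted(0.8, 5)   # top-k tree verification (modeled)
print(f"chain: {chain:.2f} tokens/step, tree: {tree:.2f} tokens/step")
```

Even this simplistic model shows how verifying only the top-1 chain can meaningfully lower the accepted tokens per step, and hence the end-to-end speedup.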

Profiling is required to pinpoint the bottleneck. Opening this issue to track progress.

Report of performance regression

No response

Misc discussion on performance

No response

Your current environment (if you think it is necessary)

The output of `python collect_env.py`

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

Metadata

Assignees

No one assigned

    Labels

    performance (Performance-related issues)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
