
[Performance]: Move hash_request_tokens computation from input request threads #21247

@Jialin

Description


Proposal to improve performance

Currently, hash_request_tokens executes in the engine core to compute per-block hashes from the request token IDs (plus LoRA IDs, multi-modal tokens, etc.). Because it runs on the engine core thread, this computation sits directly on the inference critical path and becomes a hard blocker.
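
For context, here is a minimal sketch of the kind of chained per-block hashing this refers to. It is simplified: the actual hash_request_tokens also mixes in LoRA and multi-modal metadata, and the block size and hash function below are placeholders, not vLLM's real configuration.

```python
import hashlib
from typing import Optional

BLOCK_SIZE = 16  # placeholder; the real block size comes from vLLM's cache config

def hash_block(parent_hash: Optional[bytes], block_tokens: list[int]) -> bytes:
    """Hash one full block of token IDs, chained to the parent block's hash
    so a block's identity depends on its entire prefix."""
    h = hashlib.sha256()
    if parent_hash is not None:
        h.update(parent_hash)
    for t in block_tokens:
        h.update(t.to_bytes(8, "little"))
    return h.digest()

def hash_request_tokens(token_ids: list[int]) -> list[bytes]:
    """Compute one chained hash per full block of the request's tokens;
    a trailing partial block is not hashed."""
    hashes: list[bytes] = []
    parent: Optional[bytes] = None
    n_full = len(token_ids) - len(token_ids) % BLOCK_SIZE
    for start in range(0, n_full, BLOCK_SIZE):
        parent = hash_block(parent, token_ids[start:start + BLOCK_SIZE])
        hashes.append(parent)
    return hashes
```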

As shown in the following chart, for a small model (opt-125m) at QPS 200 (input=700, output=1), a noticeable share of engine core time is spent computing these hashes.
[Chart: engine core profiling trace showing time spent in hash_request_tokens]

Ideally, all of the metadata needed to compute the hashes is already available when the data is received on the input_socket processing thread, which runs in parallel with the engine core thread. Computing the hashes there would move them out of the critical path, as shown in the following chart and the sketch after it.
[Chart: proposed design with hash computation moved to the input processing thread]
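
A rough sketch of the proposed restructuring, assuming a dedicated input thread that feeds the engine core through a queue. The names InputRequest, input_queue, and engine_queue are illustrative rather than vLLM's actual API, and hash_request_tokens is the simplified version from the sketch above.

```python
import queue
import threading
from dataclasses import dataclass, field

@dataclass
class InputRequest:  # illustrative stand-in for the decoded request
    token_ids: list[int]
    block_hashes: list[bytes] = field(default_factory=list)

input_queue: "queue.Queue[InputRequest]" = queue.Queue()   # fed by input_socket
engine_queue: "queue.Queue[InputRequest]" = queue.Queue()  # consumed by engine core

def input_processing_loop() -> None:
    """Runs in parallel with the engine core thread: after deserializing a
    request, compute its block hashes here so the engine core receives a
    ready-to-schedule request and never pays the hashing cost."""
    while True:
        req = input_queue.get()
        req.block_hashes = hash_request_tokens(req.token_ids)  # off the critical path
        engine_queue.put(req)

threading.Thread(target=input_processing_loop, daemon=True).start()
```

Since the token IDs, LoRA IDs, and multi-modal metadata are all present once the request is deserialized, the input thread has everything it needs; the engine core then only consumes requests whose hashes are already attached.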

Report of performance regression

N/A

Misc discussion on performance

N/A

Your current environment (if you think it is necessary)

The output of `python collect_env.py`

N/A

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
