
[V0][Bugfix] Fix Mamba cache crashing #15296


Open · wants to merge 3 commits into main

Conversation

@benchislett (Contributor) commented on Mar 21, 2025

When a request finishes but the scheduler has no further requests to schedule, the finished_req_ids are still cleared from the scheduler while model execution is skipped. The Mamba cache manager therefore never sees those IDs and cannot free the corresponding slots, so unavailable slots slowly accumulate until none are left.

This PR changes the behaviour of the LLM engine to only clear the finished_req_ids when there are scheduled requests to process.

FIX #13129
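
For readers unfamiliar with this code path, below is a minimal, self-contained sketch of the leak and of the fix. All names here (ToyScheduler, MambaSlotCache, engine_step) are hypothetical stand-ins for vLLM's scheduler, Mamba cache manager, and the engine step loop; get_and_reset_finished_requests_ids mirrors the scheduler helper the engine calls, but the snippet is illustrative, not the actual diff.

```python
# Hypothetical toy model of the leak and the fix; not vLLM's real classes.

class ToyScheduler:
    def __init__(self) -> None:
        self.waiting: list[str] = []            # requests not yet scheduled
        self.finished_req_ids: set[str] = set()

    def schedule(self) -> list[str]:
        scheduled, self.waiting = self.waiting, []
        return scheduled

    def get_and_reset_finished_requests_ids(self) -> list[str]:
        # Drains the finished IDs: once returned, they are gone.
        ids, self.finished_req_ids = list(self.finished_req_ids), set()
        return ids


class MambaSlotCache:
    """Fixed pool of state slots, one per in-flight request."""

    def __init__(self, num_slots: int) -> None:
        self.free_slots = list(range(num_slots))
        self.slot_of: dict[str, int] = {}

    def allocate(self, req_id: str) -> None:
        # Pops from the free list; raises IndexError once leaked slots
        # have exhausted the pool.
        self.slot_of[req_id] = self.free_slots.pop()

    def release(self, finished_ids: list[str]) -> None:
        for req_id in finished_ids:
            self.free_slots.append(self.slot_of.pop(req_id))


def engine_step(scheduler: ToyScheduler, cache: MambaSlotCache) -> None:
    scheduled = scheduler.schedule()
    # The fix: drain finished_req_ids only when the model will actually run,
    # so the cache sees every finished ID and can free its slot. Draining
    # them unconditionally while skipping execution (the old behaviour)
    # discarded the IDs and leaked the slots.
    if scheduled:
        finished = scheduler.get_and_reset_finished_requests_ids()
        cache.release(finished)
        # ... model execution for `scheduled` would happen here ...
```

Once every free slot has leaked, the next allocation pops from an empty free list, which matches the IndexError reported in the linked issue.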

Signed-off-by: Benjamin Chislett <chislett.ben@gmail.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@youkaichao requested a review from tlrmchlsmth on March 22, 2025 at 05:51
@youkaichao (Member) commented

@tlrmchlsmth is the expert on mamba

@tlrmchlsmth (Collaborator) left a comment

This looks good. It looks like the fix is already in async_llm_engine.py but not llm_engine.py, so please merge in latest main. (And please make sure the code is equivalent in the two files as well). Thank you!


mergify bot commented Apr 11, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @benchislett.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label on Apr 11, 2025
Signed-off-by: Benjamin Chislett <chislett.ben@gmail.com>
@benchislett (Contributor, Author) commented

@tlrmchlsmth conflict resolved, diff looks good, ready to merge.

mergify bot removed the needs-rebase label on Apr 11, 2025
@mgoin added the bug, ready, and v0 labels on Apr 11, 2025
@sssrijan-amazon (Contributor) commented

Is this change not going to be merged? Any updates?

@tlrmchlsmth (Collaborator) left a comment

@benchislett could you merge in latest main?

Labels
bug (Something isn't working) · ready (ONLY add when PR is ready to merge/full CI is needed) · v0
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Bug]: IndexError: pop from empty list For Jamba
5 participants