
[V1][Feat] Fail request if FSM fails to advance #18780


Open
atbe wants to merge 1 commit into main from fix-hanging-requests-when-fsm-fails-to-advance-in-xgrammar

Conversation


@atbe atbe commented May 27, 2025

Fix streaming requests hanging when structured output FSM fails to advance

Problem

When using structured outputs with the xgrammar backend, streaming requests would hang indefinitely if the FSM (Finite State Machine) failed to advance. This occurred when accept_tokens() returned False in the xgrammar backend, logging an error but not properly terminating the request.

Diagnosis

The issue was in the scheduler's update_from_output() method. When processing new tokens for structured output requests, the code called accept_tokens() but ignored its return value:

request.structured_output_request.grammar.accept_tokens(req_id, new_token_ids)

When the xgrammar FSM encountered an invalid token sequence, it would:

  1. Log an error: "Failed to advance FSM for request %s for tokens %s. Please file an issue."
  2. Return False from accept_tokens()
  3. Leave the FSM in an invalid state

Since the scheduler didn't check the return value, it continued processing as if nothing was wrong, causing the streaming response to hang indefinitely without sending a completion signal.

Solution

The fix checks the return value of accept_tokens() and properly terminates the request when it returns False:

if not request.structured_output_request.grammar.accept_tokens(req_id, new_token_ids):
    # Grammar FSM failed to advance - mark request as finished with error
    logger.error(
        "Structured output FSM failed to advance for request %s. "
        "Terminating request.", req_id)
    request.status = RequestStatus.FINISHED_ABORTED
    stopped = True
    self._free_request(request)

This ensures that:

  • The request is marked as FINISHED_ABORTED
  • Resources are properly freed
  • The streaming response terminates with finish_reason: "abort"
  • Clients receive a proper completion signal instead of hanging (a client-side sketch of this follows below)
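For illustration, here is a minimal client-side sketch (not part of the PR) of what a streaming caller should now observe. It assumes vLLM's OpenAI-compatible server and the openai Python client; the model name and the guided_json schema are placeholders.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Placeholder model and schema; the structured output constraint is passed
# through vLLM's guided_json extra-body parameter.
stream = client.completions.create(
    model="my-model",
    prompt="Return a JSON object:",
    stream=True,
    extra_body={"guided_json": {"type": "object"}},
)

for chunk in stream:
    finish_reason = chunk.choices[0].finish_reason
    if finish_reason is not None:
        # With this fix, a failed FSM advance surfaces as finish_reason "abort"
        # instead of the stream never completing.
        print("stream finished with:", finish_reason)
        break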


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which covers a small but essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label May 27, 2025
Signed-off-by: Ibrahim Ahmed <abeahmed2@gmail.com>
@atbe atbe force-pushed the fix-hanging-requests-when-fsm-fails-to-advance-in-xgrammar branch from 389a97c to 49f2024 on May 27, 2025 at 22:38
Collaborator

@aarnphm aarnphm left a comment


one tiny comment.

fwiw I think it is better to raise an exception and propagate it accordingly in the engine, but that is probably for another day.
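For context, a rough sketch of the exception-based alternative mentioned here might look like the following. The exception class and the engine-side handling are hypothetical, not existing vLLM APIs; this PR instead aborts the request directly in the scheduler.

class StructuredOutputError(RuntimeError):
    """Hypothetical error raised when the grammar FSM cannot accept new tokens."""


def advance_grammar(grammar, req_id: str, new_token_ids: list[int]) -> None:
    # Raise instead of silently aborting, letting the engine decide how to
    # surface the failure to the client.
    if not grammar.accept_tokens(req_id, new_token_ids):
        raise StructuredOutputError(
            f"FSM failed to advance for request {req_id}: {new_token_ids}")

# The engine loop would then catch StructuredOutputError and fail only the
# offending request, e.g. by aborting it and reporting an error finish reason.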

@@ -779,8 +779,15 @@ def update_from_output(
# NOTE: structured_output_request
# should not be None if use_structured_output, we have
# check above, so safe to ignore type warning
-    request.structured_output_request.grammar.accept_tokens( # type: ignore[union-attr]
-        req_id, new_token_ids)
+    if not request.structured_output_request.grammar.accept_tokens( # type: ignore[union-attr]
Collaborator

let's also add a note and create a bug for tracking this here.

Author

Here's an issue #18783

Author

how does that look @aarnphm

@cadedaniel
Collaborator

Could we add a test to this PR?

@aarnphm
Collaborator

aarnphm commented Jun 3, 2025

@cadedaniel for visibility https://vllm-dev.slack.com/archives/C07QQ8DAXMK/p1748388146836209

This happens in the case where "after a few hundred thousand requests are sent to the same instance".

For tests, I think we might be able to reproduce something if we send the same requests repeatedly? Might need some fine-tuning for this regression test.

@cadedaniel
Collaborator

I think one could mock the output of the model to be an invalid token wrt the grammar.
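As a starting point, a regression test along the lines of the suggestions above might look roughly like this pytest-style sketch. It only exercises the new code path in isolation by mocking accept_tokens() to return False; a real test would drive the scheduler (or a full engine with mocked model output) rather than this simplified stand-in, and would use the real RequestStatus enum instead of a string.

from unittest.mock import MagicMock


def test_request_aborted_when_fsm_fails_to_advance():
    grammar = MagicMock()
    grammar.accept_tokens.return_value = False  # simulate the FSM failing to advance

    request = MagicMock()
    request.structured_output_request.grammar = grammar

    # Mirrors the logic this PR adds to the scheduler's update_from_output().
    accepted = request.structured_output_request.grammar.accept_tokens(
        "req-0", [123])
    if not accepted:
        request.status = "FINISHED_ABORTED"

    grammar.accept_tokens.assert_called_once_with("req-0", [123])
    assert request.status == "FINISHED_ABORTED"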

@ekagra-ranjan
Contributor

I didn't understand why accept_tokens() would not accept a token that is valid per the grammar, when the grammar itself decides the mask and ensures the transition is valid. My understanding was that a valid token sampled with the bitmask is guaranteed to pass accept_tokens() without error.

Member

@njhill njhill left a comment


Thanks @atbe.

Agree with @cadedaniel that a test would be good.

Comment on lines +782 to +783
if not request.structured_output_request.grammar.accept_tokens( # type: ignore[union-attr]
req_id, new_token_ids):
Member

Suggest using a variable here to make things a bit clearer:

Suggested change
-    if not request.structured_output_request.grammar.accept_tokens( # type: ignore[union-attr]
-            req_id, new_token_ids):
+    accepted = request.structured_output_request.grammar.accept_tokens( # type: ignore[union-attr]
+        req_id, new_token_ids)
+    if not accepted:

russellb added a commit to russellb/vllm that referenced this pull request Jun 12, 2025
Closes vllm-project#19493
Closes vllm-project#18376
Related to vllm-project#18780

Several people have noticed errors when using both the `xgrammar` and
`guidance` backends where we would start generating invalid tokens for a
request and they would be continuously rejected by the backend currently
in use. The conditions seemed to be:

- Only impacts certain models
- Occurs with concurrent structured output requests

After further investigation once an easy way to reproduce was provided
via vllm-project#19493, I identified more details about the failure:

- When the failure occurred in my test using a concurrency of 2,
  whichever request came in first was always successful. It was the
  second request that would fail.

Debugging further identified that the bitmask was not being applied
correctly, but only for that second request. In the GPU model runner,
this translates to the 2nd row in the bitmask tensor and the 2nd row
of the logits tensor. I could see that a couple bytes were left
unmasked.

I suspect the reason the issue appears to be model specific has to do
with the vocab and what the tokens are that were left unmasked. I have
not verified this part for sure.

The reason it occurred with both structured output backends is because
we use the `xgrammar` library's implementation of applying the bitmask
in all cases.

Xgrammar on cuda, by default, uses a triton kernel for applying the
bitmask. I identified that by forcing it to use the `torch.compile`
implementation instead, the problem is resolved. The torch
implementation is used for all other accelerator types in Xgrammar's
logic, so it seems fine to just force the use of that implementation.

I have not yet narrowed down the problem in the triton kernel, but this
change works around it for vLLM.

We can move back to Xgrammar's wrapper that chooses which implementation
to use once we can verify everything is working properly again.

Signed-off-by: Russell Bryant <rbryant@redhat.com>
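For background on what "applying the bitmask" means in the commit above, here is a minimal pure-PyTorch reference sketch; it is not xgrammar's actual triton or torch.compile kernel, just an illustration of the packed-bitmask convention (bit j of int32 word i marks token 32 * i + j as allowed) and of banned logits being forced to -inf.

import torch


def apply_token_bitmask_reference(logits: torch.Tensor,
                                  bitmask: torch.Tensor) -> torch.Tensor:
    """Unoptimized reference for applying a packed token bitmask.

    logits:  (batch, vocab_size) float tensor.
    bitmask: (batch, ceil(vocab_size / 32)) int32 tensor; bit j of word i says
             whether token 32 * i + j is allowed (1) or banned (0).
    """
    batch, vocab_size = logits.shape
    # Unpack every 32-bit word into 32 boolean flags.
    shifts = torch.arange(32, device=bitmask.device, dtype=torch.int32)
    bits = (bitmask.unsqueeze(-1) >> shifts) & 1          # (batch, words, 32)
    allowed = bits.reshape(batch, -1)[:, :vocab_size].bool()
    # Banned tokens get -inf so they can never be sampled.
    return logits.masked_fill(~allowed, float("-inf"))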
@russellb
Member

PR related to the root cause of the problem that was occurring here: #19565

@aarnphm
Collaborator

aarnphm commented Jun 12, 2025

@russellb I think we should still have this, orthogonal to #19565. If the FSM fails to advance, we should gracefully fail the request, wdyt?

@aarnphm aarnphm changed the title Fail request if FSM fails to advance [V1][Feat] Fail request if FSM fails to advance Jun 12, 2025
@aarnphm aarnphm self-requested a review June 12, 2025 18:23
@russellb
Member

@russellb I think we should still have this, orthogonal to #19565. If the FSM fails to advance, we should gracefully fail the request, wdyt?

I agree that this change is still an improvement
