fix kl mismatch by resetting prefix cache #1650
Merged
Conversation
samsja
reviewed
Jan 24, 2026
Jackmin801
reviewed
Jan 24, 2026
Member
Jackmin801
left a comment
Good catch! Can we modify /update_weights and /reload_weights to do the reset instead of adding a new path?
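In sketch form, the suggestion amounts to something like this. This is a hedged example, not this repo's actual code: `engine` (a vLLM async engine client) and `apply_weight_update` are placeholder names for whatever `server.py` already has.

```python
from fastapi import APIRouter, Request

router = APIRouter()

# Placeholder for the existing weight-loading logic (assumed name).
async def apply_weight_update(payload: dict) -> None: ...

@router.post("/update_weights")
async def update_weights(request: Request):
    payload = await request.json()
    await apply_weight_update(payload)  # existing weight-update step (assumed)
    # `engine` stands in for the server's vLLM engine client; recent vLLM
    # engines expose reset_prefix_cache() to drop cached KV blocks that
    # were computed with the old weights.
    await engine.reset_prefix_cache()
    return {"status": "ok"}
```

The same reset would go at the end of `/reload_weights`, so every path that swaps weights also invalidates the cache.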
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Jackmin801
reviewed
Jan 24, 2026
```python
@router.post("/load_lora_adapter")
async def load_lora_adapter(request: Request):
```
Member
Can we use the BaseModel dataclass definition so the Swagger docs are correct? It's useful for debugging sometimes.
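A hedged sketch of what that could look like: declaring the request body as a Pydantic model lets FastAPI validate the payload and show the schema in the generated Swagger/OpenAPI docs. The field names mirror vLLM's LoRA loader (`lora_name`, `lora_path`), but treat the exact schema as an assumption.

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class LoadLoraAdapterRequest(BaseModel):
    # Field names mirror vLLM's LoRA loader request; assumed here.
    lora_name: str
    lora_path: str

@router.post("/load_lora_adapter")
async def load_lora_adapter(request: LoadLoraAdapterRequest):
    # With a typed body, the Swagger UI documents lora_name/lora_path
    # instead of an opaque Request object.
    return {"status": "ok", "lora_name": request.lora_name}
```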
samsja
approved these changes
Jan 24, 2026
samsja
pushed a commit
that referenced
this pull request
Jan 24, 2026
After bumping verifiers in #1572, the KL mismatch increased: Verifiers stopped returning prompt logprobs in PrimeIntellect-ai/verifiers#666, and without a prompt-logprobs request vLLM defaults to prefix caching, whose cache needs to be reset after weight updates for correctness.
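As a hedged illustration of the mechanism (the model name is a placeholder, and the exact placement of `reset_prefix_cache` on the engine object is an assumption, though recent vLLM releases do expose it):

```python
from vllm import LLM, SamplingParams

# Prefix caching enabled, as it is by default when prompt logprobs
# are not requested.
llm = LLM(model="Qwen/Qwen2.5-0.5B", enable_prefix_caching=True)
params = SamplingParams(max_tokens=16, logprobs=1)

prompts = ["<shared system prompt> ..."]
llm.generate(prompts, params)  # populates the prefix cache with KV blocks

# ... an in-place weight update happens here (e.g. an RL policy step) ...

# Without this reset, the next generate() reuses KV blocks computed with
# the OLD weights, so sampled tokens/logprobs drift from the trainer's
# policy (the KL mismatch this PR fixes).
llm.llm_engine.reset_prefix_cache()
llm.generate(prompts, params)
```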
Note
Ensures KV cache invalidation when model weights/adapters change to prevent stale-prefix usage.
- Resets the prefix cache in the `update_weights` and `reload_weights` RPCs in `server.py`
- Adds a `POST /load_lora_adapter` server endpoint wrapping vLLM's loader; on success it resets the prefix cache and returns `{status: ok}`; error responses are propagated
- Renames the endpoint to `/load_lora_adapter` (was `/v1/load_lora_adapter`); docstrings and comments updated

Written by Cursor Bugbot for commit 17539f9.
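For reference, a hedged usage sketch against the renamed endpoint; host, port, adapter name, and path are placeholders:

```python
import requests

resp = requests.post(
    "http://localhost:8000/load_lora_adapter",  # was /v1/load_lora_adapter
    json={"lora_name": "my-adapter", "lora_path": "/path/to/adapter"},
)
resp.raise_for_status()
# On success the server resets its prefix cache and returns {"status": "ok"}.
print(resp.json())
```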