
Conversation

Frapschen (Contributor)

What type of PR is this?
/kind feature

What this PR does / why we need it:
Support vLLM cache salting in the prefix-aware scorer.

Which issue(s) this PR fixes:
Fixes #1631

Does this PR introduce a user-facing change?:

Support vLLM cache salting in the prefix-aware scorer.
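For context, here is a minimal sketch in Go of what cache salting means for a prefix-aware scorer. The function name prefixKey and the single FNV hash are illustrative assumptions, not the PR's actual implementation (the real scorer may hash fixed-size prompt blocks); the point is that the salt is mixed into the cache key, so identical prompts with different salts never match each other's cached prefixes.

package main

import (
	"fmt"
	"hash/fnv"
)

// prefixKey mixes the model name, the optional cache salt, and a prompt
// prefix into one cache-lookup key. Hypothetical sketch: the actual scorer
// may hash fixed-size prompt blocks rather than the whole prefix at once.
func prefixKey(model, cacheSalt, prefix string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(model))
	if cacheSalt != "" {
		// Mixing the salt in isolates tenants: the same prompt with a
		// different salt yields a different key, mirroring vLLM's
		// cache_salt semantics.
		h.Write([]byte(cacheSalt))
	}
	h.Write([]byte(prefix))
	return h.Sum64()
}

func main() {
	p := "Write as if you were a critic: San Francisco"
	fmt.Println(prefixKey("food-review-1", "", p))
	fmt.Println(prefixKey("food-review-1", "some-salt", p)) // different key
}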

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 25, 2025

netlify bot commented Sep 25, 2025

Deploy Preview for gateway-api-inference-extension ready!

🔨 Latest commit: 2f5ac56
🔍 Latest deploy log: https://app.netlify.com/projects/gateway-api-inference-extension/deploys/68d64841ee54ec000985df0d
😎 Deploy Preview: https://deploy-preview-1646--gateway-api-inference-extension.netlify.app

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 25, 2025
@k8s-ci-robot (Contributor)

Hi @Frapschen. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 25, 2025
@nirrozenbaum (Contributor)

/cc @liu-cong

@liu-cong (Contributor) left a comment


Thanks, mostly nits; I just want to make sure the parsing aligns with the vLLM API.

// Prompt is the prompt that was sent in the request body.
Prompt string `json:"prompt,omitempty"`
// CacheSalt is the salt value for vLLM's cache-salting security feature.
CacheSalt string `json:"cache_salt,omitempty"`

Just to confirm, did you test with both completion and chat-completion requests against vLLM to make sure the parsing here works?

Frapschen (Contributor, Author)

Yes. I checked the request body definitions in vLLM:

Both of them accept cache_salt, so I sent the following curl requests.

For completion:

curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
"model": "food-review-1",
"prompt": "Write as if you were a critic: San Francisco",
"max_tokens": 100,
"cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ==",
"temperature": 0
}'

Parse result:
[screenshot: the completion request body parsed with cache_salt set]

For chat completions:

curl -X POST -i ${IP}:${PORT}/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "food-review-1",
        "max_tokens": 100,
        "temperature": 0,
        "cache_salt": "Z3V2bmV3aGxza3ZubGFoZ3Zud3V3ZWZ2bmd0b3V2bnZmc2xpZ3RoZ2x2aQ==",
        "messages": [
          {
            "role": "developer",
            "content": "You are a helpful assistant."
          },
          {
            "role": "user",
            "content": "Linux is said to be an open source kernel because "
          }
        ]
  }'

Parse result:
[screenshot: the chat completion request body parsed with cache_salt set]
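Why both requests parse the same way: cache_salt sits at the top level of both vLLM request bodies, so a plain string field on each Go struct is all encoding/json needs. A minimal, self-contained sketch (struct names here are illustrative, not the PR's exact types):

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative request types; the JSON field names mirror the vLLM API,
// but the struct names are assumptions made for this sketch.
type completionRequest struct {
	Prompt    string `json:"prompt,omitempty"`
	CacheSalt string `json:"cache_salt,omitempty"`
}

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatCompletionRequest struct {
	Messages  []chatMessage `json:"messages,omitempty"`
	CacheSalt string        `json:"cache_salt,omitempty"`
}

func main() {
	comp := []byte(`{"prompt":"San Francisco","cache_salt":"abc"}`)
	chat := []byte(`{"messages":[{"role":"user","content":"hi"}],"cache_salt":"abc"}`)

	var c completionRequest
	var cc chatCompletionRequest
	if err := json.Unmarshal(comp, &c); err != nil {
		panic(err)
	}
	if err := json.Unmarshal(chat, &cc); err != nil {
		panic(err)
	}
	// Both bodies yield the same salt value.
	fmt.Println(c.CacheSalt, cc.CacheSalt) // abc abc
}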

Frapschen and others added 2 commits September 26, 2025 15:31
Co-authored-by: Cong Liu <conliu@google.com>
@Frapschen Frapschen requested a review from liu-cong September 26, 2025 08:04
@liu-cong (Contributor)

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 29, 2025
@liu-cong (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 29, 2025
@ahg-g (Contributor) commented Sep 29, 2025

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahg-g, Frapschen

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 29, 2025
@k8s-ci-robot k8s-ci-robot merged commit 198e6ca into kubernetes-sigs:main Sep 29, 2025
11 checks passed
@Frapschen Frapschen deleted the support-vLLM-cache-salting branch September 29, 2025 23:04