[FEATURE] Enables offline /score for embedding models #12021
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Force-pushed from 96484b7 to d37339e.
@maxdebayser @gmarinho2 This looks like it only touches the offline entrypoint, but the PR title mentions `/score`. It's not 100% clear to me from the linked issue either what was intended. Is there more work planned to support the online interface, or are we only aiming for offline?
@joerunde, we're aiming for both. @gmarinho2 started with the offline API first.
This pull request has merge conflicts that must be resolved before it can be merged.
I've left some suggestions, but it looks good to me. I think we can open this as a PR now.
Some initial comments.
I also suggest splitting out the logic for scoring and general embedding models into separate functions.
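For illustration, here is a rough standalone sketch of what the embedding-model scoring path could look like once split out. The function names and the cosine-similarity choice are assumptions for the sketch, not the PR's actual code:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two pooled embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def embedding_score(embed_fn, texts_1: list[str], texts_2: list[str]) -> list[float]:
    """Score each (texts_1[i], texts_2[i]) pair with a plain embedding model:
    embed both sides independently, then compare the vectors. A cross-encoder
    path would instead score the concatenated pair directly, which is why
    keeping the two code paths in separate functions is cleaner."""
    embs_1 = [embed_fn(t) for t in texts_1]
    embs_2 = [embed_fn(t) for t in texts_2]
    return [cosine_similarity(e1, e2) for e1, e2 in zip(embs_1, embs_2)]
```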
Have you addressed this?
vllm/entrypoints/llm.py (outdated)
```diff
@@ -1032,6 +1133,7 @@ def score(
             A list of ``ScoringRequestOutput`` objects containing the
             generated scores in the same order as the input prompts.
         """
+
```
Avoid unnecessary line changes
I still see this line in the PR. Never mind, the line is for `_embedding_score`. Let's just merge this.
Added the …
Sorry for the confusion, looks good now.
[FEATURE] Enables offline /score for embedding models (#12021)
Enables LLM.score() for all embedding models. The request_id of a scored pair is formed from the request_ids of the two embeddings in the pair, joined by "_". The prompt_token_ids are the concatenation of both prompts' token ids, in order, separated by the padding token when one is available. This PR is the first of two for completing the issue; the second PR will implement the same feature in the OpenAI API.
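As an illustration of the bookkeeping described above, here is a small sketch. The helper names are hypothetical and the logic is inferred only from this description, not taken from the PR's actual code:

```python
# Hypothetical helpers illustrating the request_id / prompt_token_ids
# construction described in the PR summary.
def join_request_ids(id_1: str, id_2: str) -> str:
    # The pair's request_id is the two embedding request_ids joined by "_".
    return f"{id_1}_{id_2}"

def join_prompt_token_ids(
    ids_1: list[int], ids_2: list[int], pad_token_id: int | None
) -> list[int]:
    # Concatenate both prompts' token ids in order, inserting the padding
    # token between them when the tokenizer defines one.
    if pad_token_id is None:
        return ids_1 + ids_2
    return ids_1 + [pad_token_id] + ids_2
```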
Issue: [Feature]: Enable /score endpoint for all embedding models (1/2)
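For context, a minimal offline usage sketch of what this PR enables. The model name is only an example embedding model, and the exact output fields may vary across vLLM versions:

```python
from vllm import LLM

# With this change, score() works for plain embedding models,
# not only cross-encoders.
llm = LLM(model="BAAI/bge-base-en-v1.5", task="embed")

# Score one query against two candidate texts.
outputs = llm.score(
    "What is the capital of France?",
    ["Paris is the capital of France.", "The Eiffel Tower is in Paris."],
)
for out in outputs:
    print(out.outputs.score)
```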