Implementing Echo in OpenAI endpoint #201

Closed
andreamad8 opened this issue Jun 22, 2023 · 4 comments

@andreamad8

Maybe not too urgent, but it would be nice to have echo in the OpenAI interface; it would facilitate scoring (e.g., on QA datasets).
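
For reference, this is roughly how echo-based scoring looks against the OpenAI completions API (assuming the pre-1.0 openai Python client; the model name and prompts are only placeholders): with echo=True, logprobs enabled, and max_tokens=0, the response carries a log-probability for every prompt token, which is all a QA-scoring loop needs.

```python
import openai

def score(prompt: str, model: str = "text-davinci-003") -> float:
    """Sum of prompt-token logprobs, obtained purely via echo."""
    resp = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=0,  # generate nothing, just echo the prompt back
        echo=True,     # include the prompt tokens in the response
        logprobs=0,    # attach per-token log-probabilities
    )
    token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
    # The first token has no preceding context, so its entry is None.
    return sum(lp for lp in token_logprobs if lp is not None)

# Rank two candidate answers for a QA item by likelihood under the model.
print(score("Q: What is the capital of France?\nA: Paris"))
print(score("Q: What is the capital of France?\nA: Lyon"))
```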

@zhuohan123
Member

As you mentioned, the main blocker for adding echo is letting the vLLM engine also compute the logits for the prompt tokens. This is in our plan. However, feel free to contribute; I believe this would be a very good issue for getting familiar with vLLM and understanding its structure better.
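
Not vLLM's code, but a minimal sketch in plain PyTorch/transformers of the quantity the engine would have to expose: the prefill forward pass already yields logits at every prompt position, so the prompt-token log-probabilities fall out with a one-position shift and a gather.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: compute prompt-token logprobs from a single forward pass.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt_ids = tok("The capital of France is Paris", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(prompt_ids).logits                   # [1, seq_len, vocab]

# Logits at position i predict token i+1, so shift both tensors by one.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
prompt_logprobs = log_probs.gather(
    -1, prompt_ids[:, 1:].unsqueeze(-1)
).squeeze(-1)                                           # [1, seq_len - 1]

# The first prompt token has no preceding context, hence no logprob.
print(prompt_logprobs.sum().item())
```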

@matheper

matheper commented Aug 4, 2023

Hi guys,

I'm also interested in having the echo feature for my use case, and I'd love to try contributing to this issue. I've been using vLLM and it's been great so far!
I'd appreciate any suggestions on how to get started. I noticed the model forward pass has everything needed to compute the logits for the prompt tokens, but I'm not sure how to wire that into the engine. Is the idea to also process the prompt tokens one at a time, or to return their logprobs alongside the first sampled token?
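
To make the second option concrete, here is a hypothetical output shape (all names made up, not vLLM's actual classes): the prompt logprobs computed during prefill are attached once to the first decoding step's result, so nothing has to be recomputed token by token afterwards.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StepOutput:
    """One decoding step; names are illustrative, not vLLM's real API."""
    new_token_id: int
    new_token_logprob: float
    # Populated only on the first step: one entry per prompt token after the first.
    prompt_logprobs: Optional[List[float]] = None

# The first step carries the echoed prompt logprobs from the prefill pass...
first_step = StepOutput(new_token_id=13, new_token_logprob=-0.7,
                        prompt_logprobs=[-2.1, -0.4, -1.3])
# ...and subsequent steps look exactly as they do today.
later_step = StepOutput(new_token_id=198, new_token_logprob=-0.2)
```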

@winglian
Contributor

> As you mentioned, the main blocker for adding echo is letting the vLLM engine also compute the logits for the prompt tokens. This is in our plan. However, feel free to contribute; I believe this would be a very good issue for getting familiar with vLLM and understanding its structure better.

@zhuohan123 could you point me in the right direction of where to look to start implementing this feature?

@hmellor
Collaborator

hmellor commented Feb 2, 2024

@andreamad8 looks like this was solved in #1504, so this issue can be closed 😄
