[Usage]: how to return logits #9784

Open · 1 task done
psh0628-eng opened this issue Oct 29, 2024 · 0 comments
Labels: usage (How to use vllm)

Your current environment

Collecting environment information...
WARNING 10-29 04:40:34 cuda.py:22] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
Traceback (most recent call last):
  File "/workspace/collect_env.py", line 743, in <module>
    main()
  File "/workspace/collect_env.py", line 722, in main
    output = get_pretty_env_info()
  File "/workspace/collect_env.py", line 717, in get_pretty_env_info
    return pretty_str(get_env_info())
  File "/workspace/collect_env.py", line 549, in get_env_info
    vllm_version = get_vllm_version()
  File "/workspace/collect_env.py", line 270, in get_vllm_version
    from vllm import __version__, __version_tuple__
ImportError: cannot import name '__version_tuple__' from 'vllm' (/workspace/code/vllm/vllm/__init__.py)

The environment collection script fails as shown above, but I am using vLLM v0.6.0.

How would you like to use vllm

I am trying to return the logits as part of the output, something like:

{"text":["this is new vllm with logits bla bla....."],"logits":[1.0,2.0]}

I found that each model (e.g. llama.py) computes and returns the logits in its compute_logits function, and that LocalOrDistributedWorkerBase::execute_model in worker_base.py receives them via output = self.model_runner.execute_model(...).

However, I lost track of how that output is propagated back into RequestOutput so it can be printed as shown above. I would appreciate any help. Thanks.
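To be explicit about what I mean by logits: the raw, pre-softmax scores over the vocabulary that compute_logits produces internally, i.e. the same kind of values plain transformers exposes as outputs.logits. A rough sketch of that outside of vLLM (model name again just an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-125m"  # example model only
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("this is new vllm with", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.logits has shape [batch, seq_len, vocab_size]; the last position
# holds the raw scores for the next token -- the values I would like vLLM
# to carry through to RequestOutput.
next_token_logits = outputs.logits[0, -1]
print(next_token_logits.shape)
```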

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.