
fix: do not print perf stat when NaN #1375


Merged
merged 1 commit into pytorch:main on Nov 15, 2024

Conversation

leseb
Contributor

@leseb leseb commented Nov 14, 2024

b415326 fix: do not print perf stat when NaN

commit b415326
Author: Sébastien Han seb@redhat.com
Date: Thu Nov 14 11:04:47 2024 +0100

fix: do not print perf stat when NaN

If the chat is exited or interrupted, it will still print the stats with
NaN values, which is unnecessary.

Signed-off-by: Sébastien Han <seb@redhat.com>


pytorch-bot bot commented Nov 14, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1375

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEVs

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ No Failures

As of commit b415326 with merge base 4697764 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Nov 14, 2024
@Jack-Khuu
Contributor

PR looks good, but can you share an example of when this would get triggered (i.e. when are we seeing NaN via manually kill)?

@leseb
Contributor Author

leseb commented Nov 15, 2024

PR looks good, but can you share an example of when this would get triggered (i.e. when are we seeing NaN via manually kill)?

$ python3.10 torchchat.py chat llama3.1 
NumExpr defaulting to 12 threads.
PyTorch version 2.6.0.dev20241002 available.
lm_eval is not installed, GPTQ may not be usable
Using device=mps 
Loading model...
Time to load model: 15.06 seconds
-----------------------------------------------------------
Starting Interactive Chat
Entering Chat Mode. Will continue chatting back and forth with the language model until the models max context length of 8192 tokens is hit or until the user says /bye
Do you want to enter a system prompt? Enter y for yes and anything else for no. 

User: /bye
Exiting Chat.


      Average tokens/sec (total): nan                 
Average tokens/sec (first token): nan                 
Average tokens/sec (next tokens): nan 

@Jack-Khuu Jack-Khuu merged commit d7b681a into pytorch:main Nov 15, 2024
52 checks passed
vmpuri pushed a commit that referenced this pull request Feb 4, 2025