
Chainlit bonus material fixes #361

Merged: 9 commits merged into rasbt:main on Sep 18, 2024

Conversation

@d-kleine (Contributor) commented Sep 18, 2024

  • Moved tensors to the model's device
    • idx and the model were on different devices when using CUDA
  • Fixed a path
  • Fixed a README command
  • Added Chainlit's .files folder to .gitignore
    • The .files directory is used by Chainlit to store temporary files at runtime and should not be under version control
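The device-mismatch fix described above can be sketched roughly as follows; the model and variable names here are illustrative stand-ins, not the bonus material's actual code:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 8)  # stand-in for the GPT model
model.to(device)

idx = torch.randint(0, 8, (1, 4))  # encoded token IDs start out on the CPU
idx = idx.to(device)               # the fix: move them to the model's device

# With both on the same device, the forward pass no longer raises a
# cross-device RuntimeError when CUDA is available
out = model(torch.nn.functional.one_hot(idx, num_classes=8).float())
print(out.shape)  # torch.Size([1, 4, 8])
```

Without the `idx.to(device)` call, a CUDA-resident model would receive CPU tensors and fail with a device-mismatch error.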

@d-kleine (Contributor, Author) commented Sep 18, 2024

Please double-check that this works for you too 🙂

I think it would be good to add a welcome message so the user can see that the chatbot is working before they type; currently there is none. What do you think about also adding `with torch.no_grad():` to the inference code?

@d-kleine d-kleine marked this pull request as ready for review September 18, 2024 03:04
@rasbt rasbt changed the title ch05/06 fixes Chainlit bonus material fixes Sep 18, 2024
@rasbt (Owner) left a comment

Looks good to me, thanks for the PR! There was only one minor thing: the device movement can be simplified (the cloning and detaching shouldn't be necessary).
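The simplification being suggested looks roughly like this; the variable names are illustrative, and the "before" form is an assumption about the original fix:

```python
import torch

token_ids = torch.tensor([[15496, 11, 314]])  # stand-in for encoded input text
device = "cpu"

# Before (assumed original fix): explicit clone/detach before moving devices
idx = token_ids.clone().detach().to(device)

# After (the simplification): .to(device) alone suffices here, since the
# tensor is freshly created and not attached to any autograd graph
idx = token_ids.to(device)
print(idx.tolist())  # [[15496, 11, 314]]
```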

ch05/06_user_interface/app_orig.py (review thread, resolved)
ch05/06_user_interface/app_own.py (review thread, resolved)
@rasbt (Owner) commented Sep 18, 2024

> I think it would be good to add a welcome message for the user to see that the chatbot is working before typing

Thanks for the suggestion, but I'd rather keep things simple here.

> with torch.no_grad():

This is a good suggestion, but it's already handled inside the generate() function we call there.
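A minimal sketch of the point being made; this is not the book's actual generate() implementation, just an illustration that the no_grad context can live inside it so callers don't need to add it:

```python
import torch


def generate(model, idx, max_new_tokens):
    # Greedy decoding sketch; the key detail is that the forward pass is
    # already wrapped in torch.no_grad() here
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(idx)  # (batch, seq, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        idx = torch.cat([idx, next_id], dim=1)
    return idx


# Toy stand-in model: an embedding table used as "logits"
toy = torch.nn.Embedding(10, 10)
out = generate(toy, torch.tensor([[1, 2]]), max_new_tokens=3)
print(out.shape)  # torch.Size([1, 5])
```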

@d-kleine (Contributor, Author) commented Sep 18, 2024

> This is a good suggestion, but it's already handled in the def generate() function we are calling there.

Okay, I see. I think it would be helpful to add this info as a code comment, since it's not directly obvious from the code, and it would be consistent with your written code, which uses `with torch.no_grad():` when generating the output text.

@rasbt (Owner) commented Sep 18, 2024

Re: The warning you are getting without cloning and detaching

Oh interesting, I am not getting the warning. Maybe it's an older PyTorch version or a Windows thing?

@d-kleine (Contributor, Author) commented Sep 18, 2024

> Oh interesting, I am not getting the warning. Maybe it's an older PyTorch version or a Windows thing?

I ran the code on both Linux and Windows. I think this was the fix suggested by the error message, which is why I added it to the code. I just tested your changes: without clone().detach() it works for me too, without any warning:

2024-09-18 14:37:33 - Your app is available at http://localhost:8000
2024-09-18 14:37:35 - Translation file for de not found. Using default translation en-US.
2024-09-18 14:37:36 - Translated markdown file for de not found. Defaulting to chainlit.md.
2024-09-18 14:37:44 - Translation file for de not found. Using default translation en-US.

LGTM 🙂

@rasbt rasbt merged commit eefe4bf into rasbt:main Sep 18, 2024
8 checks passed