Only warn of rate-limits when using HF endpoint #58
Conversation
Since we now have the adaptor setting, I'd rather check its value than doing so with the URL. Wdyt?
Yes, good idea. I changed this to just check the adaptor value now.
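A minimal sketch of the approach discussed here, assuming a Rust helper in the spirit of llm-ls; the function name and the adaptor identifiers below are illustrative assumptions, not the actual llm-ls code:

```rust
// Sketch: gate the rate-limit warning on the configured adaptor value
// instead of pattern-matching the endpoint URL. The adaptor names used
// here ("huggingface", "tgi") are assumptions for illustration.
fn should_warn_rate_limit(adaptor: &str) -> bool {
    // Only requests sent through the Hugging Face Inference API adaptor
    // are subject to HF rate limits; self-hosted backends should not
    // trigger the warning.
    adaptor == "huggingface"
}

fn main() {
    assert!(should_warn_rate_limit("huggingface"));
    assert!(!should_warn_rate_limit("tgi"));
    println!("rate-limit warning gated on adaptor value");
}
```

Checking the adaptor value keeps the warning logic independent of how the endpoint URL is written (custom domains, ports, reverse proxies), which is the point raised in the review comment above.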
Small nits on naming, otherwise we should be good to go!
Co-authored-by: Luc Georges <McPatate@users.noreply.github.com>
I am trying the llm-vscode extension with llm-ls on a locally hosted endpoint (running a custom fine-tuned model); however, the extension still shows a warning that I might get rate-limited by HuggingFace.
Since inference doesn't run on a HuggingFace server, this warning is unnecessary.