decapoda-research/llama-13b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' #310
Comments
When I try this:
The error becomes this:
I wonder where this comes from; I didn't specify it. Finally, I want to know: how can I serve my local vicuna-13B model? Is that supported?
Another question: I want to specify multiple local LoRAs when launching the server. How can I achieve this?
Yes, it's supported. You can specify additional parameters.
It seems this feature is already on the roadmap (LoRA blending): #57
Hi @sleepwalker2017, I think the issue with loading the adapter is the same as the issue in #311, which should be fixed by #317. There should be no issue with serving your local vicuna-13b model following this change, but let me know if you run into more issues. Regarding your second question: when you say you want to specify multiple local LoRAs, do you mean you wish to merge them together? If so, we support this, just not during initialization. We have some docs on LoRA merging here. But if you mean you want to be able to call different adapters for each request, you can do so by specifying the adapter_id parameter in each request.
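For reference, here is a minimal sketch of what such a per-request adapter call can look like over HTTP. The endpoint path and port are assumptions (a locally running server exposing a `/generate` route on port 8080); the payload shape mirrors the request format used later in this thread, and the adapter IDs are the ones mentioned below.

```python
# Minimal sketch: switching adapters per request against a locally running server.
# The URL/port and endpoint path are assumptions; the payload shape follows the
# request format used elsewhere in this thread.
import requests

def generate_with_adapter(prompt: str, adapter_id: str) -> dict:
    payload = {
        "inputs": prompt,
        "parameters": {
            "adapter_id": adapter_id,   # choose a different adapter for each request
            "max_new_tokens": 256,
            "top_p": 0.7,
        },
    }
    resp = requests.post("http://localhost:8080/generate", json=payload)
    resp.raise_for_status()
    return resp.json()

# Two requests against the same base model, each using a different adapter.
print(generate_with_adapter("Explain LoRA in one sentence.", "mattreid/alpaca-lora-13b"))
print(generate_with_adapter("Explain LoRA in one sentence.", "merror/llama_13b_lora_beauty"))
```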
Thank you, I checked this; the first issue has been solved. About the second question: I launch the server with this:

And then I send requests with different lora_ids like this:

```python
# Adapters cycled across requests (test_data and req_cnt are globals defined elsewhere).
adapters = ['mattreid/alpaca-lora-13b', 'merror/llama_13b_lora_beauty',
            'shibing624/llama-13b-belle-zh-lora', 'shibing624/ziya-llama-13b-medical-lora']

def build_request(output_len):
    global req_cnt
    idx = req_cnt % len(test_data)
    lora_id = idx % 4
    input_dict = {
        "inputs": test_data[idx],
        "parameters": {
            "adapter_id": adapters[lora_id],  # a different adapter per request
            "max_new_tokens": 256,
            "top_p": 0.7
        }
    }
    req_cnt += 1
    return input_dict
```

I want to know whether the adapter computation of these requests is merged.
When I use my code to benchmark lorax on two A30 GPUs, I get poor benchmark results. There must be something wrong. Please take a look, thank you.
Here is my code:
Here is the error message:
I need some help, thank you!
Why does it have to access huggingface.co even though I supplied a local folder path?
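As background on the local-path question, below is a small sketch of how local-only loading is typically enforced in the Hugging Face stack. This is a generic illustration, not this project's documented fix; the folder path is a placeholder.

```python
# Generic Hugging Face pattern for forcing local-only loading.
# The path below is a placeholder; adjust it to the actual local model folder.
import os

# Disallow network access to the Hugging Face Hub for this process.
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

local_path = "/path/to/vicuna-13b"  # placeholder local folder

# local_files_only=True raises an error instead of falling back to the Hub
# when anything is missing from the local folder.
tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(local_path, local_files_only=True)
```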