Support Lora Adapter generated from mistral-finetune #546

Open

tensimixt opened this issue Jul 19, 2024 · 1 comment

tensimixt commented Jul 19, 2024

Feature request

Recent Mistral models, including Mistral 7B Instruct v0.3, ship a consolidated.safetensors whose weight key names differ from what LoRAx expects. There are also keys such as lm_head, embed_tokens, layernorm, and postattention_layernorm that vLLM finds difficult to deal with.

Could you implement an update so that a user who has generated a LoRA safetensors file with mistral-finetune can load it directly as a LoRA adapter into LoRAx and have it just work, instead of first having to map the weights to a different key-name convention and figure out how to handle unfamiliar keys such as layernorm and postattention_layernorm?
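
For context, here is a minimal sketch of the kind of manual remapping users currently have to do. The source key pattern (e.g. `layers.N.attention.wq.lora_A.weight`) and the target HF/PEFT-style names (e.g. `base_model.model.model.layers.N.self_attn.q_proj.lora_A.weight`) are assumptions about the two conventions rather than a confirmed spec, and the `remap_key`/`convert` helpers are hypothetical:

```python
# Hypothetical sketch: remap mistral-finetune-style LoRA keys to HF/PEFT-style
# names. The key patterns below are assumptions, not a confirmed spec.
import re
from safetensors.torch import load_file, save_file

# Assumed mapping from consolidated-format module names to HF module names.
MODULE_MAP = {
    "attention.wq": "self_attn.q_proj",
    "attention.wk": "self_attn.k_proj",
    "attention.wv": "self_attn.v_proj",
    "attention.wo": "self_attn.o_proj",
    "feed_forward.w1": "mlp.gate_proj",
    "feed_forward.w2": "mlp.down_proj",
    "feed_forward.w3": "mlp.up_proj",
}

def remap_key(key: str):
    """Return an HF/PEFT-style key, or None for keys to drop (norms, embeddings, lm_head)."""
    m = re.match(r"layers\.(\d+)\.(.+)\.(lora_A|lora_B)\.weight", key)
    if m is None:
        return None
    layer, module, lora_part = m.groups()
    if module not in MODULE_MAP:
        return None  # skip modules LoRA is not applied to in the target convention
    return (f"base_model.model.model.layers.{layer}."
            f"{MODULE_MAP[module]}.{lora_part}.weight")

def convert(src_path: str, dst_path: str) -> None:
    tensors = load_file(src_path)
    remapped = {}
    for key, tensor in tensors.items():
        new_key = remap_key(key)
        if new_key is not None:
            remapped[new_key] = tensor
    save_file(remapped, dst_path)

if __name__ == "__main__":
    convert("lora.safetensors", "adapter_model.safetensors")
```

Note that a script like this simply drops the norm and embedding keys mentioned above, which is exactly the guesswork that native support in LoRAx would remove.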

Motivation

mistral-finetune will become widely used, so users who have generated LoRA safetensors with it should be able to simply plug-and-play their LoRA adapters into LoRAx.

Your contribution

I am happy to provide a LoRA safetensors file if needed, to help you understand the problem better.

@vgkavayah
Is an update available for this issue?
