I want to fine-tune MiniGPT-4 on a custom dataset with Llama-2 7b as the language model. I was able to successfully fine-tune the model with Vicuna on the same dataset.

I tried running the fine-tuning script with the stage 1 pretrained 7b checkpoint and Llama-2-chat-7b from Hugging Face. It seems there is a dimensionality mismatch between the stage 1 pretrained model and Llama-2, which makes me think the pretrained weights were for Vicuna rather than Llama-2.

If that is the case, could anybody provide me with the stage 1 pretrained weights for Llama-2-chat-7b?

Thanks
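For anyone hitting the same error, here is a minimal sketch for confirming the mismatch before filing anything further. It assumes the checkpoint is a standard PyTorch file with the weights stored under a "model" key, as MiniGPT-4 checkpoints typically are; the file name is a placeholder for whichever stage 1 checkpoint you downloaded:

```python
import torch

# Hypothetical file name; point this at the stage 1 checkpoint you downloaded.
ckpt = torch.load("stage1_pretrained_7b.pth", map_location="cpu")

# MiniGPT-4 checkpoints typically store the trained weights under a "model"
# key; fall back to the raw dict otherwise (assumption, adjust as needed).
state_dict = ckpt.get("model", ckpt)

# List every tensor and its shape so any size mismatch against the
# Llama-2-chat-7b weights is visible at a glance.
for name, tensor in state_dict.items():
    print(f"{name}: {tuple(tensor.shape)}")
```

If the embedding or projection shapes printed here differ from the corresponding shapes in the Llama-2-chat-7b `state_dict()`, that would support the suspicion that the checkpoint was trained against Vicuna (older Vicuna checkpoints add extra tokens, so shapes tied to the vocabulary can differ).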
Replies: 1 comment

Have you found the answer? I faced the same issue.