Description
Issues Converting a Hugging Face Model to GGUF Format
I encountered problems while attempting to convert a Hugging Face model to GGUF format on an Ubuntu system. Here is my environment:
- PyTorch: 2.2.1
- CUDA: 12.1.1
- Python: 3.11
I used the following commands to clone and prepare the llama.cpp project:
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip3 install -r requirements.txt
```
I successfully merged the safetensors files and saved the result as a PyTorch model (a rough sketch of that step is included after the error below). However, I ran into an issue when using the convert.py script to convert the model to GGUF with f16 precision. The command I used was:
```bash
python3 convert.py /hy-tmp/converted_model --outfile /hy-tmp/model.gguf --outtype f16
```
The conversion fails with the following error: `KeyError: 'transformer.h.0.attn.c_attn.bias'`
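For reference, the merge step looked roughly like this (a minimal sketch; the source directory and output file names are illustrative placeholders, not my exact paths):

```python
# Minimal sketch of merging safetensors shards into one PyTorch checkpoint.
# The directory and file names here are illustrative placeholders.
import glob

import torch
from safetensors.torch import load_file

state_dict = {}
for shard in sorted(glob.glob("/hy-tmp/original_model/*.safetensors")):
    # Each shard is a plain dict of tensor name -> tensor.
    state_dict.update(load_file(shard))

# Save the merged weights as a single PyTorch checkpoint.
torch.save(state_dict, "/hy-tmp/converted_model/pytorch_model.bin")
```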
Could anyone assist me in resolving this issue?
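In case it helps with diagnosis, I assume the next step is to check which tensor names the merged checkpoint actually contains, e.g. with something like this (the checkpoint file name is an assumption about my merged output):

```python
# Sketch: list the layer-0 tensor names in the merged checkpoint to see
# whether 'transformer.h.0.attn.c_attn.bias' is actually present.
# The checkpoint file name below is an assumption.
import torch

state_dict = torch.load("/hy-tmp/converted_model/pytorch_model.bin", map_location="cpu")
for name in sorted(state_dict):
    if name.startswith("transformer.h.0."):
        print(name)
```

If that key is missing from the output, I suspect my merged checkpoint uses a different tensor naming scheme than convert.py expects for this architecture.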