
Importer for GPTQ quantized LLaMA models #301

Merged · 3 commits · Mar 21, 2023
Conversation

@comex (Contributor) commented Mar 19, 2023

Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa

Current status: Seems to be working now.

I was originally hoping to validate the results by matching the Python implementation's output exactly, but precision and non-associativity issues make this very difficult, including when performing matrix multiplications and, especially, computing norms.

Anyway, design details:

The models being imported store per-layer weights in essentially q4_1 format, although the addend and scale are shared across an entire row rather than every group of 32 weights. This script duplicates the addend and scale to match ggml's expectations, at the cost of wasting some memory.
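That duplication step can be sketched in numpy (a minimal illustration with made-up names and shapes, not the script's actual code): the single per-row scale and addend are tiled so that every 32-weight group carries its own copy, matching the q4_1 layout ggml expects.

```python
import numpy as np

GROUP = 32  # q4_1 quantizes weights in groups of 32

def expand_row_params(scales, addends, row_len):
    """Tile one scale/addend per row into one per 32-weight group.

    scales, addends: shape (n_rows,) -- one value per output row.
    Returns arrays of shape (n_rows, row_len // GROUP).
    """
    n_groups = row_len // GROUP
    return (np.repeat(scales[:, None], n_groups, axis=1),
            np.repeat(addends[:, None], n_groups, axis=1))

# Illustrative per-row parameters for two rows of 64 weights each.
scales = np.array([0.1, 0.2], dtype=np.float32)
addends = np.array([-1.0, -2.0], dtype=np.float32)
s, a = expand_row_params(scales, addends, row_len=64)
# Each row's single value is now duplicated across its two 32-weight groups.
```

The memory waste mentioned above comes exactly from this tiling: values that were stored once per row are now stored once per group.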

However, there are two differences which I accommodated by changing the output format (and adding corresponding support to main.cpp) rather than having the script match the existing one:

  • The tok_embeddings and output weights (i.e. the weights that aren't per-layer) are f16 instead of q4_1. They could be converted to q4_1, and the impact of the loss of precision would probably be low, but this would rule out exactly matching the Python implementation's output for validation.

  • There is no sharding, since the input doesn't have it, and for a CPU-only implementation it seems more useful to avoid having to deal with multiple files.

The new format is differentiated from existing q4_1 format by changing the 'f16' header flag to a new value, 4. That said, I think a cleaner approach would be to change main.cpp to support loading each tensor with an arbitrary sharding configuration and type rather than hardcoding specific combinations of types. So far I've wasted too much time debugging to try implementing this...
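A simplified reader for that flag might look like this (the layout below assumes the early llama.cpp header of a magic number followed by seven int32 hyperparameters ending in the f16/format flag; treat the exact field order and values as assumptions, not a spec):

```python
import struct

def read_format_flag(f):
    """Read the magic and the 'f16' format flag from a ggml model file.

    Assumed layout (early llama.cpp format, simplified): a 4-byte magic,
    then seven little-endian int32 hyperparameters, the last of which is
    the format flag: 0 = f32, 1 = f16, 2/3 = q4_0/q4_1, and 4 = this
    PR's GPTQ-derived variant (f16 non-layer weights, q4_1 layers).
    """
    magic, = struct.unpack("<i", f.read(4))
    hparams = struct.unpack("<7i", f.read(28))
    return magic, hparams[-1]

# Usage with an in-memory file and hypothetical header values:
import io
hdr = struct.pack("<8i", 0x67676D6C, 32000, 4096, 256, 32, 32, 128, 4)
magic, flag = read_format_flag(io.BytesIO(hdr))
# flag == 4 selects the new GPTQ-derived format
```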

From the initial [WIP, broken] revision of this PR:

Current status: Something is busted. The output starts out decent, but quickly degrades into gibberish. This doesn't happen with either the original GPTQ-for-LLaMa using the same weights, or with llama.cpp when using weights quantized by its own quantizer. Is there a bug in the conversion script that somehow only comes into play with a large context size?

I did notice one potential issue. It's clearly not the main cause of the gibberish, since it doesn't happen when using q4_1 weights quantized by llama.cpp itself, but it seems concerning. When doing a matrix multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when the multiplication is not done with BLAS, the intermediate results are stored in the smaller format rather than f32. This seems like an unnecessary waste of precision, especially in the q4_1 case.
@eous commented Mar 19, 2023

Dunno if it's the same thing, but when dealing with Hugging Face LLaMA models we had to unpermute the wq/wk attention layers:

w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
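If it helps to see that reshuffle concretely, here is a numpy sketch of the transform and its inverse (toy sizes, illustrative names; which direction counts as the "permute" vs. the "unpermute" depends on which layout you start from):

```python
import numpy as np

# Hypothetical small sizes for illustration.
n_heads, dim = 2, 16
half = dim // n_heads // 2  # 4

def permute(w):
    # The torch expression above, written with numpy reshape/transpose:
    # view as (heads, 2, half, dim), swap the middle axes, flatten back.
    return w.reshape(n_heads, 2, half, dim).transpose(0, 2, 1, 3).reshape(dim, dim)

def unpermute(w):
    # Inverse: regroup as (heads, half, 2, dim) and swap back.
    return w.reshape(n_heads, half, 2, dim).transpose(0, 2, 1, 3).reshape(dim, dim)

w = np.arange(dim * dim).reshape(dim, dim)
# Round trip recovers the original weight layout.
assert (unpermute(permute(w)) == w).all()
```

Applying the wrong layout silently scrambles the rotary-embedding head halves, which is consistent with output that degrades as context grows.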

@ggerganov (Owner) commented:

Thank you for looking into this - it looks like very good progress so far.

I did notice one potential issue. It's clearly not the main cause of the gibberish, since it doesn't happen when using q4_1 weights quantized by llama.cpp itself, but it seems concerning. When doing a matrix multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when the multiplication is not done with BLAS, the intermediate results are stored in the smaller format rather than f32. This seems like an unnecessary waste of precision, especially in the q4_1 case.

We are wasting precision for performance. The multiplication is much faster when using the smaller format, mainly because we reduce the memory traffic. If my analysis is correct, the multiplication becomes memory bound at higher number of threads and therefore, we prefer to reduce the memory size of the data to gain performance.
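The bandwidth argument can be made concrete with a back-of-the-envelope calculation (the block layout below assumes q4_1 stores 32 4-bit quants plus one f32 scale and one f32 addend per block; treat those sizes as assumptions):

```python
# Bytes moved per weight when streaming the matrix operand:
f32_bpw = 4.0   # plain float32
f16_bpw = 2.0   # half precision
# q4_1 (assumed layout): 32 * 4 bits of quants + 4-byte scale
# + 4-byte addend = 16 + 4 + 4 = 24 bytes per 32 weights.
q4_1_bpw = (32 * 4 / 8 + 4 + 4) / 32  # 0.75 bytes/weight

# If the multiplication is memory bound, throughput scales roughly
# with the inverse of bytes moved per weight.
ratio = f32_bpw / q4_1_bpw  # ~5.3x less data streamed than f32
```

This is why keeping intermediates in the smaller format wins at high thread counts even though it costs precision.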

@gjmulder added the enhancement (New feature or request) and model (Model specific) labels on Mar 20, 2023
@BadisG commented Mar 20, 2023

I asked GPT-4 if it saw any errors in your code; here's its answer:

It's possible that there could be issues with the math operations in the code that cause the described output degradation. Here are some potential causes:

Precision loss: As you mentioned earlier, precision loss could be a significant factor in the degradation of the output.
When working with lower-precision data types, such as float16 or custom quantization like q4_1, small errors can accumulate during calculations, leading to incorrect results.

Type casting: The code uses type casting in multiple places, such as converting tensor data types or viewing tensor elements as different data types (e.g., viewing float32 elements as int32). 
These type conversions could introduce errors or result in unintended behavior, particularly if there is a mismatch between the expected and actual data types.

Matrix operations: The code involves several matrix operations, such as transposing, reshaping, and concatenating tensors. 
If there are any errors or inconsistencies in these operations, they could lead to the gibberish output you're experiencing.

Order of operations: The order in which mathematical operations are performed can impact the final result due to rounding and truncation errors. 
If the order of operations is different between the original GPTQ-for-LLaMa implementation and your C++ implementation, it could contribute to the output degradation.
In the convert_q4 function, you mentioned that the intermediate results are stored in smaller formats rather than f32 when performing matrix multiplications. 
You can try changing the data type to float32 for better precision. For example, modify the following lines in the convert_q4 function:

zeros = model[f"{src_name}.zeros"].numpy()
scales = model[f"{src_name}.scales"].numpy()
to:

zeros = model[f"{src_name}.zeros"].numpy().astype(np.float32)
scales = model[f"{src_name}.scales"].numpy().astype(np.float32)

Additionally, you can convert the qweight tensor to float32 before reshaping it in the convert_q4 function:

qweight = model[f"{src_name}.qweight"].numpy().T.astype(np.float32)

In the convert_non_q4 function, instead of conditionally converting the tensor v to float32 only when the tensor shape has a single dimension, you can always convert it to float32:

Replace:
if len(shape) == 1:
    print("  Converting to float32")
    v = v.to(torch.float32)

with:
print("  Converting to float32")
v = v.to(torch.float32)

These changes should help improve the precision of the output. However, keep in mind that these modifications may also increase the memory usage and computation time. 
It is important to test the modified code and observe the impact on both the output quality and the performance.

I'm sorry in advance if it didn't help at all 😅

@comex (Author) commented Mar 21, 2023

Dunno if it's the same thing, but when dealing with Hugging Face LLaMA models we had to unpermute the wq/wk attention layers:

w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)

That was exactly the issue! Thanks.

@comex comex marked this pull request as ready for review March 21, 2023 00:44
@comex comex changed the title [WIP, broken] Importer for GPTQ quantized LLaMA models Importer for GPTQ quantized LLaMA models Mar 21, 2023
@BadisG commented Mar 21, 2023

I got this error when trying to convert this alpaca 4bit file: https://huggingface.co/ozcur/alpaca-native-4bit/tree/main
(I renamed the alpaca7b-4bit.pt to llama7b-4bit.pt to match your command lines)

D:\Large Language Models\CONVERTISSEURS\gptq to ggml>python convert-gptq-to-ggml.py llama7b-4bit.pt tokenizer.model out.bin
Traceback (most recent call last):
  File "D:\Large Language Models\CONVERTISSEURS\gptq to ggml\convert-gptq-to-ggml.py", line 33, in <module>
    assert tokenizer.vocab_size() == n_vocab
AssertionError

I guess this converter won't work on models that aren't the raw LLaMA model, right?

@comex (Author) commented Mar 21, 2023

I guess this converter won't work on models that aren't the raw LLaMA model, right?

I haven't tested it with any other models but I'd like for it to work with Alpaca. I'll look into it if I have a chance.

@BadisG commented Mar 21, 2023

I guess this converter won't work on models that aren't the raw LLaMA model, right?

I haven't tested it with any other models but I'd like for it to work with Alpaca. I'll look into it if I have a chance.

Great! Looking forward to it!
I just tested your convert script with the regular LLaMA model; it works flawlessly.

I was really eager to try this new kind of quantizer with llama.cpp, and I'm glad someone finally did it! I really appreciate your efforts, and I won't be the only one, trust me! 😄

@ggerganov (Owner) left a review comment:


🦙

@qwopqwop200 commented Mar 22, 2023

I changed GPTQ to support grouping, so the current code may not work with it.
Grouping can significantly reduce the performance loss of quantization at the cost of a little extra memory.
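As a rough illustration of what grouping means (a generic min/max sketch, not GPTQ's actual algorithm, which additionally uses second-order error correction): each group of `group_size` weights in a row gets its own scale and zero point, instead of one pair per whole row.

```python
import numpy as np

def quantize_grouped(row, group_size=128, bits=4):
    """Naive min/max quantization with one scale/zero per group.

    A generic sketch of grouped quantization, not GPTQ itself.
    """
    levels = (1 << bits) - 1
    groups = row.reshape(-1, group_size)
    lo = groups.min(axis=1, keepdims=True)
    hi = groups.max(axis=1, keepdims=True)
    scale = np.where(hi > lo, (hi - lo) / levels, 1.0)
    q = np.clip(np.round((groups - lo) / scale), 0, levels)
    return q.astype(np.uint8), scale, lo

def dequantize_grouped(q, scale, lo, row_len):
    return (q * scale + lo).reshape(row_len)

rng = np.random.default_rng(0)
row = rng.normal(size=1024).astype(np.float32)
q, s, z = quantize_grouped(row, group_size=128)
rec = dequantize_grouped(q, s, z, row.size)
# Smaller groups track local value ranges, so per-element error shrinks;
# rounding error is bounded by half a quantization step per group.
err = np.abs(rec - row).max()
```

The trade-off the comment describes falls out directly: each extra group costs one more scale/zero pair of storage, but tighter per-group ranges mean smaller quantization steps.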

6 participants