iq1_s: use IQ2_XXS for attn_output
At a cost of 0.04 extra bpw, this gives a big improvement in PPL.
Kawrakow committed Feb 12, 2024
1 parent 92e1d21 commit 584b369
Showing 1 changed file (llama.cpp) with 3 additions and 0 deletions.
@@ -10112,6 +10112,9 @@ static ggml_type get_k_quant_type(quantize_state_internal & qs, ggml_type new_ty
             if (qs.i_ffn_down < qs.n_ffn_down/8) new_type = GGML_TYPE_Q2_K;
             ++qs.i_ffn_down;
         }
+        else if (name.find("attn_output.weight") != std::string::npos) {
+            if (ftype == LLAMA_FTYPE_MOSTLY_IQ1_S) new_type = GGML_TYPE_IQ2_XXS;
+        }
     } else if (name.find("attn_v.weight") != std::string::npos) {
         if (ftype == LLAMA_FTYPE_MOSTLY_Q2_K) {
             new_type = qs.model.hparams.n_gqa() >= 4 ? GGML_TYPE_Q4_K : GGML_TYPE_Q3_K;
