
Enable finetuning with torchao quantized model (#33361)
enable training
SunMarc authored Sep 13, 2024
1 parent 6cc4dfe commit 0963229
Showing 1 changed file with 5 additions and 4 deletions.
9 changes: 5 additions & 4 deletions src/transformers/quantizers/quantizer_torchao.py
@@ -166,7 +166,8 @@ def is_serializable(self):
 
     @property
     def is_trainable(self):
-        # torchao does not have official support for QAT (Quantization Aware Training)
-        # but torchao support nf4/PEFT, but it is not integrated yet
-        # TODO: if this is supported in the future, do a version check here.
-        return False
+        supported_quant_types_for_training = [
+            "int8_weight_only",
+            "int8_dynamic_activation_int8_weight",
+        ]
+        return self.quantization_config.quant_type in supported_quant_types_for_training
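For context, the sketch below shows the user-facing effect of this change: loading a model with one of the now-trainable torchao quant types and confirming the quantizer reports it as trainable. It is illustrative, not part of the commit; the checkpoint name and dtype are assumptions, and it relies on the TorchAoConfig and hf_quantizer APIs available in transformers at the time of this commit.

# Minimal, illustrative sketch (not part of this commit). The checkpoint name
# and dtype are assumptions chosen for the example.
import torch
from transformers import AutoModelForCausalLM, TorchAoConfig

# "int8_weight_only" is one of the quant types allowed by the new check.
quant_config = TorchAoConfig("int8_weight_only")

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",              # hypothetical example checkpoint
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The quantizer attached during from_pretrained now reports the model as
# trainable, so Trainer/PEFT fine-tuning flows no longer reject it up front.
assert model.hf_quantizer.is_trainable

In practice this is typically paired with PEFT adapters such as LoRA trained on top of the quantized base model, rather than updating the quantized weights directly.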
