Description
System Info
- transformers version: 4.49.0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.13.2
- Huggingface_hub version: 0.29.2
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: 0.16.8
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: Tesla V100-PCIE-32GB
Who can help?
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
To reproduce the problem:
- Pass a DeepSpeed config file to TrainingArguments
- Pass a custom optimizer to the Trainer
Example:
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
    get_constant_schedule,
)

# Custom optimizer that logs every call to step(), so we can see whether it is actually used.
class CustomOptim(torch.optim.Adam):
    def step(self, *args, **kwargs):
        print("stepped!")
        return super().step(*args, **kwargs)

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("Salesforce/wikitext", "wikitext-103-raw-v1")
train_ds = dataset["train"]
eval_ds = dataset["validation"]

def tokenize(examples):
    return tokenizer(
        examples["text"], return_special_tokens_mask=True, truncation=True
    )

train_ds = train_ds.map(tokenize, batched=True, remove_columns=["text"])
eval_ds = eval_ds.map(tokenize, batched=True, remove_columns=["text"])

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="./opt-125m-finetuned",
    do_eval=False,
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    max_steps=1,
    push_to_hub=False,
    deepspeed="/data/vladislav/ds_config.json",
)

optimizer = CustomOptim(model.parameters())
scheduler = get_constant_schedule(optimizer)

trainer = Trainer(
    model=model,
    args=training_args,
    optimizers=(optimizer, scheduler),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=data_collator,
)

trainer.train()
DeepSpeed config:
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "gradient_clipping": 1.0,
  "zero_optimization": {
    "stage": 0,
    "contiguous_gradients": true,
    "overlap_comm": true,
    "reduce_bucket_size": 50000000,
    "allgather_bucket_size": 500000000
  }
}
In this example, CustomOptim is silently overwritten and the "stepped!" print never appears.
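For completeness, inspecting the optimizer the Trainer ends up holding after trainer.train() also shows the swap. The exact class DeepSpeed installs depends on the config, so the snippet below only checks that it is not the custom class; the "expected: False" comment is based on the missing print above, not on reading the integration code:

# Sanity check after trainer.train(): the optimizer stored on the Trainer
# is whatever DeepSpeed/Trainer created, not the CustomOptim instance passed in.
print(type(trainer.optimizer))
print(isinstance(trainer.optimizer, CustomOptim))  # expected: False, matching the missing print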
Expected behavior
Hi HuggingFace team,
While working with transformers.Trainer to train an LLM, I encountered unexpected behavior when combining a custom optimizer with a DeepSpeed config in TrainingArguments.
It seems that when a custom optimizer is passed to the Trainer while a DeepSpeed config is also provided, the custom optimizer is silently ignored and overwritten by DeepSpeed. This was quite surprising, and as far as I can tell, this behavior is not mentioned in the documentation for either Trainer or TrainingArguments.
To avoid confusion and help users catch this early, it would be helpful if an explicit warning or exception were raised in such scenarios.
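For illustration, something along these lines would already help; this is only a minimal sketch, and the helper name and the exact place it would live inside Trainer are my assumptions, not the current code:

import warnings

def warn_if_custom_optimizer_is_ignored(args, optimizers):
    """Hypothetical guard: warn when a user-supplied optimizer will be
    discarded because a DeepSpeed config is set in TrainingArguments."""
    optimizer, _ = optimizers
    if optimizer is not None and args.deepspeed is not None:
        warnings.warn(
            "A custom optimizer was passed to Trainer, but TrainingArguments "
            "also specifies a DeepSpeed config; DeepSpeed will create its own "
            "optimizer and the one passed via `optimizers` will be ignored.",
            UserWarning,
        )

Raising an exception instead of a warning would also be fine with me, as long as the conflict is surfaced rather than silently resolved.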
Thanks for the great work on the library!