
Very High Loss (~15) and Instability with Previously-working Config From A While Ago #2224

Open
@e-p-armstrong

Description

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

Training should proceed roughly as it did many versions ago, without a catastrophic loss graph.

Current behaviour

Wanting to track down whether poor recent SFT performance is my fault or the result of underlying software changes, I recently tried to reuse a very old config for finetuning a continued-pretrain Mistral 7b.

The original loss was normal back in the day, and looked like this:
[screenshot: original (normal) loss curve]

This original loss was recorded on 2024-09-25.

However, using that config (with slight changes to make it not error; both versions are provided below) on the latest docker image:
[screenshot: loss curve on the latest docker image]

The loss starts at 15.3 and is spiky as all hell.

Notes:

  • 8x A40 Runpod instance, both times.
  • Command used to run: accelerate launch --use_deepspeed -m axolotl.cli.train ./pathtoyamlfile.yaml
  • Some changes had to be made to the original config so that it does not error on the newest axolotl version. Specifically, deepspeed had to be changed from zero2 to zero1 due to #2191, and the datasets had to be changed from type: sharegpt to being manually specified (see the sketch after this list).
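
For reference, this is roughly what the dataset migration looked like. The field names (field_messages, message_field_role, message_field_content) are my best understanding of the chat_template loader options, and the conversations/from/value keys assume a standard sharegpt-style JSONL, so treat this as a sketch rather than the exact stanza I used:

# before (old axolotl) -- deprecated sharegpt loader
datasets:
  - path: json
    data_files: redacted_sharegpt_chatml_dataset.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml

# after (new axolotl) -- message fields specified manually via chat_template
# (field names below are an assumption, not copied verbatim from my run)
datasets:
  - path: json
    data_files: redacted_sharegpt_chatml_dataset.jsonl
    ds_type: json
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value

chat_template: chatml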

Steps to reproduce

  1. Rent an 8x A40 instance on Runpod.
  2. Train Mistral 7b base with the given deepspeed config and hyperparameters on a sharegpt dataset.
  3. Observe the spiky loss.

Config yaml

I had to redact some details due to NDAs, but here are BOTH the original and the new config.yamls. The original one, which produced the normal loss graph, is first; the broken one for the new axolotl version is second. Note that runs with both is_mistral_derived_model=true and =false were tried; both failed with the same high loss values.

Config 1 (original, but reliant on deprecated settings):

base_model: Heralax/redacted-custom-7b-mistral-base
tokenizer_type: AutoTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: json
    data_files: redacted_sharegpt_chatml_dataset.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml
  # ...quite a few more datasets were used here but are redacted in this shared version

dataset_prepared_path: 1b_run_prepared
output_dir: ./1b_out_pretrained_base

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
shuffle_merged_datasets: true

wandb_project: 
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 6
micro_batch_size: 2
eval_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000020
weight_decay: 0
# Gradient clipping max norm
max_grad_norm: 1.0
noisy_embedding_alpha: 5
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint: 
logging_steps: 1
xformers_attention:
flash_attention: true

chat_template: chatml

warmup_ratio: 0.5
auto_resume_from_checkpoints: false
#warmup_ratio: 0.5
eval_steps: 10
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
debug:
deepspeed: deepspeed_configs/zero2.json
special_tokens:
  pad_token: "<|end_of_text|>"

Config 2 (NEW/BROKEN):

base_model: Heralax/redacted-custom-7b-mistral-base
tokenizer_type: AutoTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: json
    data_files: redacted_sharegpt_chatml_dataset.jsonl
    ds_type: json
    # changed: `type: sharegpt` is no longer supported, so the message fields
    # are specified manually here (field names per the sketch in the notes above)
    type: chat_template
    field_messages: conversations
    message_field_role: from
    message_field_content: value
  # ...quite a few more datasets were used here but are redacted in this shared version

dataset_prepared_path: 1b_run_prepared
output_dir: ./1b_out_pretrained_base

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
shuffle_merged_datasets: true

wandb_project: 
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 6
micro_batch_size: 2
eval_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000020
weight_decay: 0
# Gradient clipping max norm
max_grad_norm: 1.0
noisy_embedding_alpha: 5
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint: 
logging_steps: 1
xformers_attention:
flash_attention: true

chat_template: chatml

warmup_ratio: 0.5
auto_resume_from_checkpoints: false
#warmup_ratio: 0.5
eval_steps: 10
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
debug:
deepspeed: deepspeed_configs/zero1.json  # changed from zero2.json due to #2191
special_tokens:
  pad_token: "<|end_of_text|>"

The areas that were changed are indicated with comments in the second config.



### Possible solution

Since only two things were changed from the old config to the new one, if this is user error it is either in the datasets area or something deepspeed-related. If it is axolotl's fault (or, more likely, something upstream), then the regression happened between 2024-09-25 and now.
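
To help narrow this down, here is a rough bisection sketch I have in mind, assuming axolotl can be installed from source at a commit near the last known-good date. The repo URL and install extras are what I believe are current, so adjust as needed:

# check out axolotl at a commit from around the last known-good run (2024-09-25)
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl
git checkout "$(git rev-list -n 1 --before='2024-09-26' main)"

# install from source (extras may differ; flash-attn + deepspeed were used in my runs)
pip install -e '.[flash-attn,deepspeed]'

# re-run the ORIGINAL config unchanged (zero2 + type: sharegpt still worked back then)
accelerate launch --use_deepspeed -m axolotl.cli.train ./pathtoyamlfile.yaml

If the old commit still trains normally, the next step would be bisecting commits between then and now with the same config.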

### Which Operating Systems are you using?

- [X] Linux
- [ ] macOS
- [ ] Windows

### Python Version

3.11.10

### axolotl branch-commit

latest docker

### Acknowledgements

- [X] My issue title is concise, descriptive, and in title casing.
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of axolotl.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue.
