Linting and Formatting for FMS-Acceleration-Peft package #23

Merged
fabianlim merged 3 commits into foundation-model-stack:dev from linting_and_formatting on May 28, 2024

Conversation

achew010 (Collaborator) commented on May 28, 2024

Description

This PR addresses #9 with the following additions:

  • Completed linting and formatting of fms-acceleration-plugin with a 10.00/10 rating
  • Updated the fms-acceleration-plugin tox.ini to support linting via tox (see the sketch after this list)
  • Reactivated the Linting and Formatting workflow in GitHub Actions
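
As a rough illustration, the tox-based lint run might look like the sketch below. The environment names (lint, fmt) and the plugin directory path are assumptions for illustration and may not match the exact tox.ini changes in this PR.

# Hypothetical invocation; env names and the path below are assumed, not taken from the PR diff.
cd plugins/accelerated-peft   # assumed location of the fms-acceleration-peft plugin
tox -e lint                   # run the linter (e.g. pylint) over the package sources
tox -e fmt                    # apply formatting (e.g. black/isort) to the package sources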

Tests

Tested with sample experiments on accelerated-peft-bnb and accelerated-peft-autogptq to check for breakages. The launch commands used are shown below.

accelerated-peft-bnb

export CUDA_VISIBLE_DEVICES=0,1
accelerate launch \
 --config_file scripts/benchmarks/accelerate.yaml \
 --num_processes=2 \
 --main_process_port=29500 -m tuning.sft_trainer \
 --model_name_or_path mistralai/Mistral-7B-v0.1 \
 --acceleration_framework_config_file sample-configurations/accelerated-peft-bnb-nf4-sample-configuration.yaml \
 --packing True \
 --max_seq_len 4096 \
 --fp16 True \
 --learning_rate 2e-4 \
 --torch_dtype float16 \
 --peft_method lora \
 --r 16 \
 --lora_alpha 16 \
 --lora_dropout 0.0 \
 --target_modules q_proj k_proj v_proj o_proj \
 --use_flash_attn True \
 --response_template '\n### Response:' \
 --dataset_text_field 'output' \
 --include_tokens_per_second True \
 --num_train_epochs 1 \
 --gradient_accumulation_steps 1 \
 --gradient_checkpointing True \
 --evaluation_strategy no \
 --save_strategy no \
 --weight_decay 0.01 \
 --warmup_steps 10 \
 --adam_epsilon 1e-4 \
 --lr_scheduler_type linear \
 --logging_strategy steps \
 --logging_steps 10 \
 --max_steps 30 \
 --training_data_path benchmark_outputs/data/cache.json \
 --per_device_train_batch_size 4 \
 --output_dir benchmark_outputs/exp_39/hf \
 --skip_memory_metrics False

accelerated-peft-autogptq

export CUDA_VISIBLE_DEVICES=0,1
accelerate launch \
 --config_file scripts/benchmarks/accelerate.yaml \
 --num_processes=2 \
 --main_process_port=29500 -m tuning.sft_trainer \
 --model_name_or_path TheBloke/Mistral-7B-v0.1-GPTQ \
 --acceleration_framework_config_file sample-configurations/accelerated-peft-autogptq-sample-configuration.yaml \
 --packing True \
 --max_seq_len 4096 \
 --learning_rate 2e-4 \
 --fp16 True \
 --torch_dtype float16 \
 --peft_method lora \
 --r 16 \
 --lora_alpha 16 \
 --lora_dropout 0.0 \
 --target_modules q_proj k_proj v_proj o_proj \
 --use_flash_attn True \
 --response_template '\n### Response:' \
 --dataset_text_field 'output' \
 --include_tokens_per_second True \
 --num_train_epochs 1 \
 --gradient_accumulation_steps 1 \
 --gradient_checkpointing True \
 --evaluation_strategy no \
 --save_strategy no \
 --weight_decay 0.01 \
 --warmup_steps 10 \
 --adam_epsilon 1e-4 \
 --lr_scheduler_type linear \
 --logging_strategy steps \
 --logging_steps 10 \
 --max_steps 30 \
 --training_data_path benchmark_outputs/data/cache.json \
 --per_device_train_batch_size 4 \
 --output_dir benchmark_outputs/exp_51/hf \
 --skip_memory_metrics False

Pytest Results on FMS-HF-Tuning

tests/acceleration/test_acceleration_framework.py::test_framework_intialized_properly
tests/acceleration/test_acceleration_framework.py::test_framework_intialized_properly
tests/acceleration/test_acceleration_framework.py::test_framework_intialized_properly
  /workspace/.local/lib/python3.10/site-packages/peft/utils/save_and_load.py:168: UserWarning: Setting `save_embedding_layers` to `True` as the embedding layer has been resized during finetuning.
    warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================== 4 passed, 9 warnings in 19.44s ================================
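
For reference, a minimal sketch of how this pytest run might be reproduced from an fms-hf-tuning checkout; the install step and the plugin path are assumptions and are not taken from this PR.

# Hypothetical reproduction; assumes the linted accelerated-peft plugin is installed
# into the same environment as fms-hf-tuning (the plugin path is an assumption).
pip install -e plugins/accelerated-peft
pytest tests/acceleration/test_acceleration_framework.py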

achew010 requested a review from fabianlim as a code owner on May 28, 2024 02:43
fabianlim merged commit fd39348 into foundation-model-stack:dev on May 28, 2024
3 checks passed
fabianlim (Contributor) commented:

Made a mistake in the merge; it was not squashed. This PR needs to be redone.

achew010 deleted the linting_and_formatting branch on May 29, 2024 02:14