Added yaml files for transformers 4.38.2 and updated performance test accordingly #11730

Draft · wants to merge 12 commits into base: main
32 changes: 32 additions & 0 deletions .github/workflows/llm_performance_tests.yml
@@ -213,6 +213,38 @@ jobs:
sed -i 's/batch2/batch4/g' run.py
python run.py
mv *.csv test_batch4

- name: Test on xpu(transformers==4.38.2)
shell: bash
run: |
source /opt/intel/oneapi/setvars.sh
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
          # upgrade transformers for the stablelm/stablelm-zephyr-3b and Gemma/gemma-7b-it models
python -m pip install transformers==4.38.2
# batch_size 1
cp python/llm/test/benchmark/arc-perf-transformers-438.yaml python/llm/dev/benchmark/all-in-one/config.yaml
cd python/llm/dev/benchmark/all-in-one
# change csv name
sed -i 's/test1_batch4/test2_batch1/g' run.py
python run.py
mv *.csv test_batch1
# batch_size 2
cd ../../../../../
cp python/llm/test/benchmark/arc-perf-transformers-438-batch2.yaml python/llm/dev/benchmark/all-in-one/config.yaml
cd python/llm/dev/benchmark/all-in-one
# change csv name
sed -i 's/batch1/batch2/g' run.py
python run.py
mv *.csv test_batch2
# batch_size 4
cd ../../../../../
cp python/llm/test/benchmark/arc-perf-transformers-438-batch4.yaml python/llm/dev/benchmark/all-in-one/config.yaml
cd python/llm/dev/benchmark/all-in-one
# change csv name
sed -i 's/batch2/batch4/g' run.py
python run.py
mv *.csv test_batch4
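Each batch run above differs only in the yaml copied to `config.yaml` and the csv-name token rewritten in `run.py` via `sed -i 's/batch2/batch4/g'`. A hypothetical Python equivalent of that in-place token substitution (the helper is illustrative, not part of the repo; it assumes `run.py` embeds the csv name as a plain string):

```python
# Stand-in for the workflow's `sed -i 's/old/new/g' run.py` step:
# rewrite every occurrence of a token in a file in place.
from pathlib import Path

def replace_in_file(path: str, old: str, new: str) -> int:
    """Replace all occurrences of `old` with `new` in `path`; return the count replaced."""
    p = Path(path)
    text = p.read_text()
    count = text.count(old)
    p.write_text(text.replace(old, new))
    return count
```

Like `sed -i`, the file is rewritten unconditionally; a zero return value signals the token was absent, which in this workflow would mean a previous rename step did not run.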

- name: Test on xpu(transformers==4.40.0)
shell: bash
18 changes: 18 additions & 0 deletions python/llm/test/benchmark/arc-perf-transformers-438-batch2.yaml
@@ -0,0 +1,18 @@
# For the models that require transformers 4.38.2
repo_id:
- 'stablelm/stablelm-zephyr-3b'
- 'Gemma/gemma-7b-it'
local_model_hub: '/mnt/disk1/models'
warm_up: 1
num_trials: 3
num_beams: 1 # default to greedy search
low_bit: 'sym_int4' # default to use 'sym_int4' (i.e. symmetric int4)
batch_size: 2 # default to 1
in_out_pairs:
- '32-32'
- '1024-128'
- '2048-256'
test_api:
- "transformer_int4_fp16_gpu" # on Intel GPU
cpu_embedding: False # whether to put embedding on CPU (currently only available for Windows GPU test_api)
task: 'continuation' # task can be 'continuation', 'QA' and 'summarize'
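Each `in_out_pairs` entry such as `'1024-128'` appears to encode a prompt length and a generation length. A hedged sketch of parsing that convention (the "input-tokens/output-tokens" reading is inferred from the field name, not confirmed by the harness code):

```python
# Split an in_out_pair entry like '1024-128' into (input_tokens, output_tokens).
# Hypothetical helper for illustration; run.py's real parsing may differ.
def parse_in_out_pair(pair: str) -> tuple[int, int]:
    in_len, out_len = pair.split("-")
    return int(in_len), int(out_len)

pairs = [parse_in_out_pair(p) for p in ["32-32", "1024-128", "2048-256"]]
# pairs == [(32, 32), (1024, 128), (2048, 256)]
```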
20 changes: 20 additions & 0 deletions python/llm/test/benchmark/arc-perf-transformers-438-batch4.yaml
@@ -0,0 +1,20 @@
# For the models that require transformers 4.38.2
repo_id:
- 'stablelm/stablelm-zephyr-3b'
- 'Gemma/gemma-7b-it'
local_model_hub: '/mnt/disk1/models'
warm_up: 1
num_trials: 3
num_beams: 1 # default to greedy search
low_bit: 'sym_int4' # default to use 'sym_int4' (i.e. symmetric int4)
batch_size: 4 # default to 1
in_out_pairs:
- '32-32'
- '1024-128'
- '2048-256'
test_api:
- "transformer_int4_fp16_gpu" # on Intel GPU
cpu_embedding: False # whether to put embedding on CPU (currently only available for Windows GPU test_api)
exclude:
- 'Gemma/gemma-7b-it:2048'
task: 'continuation' # task can be 'continuation', 'QA' and 'summarize'
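The `exclude` entry `'Gemma/gemma-7b-it:2048'` appears to follow a `model_id:input_len` convention, skipping that model's 2048-token input case at batch_size 4. A hypothetical matcher for that convention (illustrative only; the harness's real exclusion logic may differ):

```python
# Check whether a (model, input length) combination is excluded,
# assuming exclude entries use the "model_id:input_len" format.
def is_excluded(model_id: str, in_len: int, excludes: list[str]) -> bool:
    return f"{model_id}:{in_len}" in excludes

excludes = ["Gemma/gemma-7b-it:2048"]
assert is_excluded("Gemma/gemma-7b-it", 2048, excludes)
assert not is_excluded("stablelm/stablelm-zephyr-3b", 2048, excludes)
```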
18 changes: 18 additions & 0 deletions python/llm/test/benchmark/arc-perf-transformers-438.yaml
@@ -0,0 +1,18 @@
# For the models that require transformers 4.38.2
repo_id:
- 'stablelm/stablelm-zephyr-3b'
- 'Gemma/gemma-7b-it'
local_model_hub: '/mnt/disk1/models'
warm_up: 1
num_trials: 3
num_beams: 1 # default to greedy search
low_bit: 'sym_int4' # default to use 'sym_int4' (i.e. symmetric int4)
batch_size: 1 # default to 1
in_out_pairs:
- '32-32'
- '1024-128'
- '2048-256'
test_api:
- "transformer_int4_fp16_gpu" # on Intel GPU
cpu_embedding: False # whether to put embedding on CPU (currently only available for Windows GPU test_api)
task: 'continuation' # task can be 'continuation', 'QA' and 'summarize'