[ CI/Build ] LM Eval Harness Based CI Testing #5838
Merged
robertgshaw2-neuralmagic merged 20 commits into vllm-project:main from neuralmagic:lm-eval-harness-testing on Jun 29, 2024
Changes from 19 commits

Commits (20)
292503c  Added lm-eval
f763f4d  removed run-gsm.sh
1c44e65  Update run-lm-eval-vllm.sh
b0b1b20  format
0002918  fix llama3 config
90195d0  added examples
bd00b0f  make it work
771db58  fix lm-eval launch
feeaf38  fix workflow for automation
cdcbfa2  stash
e236951  stash
095e4a5  format
5e1190c  updated files
5116707  fix pipeline
cce7e36  format
230626a  finalizing
3e4ca9b  format + tweaked configs
7c785b4  Merge branch 'upstream-main' into lm-eval-harness-testing
724b805  Merge branch 'main' into lm-eval-harness-testing
90212dc  Update test-pipeline.yaml
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Meta-Llama-3-70B-Instruct.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m meta-llama/Meta-Llama-3-70B-Instruct -b 32 -l 250 -f 5
model_name: "meta-llama/Meta-Llama-3-70B-Instruct"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.892
  - name: "exact_match,flexible-extract"
    value: 0.892
limit: 250
num_fewshot: 5
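Each of these config files pins the GSM8k scores the CI run is expected to reproduce; the commented command on the first line records how the baseline values were computed. As a rough sketch of exercising this config directly, mirroring the env-var pathway described in the test docstring later in this PR (and assuming the commands are run from the .buildkite/lm-eval-harness directory):

# Point the correctness test at this config and run it with TP=4.
export LM_EVAL_TEST_DATA_FILE=configs/Meta-Llama-3-70B-Instruct.yaml
export LM_EVAL_TP_SIZE=4
pytest -s test_lm_eval_correctness.py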
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct-FP8.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m neuralmagic/Meta-Llama-3-8B-Instruct-FP8 -b 32 -l 250 -f 5 -t 1
model_name: "neuralmagic/Meta-Llama-3-8B-Instruct-FP8"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.756
  - name: "exact_match,flexible-extract"
    value: 0.752
limit: 250
num_fewshot: 5
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m meta-llama/Meta-Llama-3-8B-Instruct -b 32 -l 250 -f 5 -t 1
model_name: "meta-llama/Meta-Llama-3-8B-Instruct"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.756
  - name: "exact_match,flexible-extract"
    value: 0.752
limit: 250
num_fewshot: 5
11 changes: 11 additions & 0 deletions
.buildkite/lm-eval-harness/configs/Mixtral-8x7B-Instruct-v0.1.yaml
@@ -0,0 +1,11 @@
# bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh -m neuralmagic/Mixtral-8x7B-Instruct-v0.1 -b 32 -l 250 -f 5 -t 4
model_name: "mistralai/Mixtral-8x7B-Instruct-v0.1"
tasks:
- name: "gsm8k"
  metrics:
  - name: "exact_match,strict-match"
    value: 0.616
  - name: "exact_match,flexible-extract"
    value: 0.632
limit: 250
num_fewshot: 5
@@ -0,0 +1,2 @@
Meta-Llama-3-70B-Instruct.yaml
Mixtral-8x7B-Instruct-v0.1.yaml
@@ -0,0 +1,2 @@
Meta-Llama-3-8B-Instruct.yaml
Meta-Llama-3-8B-Instruct-FP8.yaml
.buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh
@@ -0,0 +1,46 @@
#!/bin/bash
# We can use this script to compute baseline accuracy on GSM for transformers.
#
# Make sure you have lm-eval-harness installed:
# pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git@9516087b81a61d0e220b22cc1b75be76de23bc10

usage() {
    echo``
    echo "Runs lm eval harness on GSM8k using huggingface transformers."
    echo "This pathway is intended to be used to create baselines for "
    echo "our automated nm-test-accuracy workflow"
    echo
    echo "usage: ${0} <options>"
    echo
    echo "  -m    - huggingface stub or local directory of the model"
    echo "  -b    - batch size to run the evaluation at"
    echo "  -l    - limit number of samples to run"
    echo "  -f    - number of fewshot samples to use"
    echo
}

while getopts "m:b:l:f:" OPT; do
  case ${OPT} in
    m )
        MODEL="$OPTARG"
        ;;
    b )
        BATCH_SIZE="$OPTARG"
        ;;
    l )
        LIMIT="$OPTARG"
        ;;
    f )
        FEWSHOT="$OPTARG"
        ;;
    \? )
        usage
        exit 1
        ;;
  esac
done

lm_eval --model hf \
  --model_args pretrained=$MODEL,parallelize=True \
  --tasks gsm8k --num_fewshot $FEWSHOT --limit $LIMIT \
  --batch_size $BATCH_SIZE
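For reference, a baseline for one of the configs above can be regenerated with this script. A sketch, mirroring the command recorded in the Meta-Llama-3-70B-Instruct config comment (and assuming the pinned lm-eval-harness from the script header is installed):

# HF transformers GSM8k baseline: 5-shot, 250 samples, batch size 32.
bash .buildkite/lm-eval-harness/run-lm-eval-gsm-hf-baseline.sh \
    -m meta-llama/Meta-Llama-3-70B-Instruct -b 32 -l 250 -f 5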
51 changes: 51 additions & 0 deletions
.buildkite/lm-eval-harness/run-lm-eval-gsm-vllm-baseline.sh
@@ -0,0 +1,51 @@
#!/bin/bash
# We can use this script to compute baseline accuracy on GSM for vllm.
# We use this for fp8, which HF does not support.
#
# Make sure you have lm-eval-harness installed:
# pip install lm-eval==0.4.2

usage() {
    echo``
    echo "Runs lm eval harness on GSM8k using huggingface transformers."
    echo "This pathway is intended to be used to create baselines for "
    echo "our automated nm-test-accuracy workflow"
    echo
    echo "usage: ${0} <options>"
    echo
    echo "  -m    - huggingface stub or local directory of the model"
    echo "  -b    - batch size to run the evaluation at"
    echo "  -l    - limit number of samples to run"
    echo "  -f    - number of fewshot samples to use"
    echo "  -t    - tensor parallel size to run at"
    echo
}

while getopts "m:b:l:f:t:" OPT; do
  case ${OPT} in
    m )
        MODEL="$OPTARG"
        ;;
    b )
        BATCH_SIZE="$OPTARG"
        ;;
    l )
        LIMIT="$OPTARG"
        ;;
    f )
        FEWSHOT="$OPTARG"
        ;;
    t )
        TP_SIZE="$OPTARG"
        ;;
    \? )
        usage
        exit 1
        ;;
  esac
done

lm_eval --model vllm \
  --model_args pretrained=$MODEL,tensor_parallel_size=$TP_SIZE \
  --tasks gsm8k --num_fewshot $FEWSHOT --limit $LIMIT \
  --batch_size $BATCH_SIZE
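Because HF transformers cannot run the FP8 checkpoint, this vLLM-based script is the pathway for producing its baseline. An illustrative invocation, with the flags taken from the FP8 config comment (single-GPU, so -t 1):

# vLLM GSM8k baseline for the FP8 model: 5-shot, 250 samples, TP size 1.
bash .buildkite/lm-eval-harness/run-lm-eval-gsm-vllm-baseline.sh \
    -m neuralmagic/Meta-Llama-3-8B-Instruct-FP8 -b 32 -l 250 -f 5 -t 1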
.buildkite/lm-eval-harness/run-lm-eval-vllm.sh
@@ -0,0 +1,59 @@
#!/bin/bash

usage() {
    echo``
    echo "Runs lm eval harness on GSM8k using vllm and compares to "
    echo "precomputed baseline (measured by HF transformers.)"
    echo
    echo "usage: ${0} <options>"
    echo
    echo "  -c    - path to the test data config (e.g. configs/small-models.txt)"
    echo "  -t    - tensor parallel size"
    echo
}

SUCCESS=0

while getopts "c:t:" OPT; do
  case ${OPT} in
    c )
        CONFIG="$OPTARG"
        ;;
    t )
        TP_SIZE="$OPTARG"
        ;;
    \? )
        usage
        exit 1
        ;;
  esac
done

# Parse list of configs.
IFS=$'\n' read -d '' -r -a MODEL_CONFIGS < $CONFIG

for MODEL_CONFIG in "${MODEL_CONFIGS[@]}"
do
    LOCAL_SUCCESS=0

    echo "=== RUNNING MODEL: $MODEL_CONFIG WITH TP SIZE: $TP_SIZE==="

    export LM_EVAL_TEST_DATA_FILE=$PWD/configs/${MODEL_CONFIG}
    export LM_EVAL_TP_SIZE=$TP_SIZE
    pytest -s test_lm_eval_correctness.py || LOCAL_SUCCESS=$?

    if [[ $LOCAL_SUCCESS == 0 ]]; then
        echo "=== PASSED MODEL: ${MODEL_CONFIG} ==="
    else
        echo "=== FAILED MODEL: ${MODEL_CONFIG} ==="
    fi

    SUCCESS=$((SUCCESS + LOCAL_SUCCESS))

done

if [ "${SUCCESS}" -eq "0" ]; then
    exit 0
else
    exit 1
fi
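The runner reads one config name per line from the -c file and invokes pytest once per model. A sketch of a local run (configs/small-models.txt is the example path from the usage text above, and the script is assumed to be launched from .buildkite/lm-eval-harness because it resolves configs relative to $PWD):

# Evaluate every config listed in the small-models file at TP=1.
cd .buildkite/lm-eval-harness
bash run-lm-eval-vllm.sh -c configs/small-models.txt -t 1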
.buildkite/lm-eval-harness/test_lm_eval_correctness.py
@@ -0,0 +1,54 @@
""" | ||
LM eval harness on model to compare vs HF baseline computed offline. | ||
Configs are found in configs/$MODEL.yaml | ||
* export LM_EVAL_TEST_DATA_FILE=configs/Meta-Llama-3-70B-Instruct.yaml | ||
* export LM_EVAL_TP_SIZE=4 | ||
* pytest -s test_lm_eval_correctness.py | ||
""" | ||
|
||
import os | ||
from pathlib import Path | ||
|
||
import lm_eval | ||
import numpy | ||
import yaml | ||
|
||
RTOL = 0.02 | ||
TEST_DATA_FILE = os.environ.get( | ||
"LM_EVAL_TEST_DATA_FILE", | ||
".buildkite/lm-eval-harness/configs/Meta-Llama-3-8B-Instruct.yaml") | ||
|
||
TP_SIZE = os.environ.get("LM_EVAL_TP_SIZE", 1) | ||
|
||
|
||
def launch_lm_eval(eval_config): | ||
model_args = f"pretrained={eval_config['model_name']}," \ | ||
f"tensor_parallel_size={TP_SIZE}" | ||
|
||
results = lm_eval.simple_evaluate( | ||
model="vllm", | ||
model_args=model_args, | ||
tasks=[task["name"] for task in eval_config["tasks"]], | ||
num_fewshot=eval_config["num_fewshot"], | ||
limit=eval_config["limit"], | ||
batch_size="auto") | ||
|
||
return results | ||
|
||
|
||
def test_lm_eval_correctness(): | ||
eval_config = yaml.safe_load( | ||
Path(TEST_DATA_FILE).read_text(encoding="utf-8")) | ||
|
||
# Launch eval requests. | ||
results = launch_lm_eval(eval_config) | ||
|
||
# Confirm scores match ground truth. | ||
for task in eval_config["tasks"]: | ||
for metric in task["metrics"]: | ||
ground_truth = metric["value"] | ||
measured_value = results["results"][task["name"]][metric["name"]] | ||
print(f'{task["name"]} | {metric["name"]}: ' | ||
f'ground_truth={ground_truth} | measured={measured_value}') | ||
assert numpy.isclose(ground_truth, measured_value, rtol=RTOL) |
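A note on the tolerance: with RTOL = 0.02, numpy.isclose accepts a result when |ground_truth - measured| <= atol + 0.02 * |measured| (the default atol of 1e-8 is negligible here). Against the 0.756 strict-match baseline, for example, a measured score of 0.750 passes (difference 0.006 versus an allowance of 0.015), while 0.740 fails (difference 0.016 versus an allowance of roughly 0.0148).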
I'm not super familiar, but is this using LLM or the completion server?

LLM. I removed the server so we could test the official integration rather than local-completions.