This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Commit

More poking of lm-eval-accuracy action
mgoin authored Apr 15, 2024
1 parent c46ca5e commit 67843fe
Showing 2 changed files with 8 additions and 2 deletions.
.github/actions/nm-lm-eval-accuracy/action.yml (1 addition, 1 deletion)
@@ -17,7 +17,7 @@ runs:
       source $(pyenv root)/versions/${{ inputs.python }}/envs/${VENV}/bin/activate
       pip3 install git+https://github.com/EleutherAI/lm-evaluation-harness.git@262f879a06aa5de869e5dd951d0ff2cf2f9ba380
-      pip3 install pytest
+      pip3 install pytest openai==1.3.9
       SUCCESS=0
       pytest .github/scripts/test_lm_eval_sweep.py -s -v || SUCCESS=$?
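The exact `openai==1.3.9` pin suggests the sweep script depends on a specific 1.x client API. A standalone sketch of how such a pin could be verified at test time (this guard is illustrative, not part of the commit):

```python
import importlib.metadata


def installed_version(package: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        return None


def matches_pin(package: str, pin: str) -> bool:
    """Check whether the installed version matches an exact pin like '1.3.9'."""
    return installed_version(package) == pin
```

In a CI job this would fail fast with a clear message instead of surfacing as an obscure client-API error mid-sweep.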
.github/scripts/test_lm_eval_sweep.py (7 additions, 1 deletion)
@@ -179,7 +179,13 @@ def __exit__(self, exc_type, exc_val, exc_tb):
 @pytest.mark.parametrize("model_id, eval_def", MODEL_TEST_POINTS)
 def test_lm_eval_correctness(model_id, eval_def):
 
-    vllm_args = ["--model", model_id, "--disable-log-requests"]
+    vllm_args = [
+        "--model",
+        model_id,
+        "--disable-log-requests",
+        "--max_model_len",
+        4096
+    ]
 
     if eval_def.enable_tensor_parallel:
         tp = torch.cuda.device_count()
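The hunk above hard-codes `--max_model_len` into the server argument list, and the surrounding context conditionally enables tensor parallelism. A standalone sketch of how such an argument list might be assembled (the helper name, the model id, and the tensor-parallel handling are illustrative, not taken from the test script):

```python
from typing import List, Optional


def build_vllm_args(model_id: str,
                    max_model_len: int = 4096,
                    tensor_parallel_size: Optional[int] = None) -> List[str]:
    """Assemble CLI-style args for launching a vLLM server (illustrative helper)."""
    args = [
        "--model",
        model_id,
        "--disable-log-requests",
        "--max_model_len",
        str(max_model_len),  # stringified here; the diff above passes the bare int 4096
    ]
    if tensor_parallel_size is not None:
        # mirrors the enable_tensor_parallel branch visible in the hunk context
        args += ["--tensor-parallel-size", str(tensor_parallel_size)]
    return args
```

Keeping every element a string makes the list safe to hand directly to `subprocess`-style launchers, which is one reason the bare `4096` in the committed list stands out.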
