[CI] Automatically download models for e2e test #1442

Status: Open · wants to merge 6 commits into main
90 changes: 90 additions & 0 deletions .github/workflows/model_loader.yaml
@@ -0,0 +1,90 @@
#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# This file is a part of the vllm-ascend project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

name: Download New ModelScope Models

on:
  pull_request:
    paths:
      - ci_model_list.json

  workflow_dispatch:
    inputs:
      model:
        required: true
        type: string
        description: 'Model name to be downloaded'

# Non-login, non-interactive bash shells do not read ~/.profile or ~/.bashrc,
# so steps that rely on the ascend-toolkit environment variables must be
# declared with "shell: bash -el {0}" to pick them up.
defaults:
  run:
    shell: bash -el {0}

jobs:
  download-models:
    name: download models from modelscope
    runs-on: linux-arm64-npu-0

Check failure on line 42 in .github/workflows/model_loader.yaml (GitHub Actions / lint (3.10)):
label "linux-arm64-npu-0" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-14.0", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-13.0", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "macos-12.0", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows", "linux-arm64-npu-1", "linux-arm64-npu-2", "linux-arm64-npu-4", "linux-arm64-npu-static-8", "ubuntu-24.04-arm". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
    container:
      image: m.daocloud.io/quay.io/ascend/cann:8.1.rc1-910b-ubuntu22.04-py3.10
      env:
        HF_ENDPOINT: https://hf-mirror.com
        MS_TOKEN: ${{ secrets.MS_TOKEN }}
        HF_TOKEN: ${{ secrets.HF_TOKEN }}
        BASE_REF: ${{ github.event.pull_request.base.ref || 'main' }}
        BASE_CONFIG: ci_model_list.json
        NEW_CONFIG: ci_model_list_new.json
    steps:
      - name: Config mirrors
        run: |
          sed -i 's|ports.ubuntu.com|mirrors.tuna.tsinghua.edu.cn|g' /etc/apt/sources.list
          pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
          apt-get update -y
          apt-get install -y git
          git config --global url."https://gh-proxy.test.osinfra.cn/https://github.com/".insteadOf https://github.com/
          git config --global --add safe.directory /__w/vllm-ascend/vllm-ascend

      - name: Install dependencies
        run: |
          pip install modelscope

      - name: Download via workflow_dispatch
        if: github.event_name == 'workflow_dispatch'
        run: |
          echo "Triggered by dispatch, downloading ${{ inputs.model }}"
          modelscope download ${{ inputs.model }}

      - name: Checkout vllm-project/vllm-ascend repo
        uses: actions/checkout@v4

      - name: Fetch base branch
        run: |
          git fetch origin $BASE_REF

      - name: Get current model config
        run: |
          cp ci_model_list.json $NEW_CONFIG
          git show origin/$BASE_REF:ci_model_list.json \
            > $BASE_CONFIG || echo '{"models":[]}' > $BASE_CONFIG

      - name: Download via config file detection
        if: github.event_name == 'pull_request'
        run: |
          python .github/workflows/scripts/download_new_models.py \
            --base-config $BASE_CONFIG \
            --new-config $NEW_CONFIG
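The fallback in the "Get current model config" step above can be sketched in Python. This is a rough illustrative equivalent of the shell one-liner, not code from the PR; the function name is made up for the sketch:

```python
import json
import subprocess


def base_config_text(base_ref: str = "main") -> str:
    """Read ci_model_list.json from the base branch, like `git show` above."""
    try:
        out = subprocess.run(
            ["git", "show", f"origin/{base_ref}:ci_model_list.json"],
            capture_output=True, text=True, check=True)
        return out.stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Same fallback as the workflow's `|| echo '{"models":[]}'`: a PR
        # that first introduces the file has no base copy, so treat the
        # base branch as listing no models at all.
        return '{"models": []}'


print(base_config_text("no-such-ref"))  # falls back to the empty model list
```

The fallback matters on exactly this PR: `ci_model_list.json` does not yet exist on `main`, so without it the `git show` failure would abort the step.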
49 changes: 49 additions & 0 deletions .github/workflows/scripts/download_new_models.py
@@ -0,0 +1,49 @@
import json
from argparse import ArgumentParser

from modelscope import snapshot_download


def load_models_from_json(file):
    """Return the set of model ids listed under "models" in a JSON config."""
    with open(file, "r", encoding="utf-8") as f:
        data = json.load(f)
    return set(data.get("models", []))


def download_models(models: set[str]):
    for model in models:
        print(f"[download] Downloading model: {model}")
        snapshot_download(model_id=model)


def main(origin_config: str, new_config: str):
    previous_models = load_models_from_json(origin_config)
    current_models = load_models_from_json(new_config)

    # Only models present in the new config but absent from the base
    # config are downloaded, so unchanged models are never re-fetched.
    new_models = current_models - previous_models
    if new_models:
        print(f"[info] Detected {len(new_models)} new model(s):")
        for model in new_models:
            print(f"  - {model}")
        download_models(new_models)
    else:
        print("[info] No new models detected.")


if __name__ == "__main__":
    parser = ArgumentParser(
        description="Download models newly added to the models configuration file")
    parser.add_argument(
        "--base-config",
        type=str,
        required=True,
        help="Path to the base models configuration file",
    )
    parser.add_argument(
        "--new-config",
        type=str,
        default="models_config_new.json",
        help="Path to the new models configuration file",
    )
    args = parser.parse_args()

    main(args.base_config, args.new_config)
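The set-difference detection in the script above can be exercised end to end with two throwaway configs. The base config mirrors the PR's ci_model_list.json; the second model id is a hypothetical addition chosen for the sketch:

```python
import json
import tempfile


def load_models_from_json(file):
    # Mirrors the helper in download_new_models.py above.
    with open(file, "r", encoding="utf-8") as f:
        return set(json.load(f).get("models", []))


# Base config: what origin/$BASE_REF already lists.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as base:
    json.dump({"models": ["Qwen/Qwen2.5-0.5B-Instruct"]}, base)

# New config: the PR adds one hypothetical model on top.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as new:
    json.dump({"models": ["Qwen/Qwen2.5-0.5B-Instruct",
                          "Qwen/Qwen2.5-7B-Instruct"]}, new)

new_models = load_models_from_json(new.name) - load_models_from_json(base.name)
print(sorted(new_models))  # -> ['Qwen/Qwen2.5-7B-Instruct']
```

Because only the difference reaches `snapshot_download`, re-running the workflow on an unchanged list downloads nothing.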
5 changes: 5 additions & 0 deletions ci_model_list.json
@@ -0,0 +1,5 @@
{
  "models": [
    "Qwen/Qwen2.5-0.5B-Instruct"
  ]
}
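Given the detection logic, registering a new model for the e2e tests should amount to appending its id to this list. A minimal sketch of that edit; the appended id is a hypothetical example, not part of this PR:

```python
import json

# Current contents of ci_model_list.json, as added by this PR.
config = {"models": ["Qwen/Qwen2.5-0.5B-Instruct"]}

# Hypothetical new entry: any ModelScope model id works the same way.
config["models"].append("Qwen/Qwen3-8B")

print(json.dumps(config, indent=2))
```

Committing the resulting file in a PR matches the `on.pull_request.paths` trigger, so only changes to this one file start the download job.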