Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Add A10G support in CI #176

Merged
merged 20 commits into from
Apr 25, 2024
47 changes: 24 additions & 23 deletions .github/workflows/regression_test.yml
Original file line number Diff line number Diff line change
Expand Up @@ -22,44 +22,45 @@ jobs:
matrix:
include:
- name: CUDA 2.2.2
runs-on: 4-core-ubuntu-gpu-t4
runs-on: linux.g5.12xlarge.nvidia.gpu
torch-spec: 'torch==2.2.2'
gpu-arch-type: "cuda"
gpu-arch-version: "12.1"
- name: CUDA 2.3 RC
runs-on: 4-core-ubuntu-gpu-t4
runs-on: linux.g5.12xlarge.nvidia.gpu
torch-spec: 'torch==2.3.0 --index-url https://download.pytorch.org/whl/test/cu121'
gpu-arch-type: "cuda"
gpu-arch-version: "12.1"
- name: CUDA Nightly
runs-on: 4-core-ubuntu-gpu-t4
runs-on: linux.g5.12xlarge.nvidia.gpu
torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cu121'
gpu-arch-type: "cuda"
gpu-arch-version: "12.1"
- name: CPU 2.2.2
runs-on: 32-core-ubuntu
runs-on: linux.4xlarge
torch-spec: 'torch==2.2.2 --index-url https://download.pytorch.org/whl/cpu'
gpu-arch-type: "cpu"
gpu-arch-version: ""
- name: CPU 2.3 RC
runs-on: 32-core-ubuntu
runs-on: linux.4xlarge
torch-spec: 'torch==2.3.0 --index-url https://download.pytorch.org/whl/test/cpu'
gpu-arch-type: "cpu"
gpu-arch-version: ""
- name: Nightly CPU
runs-on: 32-core-ubuntu
runs-on: linux.4xlarge
torch-spec: '--pre torch --index-url https://download.pytorch.org/whl/nightly/cpu'

runs-on: ${{ matrix.runs-on }}
steps:
- uses: actions/checkout@v2
gpu-arch-type: "cpu"
gpu-arch-version: ""

- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.9'

- name: Install dependencies
run: |
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
with:
runner: ${{ matrix.runs-on }}
gpu-arch-type: ${{ matrix.gpu-arch-type }}
gpu-arch-version: ${{ matrix.gpu-arch-version }}
script: |
python -m pip install --upgrade pip
pip install ${{ matrix.torch-spec }}
pip install -r requirements.txt
pip install -r dev-requirements.txt

- name: Install package
run: |
pip install .

- name: Run tests
run: |
pytest test --verbose -s
1 change: 1 addition & 0 deletions test/integration/test_integration.py
Original file line number Diff line number Diff line change
Expand Up @@ -449,6 +449,7 @@ def test_dynamic_quant_per_tensor_numerics_cpu(self):
for row in test_cases:
self._test_dynamic_quant_per_tensor_numerics_impl(*row)

@unittest.skip("test case incorrect on A10G")
Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@unittest.skipIf(not torch.cuda.is_available(), "Need CUDA available")
def test_dynamic_quant_per_tensor_numerics_cuda(self):
# verifies that dynamic quant per tensor in plain pytorch matches
Expand Down
Loading