Set TORCH_CUDA_ARCH_LIST to 8.0 and above
Only compile for Compute Capability 8.0 and above (see https://developer.nvidia.com/cuda-gpus), i.e. NVIDIA Ampere generation devices or newer.
weiji14 committed May 6, 2024
1 parent 501aa9d commit a1b1faa
Showing 1 changed file with 1 addition and 0 deletions.
recipes/flash-attn/meta.yaml
@@ -17,6 +17,7 @@ build:
- FLASH_ATTENTION_SKIP_CUDA_BUILD=FALSE
- FLASH_ATTENTION_FORCE_CXX11_ABI=FALSE
- MAX_JOBS=$CPU_COUNT
+ - TORCH_CUDA_ARCH_LIST="8.0;8.6;8.9;9.0+PTX"
skip: true # [cuda_compiler_version in (undefined, "None")]
skip: true # [not linux]
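
As an illustrative aside (not part of the commit): a minimal Python sketch, assuming a local PyTorch install with CUDA support, that checks whether a visible GPU meets the Compute Capability 8.0 floor this TORCH_CUDA_ARCH_LIST targets. The file name check_gpu_cc.py and the 8.0 threshold constant are just illustrative choices; the "9.0+PTX" entry also embeds PTX so newer architectures can JIT-compile at load time.

# check_gpu_cc.py -- hypothetical helper, not part of this recipe.
# Reports each visible GPU's CUDA Compute Capability and whether it meets
# the 8.0 (Ampere) floor implied by TORCH_CUDA_ARCH_LIST="8.0;8.6;8.9;9.0+PTX".
import torch

if not torch.cuda.is_available():
    print("No CUDA-capable GPU visible to PyTorch.")
else:
    for idx in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(idx)
        name = torch.cuda.get_device_name(idx)
        supported = (major, minor) >= (8, 0)
        print(f"GPU {idx}: {name} -- compute capability {major}.{minor} "
              f"({'supported' if supported else 'below the 8.0 floor'})")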

