[FA][Upstream PT] XPU out of memory
raised by FA kernel with upstream pytorch
#2042
The flash attention benchmark fails with the changes that switch to upstream PyTorch. This appears to be a PyTorch issue rather than a Triton one.
CI:
https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/10609254853/job/29404643614
Repro:
Use the PoC branch `feature/deprecate_benchmark_ipex`:

```shell
scripts/compile-triton.sh --venv
source .venv/bin/activate
scripts/test-triton.sh --attention
```
Related:
#1905