Enable pytorch attention by default on AMD gfx1151 #8282

Merged
comfyanonymous merged 1 commit into master from comfyanonymous-patch-1
May 26, 2025
Conversation

@comfyanonymous (Member)

No description provided.

@comfyanonymous comfyanonymous merged commit e5799c4 into master May 26, 2025
6 checks passed
@comfyanonymous comfyanonymous deleted the comfyanonymous-patch-1 branch May 26, 2025 08:29
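For context, the change gates a default on the GPU's architecture string. A minimal sketch of that kind of gating is below; the function name, the arch list, and the suffix handling are all illustrative assumptions, not ComfyUI's actual code (on ROCm builds of PyTorch, the arch string is typically obtained from the device properties, e.g. a value like "gfx1151").

```python
# Hypothetical sketch of arch-gated defaults. The set of architectures and
# the helper name are assumptions for illustration; ComfyUI's real logic
# lives elsewhere and may differ.
PYTORCH_ATTENTION_ARCHS = {"gfx1151"}  # this PR; gfx1201 was requested later

def should_enable_pytorch_attention(arch_name: str) -> bool:
    """Return True if pytorch attention should be on by default for this arch.

    ROCm arch strings can carry feature suffixes separated by colons,
    e.g. "gfx1151:sramecc+:xnack-", so compare only the base name.
    """
    base = arch_name.split(":")[0]
    return base in PYTORCH_ATTENTION_ARCHS
```

Extending the default to gfx1201 (as requested in the comment below) would then amount to adding that string to the set.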
@kasper93 commented May 26, 2025

@comfyanonymous: Could you do the same for gfx1201? It's faster there too. (Probably gfx1200 as well, but I haven't tested that.)

EDIT: Also, not using bfloat16 causes OOMs; see #7400 (comment)

EDIT2: #8289
