Fix device mismatch in AttentionSharingUnit for SD 1.5 (Grok @ xAI) #128

Open · wants to merge 2 commits into main

Conversation


bazik210 commented May 5, 2025

Resolved `RuntimeError: Expected all tensors to be on the same device` in `AttentionSharingUnit` and `AttentionSharingPatcher` by ensuring consistent CUDA/CPU device placement. Added explicit device synchronization for the LoRA layers, temporal layers, and LayerNorm weights/biases, both in `forward` and at initialization. This keeps CUDA-enabled systems working while maintaining the CPU fallback, and fixes crashes during diffusion-model sampling in ComfyUI workflows.
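
A minimal sketch of the synchronization pattern, assuming a simplified module with one LoRA pair and a LayerNorm (the class layout and layer names below are illustrative stand-ins, not the actual implementation):

```python
import torch
import torch.nn as nn

class AttentionSharingUnitSketch(nn.Module):
    """Illustrative stand-in; the real AttentionSharingUnit has more layers."""

    def __init__(self, dim: int, rank: int = 4):
        super().__init__()
        self.lora_down = nn.Linear(dim, rank, bias=False)
        self.lora_up = nn.Linear(rank, dim, bias=False)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Synchronize every parameter-holding submodule to the device of the
        # incoming activations, so CPU-offloaded weights do not clash with
        # CUDA tensors (the RuntimeError this PR fixes).
        device = x.device
        self.lora_down.to(device)
        self.lora_up.to(device)
        self.norm.to(device)
        return self.norm(x + self.lora_up(self.lora_down(x)))
```

Calling `.to(device)` on an `nn.Module` moves its parameters in place, so repeating it every forward pass is a no-op once the module already lives on the input's device.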

loxotron added 2 commits May 6, 2025 00:02
Resolved `RuntimeError: Expected all tensors to be on the same device` in `AttentionSharingUnit` and `AttentionSharingPatcher` by ensuring consistent CUDA/CPU device placement.
Replaced `torch.median` with `torch.mean` in `estimate_augmented` to avoid empty-tensor (`shape=[0]`) errors on DirectML. Added NaN/inf checks and debug logs for robustness. Preserved the original 8 augmentations.
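
A hedged sketch of the reduction change, assuming `estimate_augmented` aggregates a stack of augmented predictions (the function body below is illustrative; the real function does more):

```python
import torch

def estimate_augmented(preds: torch.Tensor) -> torch.Tensor:
    """Aggregate predictions stacked along dim 0 (illustrative version).

    torch.mean replaces torch.median here because median can produce
    empty (shape=[0]) intermediate tensors on DirectML backends.
    """
    result = torch.mean(preds, dim=0)

    # Guard against NaN/inf before handing the tensor back to the sampler.
    if not torch.isfinite(result).all():
        print("[estimate_augmented] non-finite values detected; sanitizing")
        result = torch.nan_to_num(result, nan=0.0, posinf=0.0, neginf=0.0)
    return result
```

With the original 8 augmentations preserved, `preds` here would have shape `(8, *latent_shape)`, and the mean is taken across the augmentation axis.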