
Qwen Img fp16 NaN issue #10751

@realisticdreamer114514

Description


Your question

Continuing #10668 as a new issue, because I want to point out an obvious likely fix right away and need guidance from people familiar with the code base.

tl;dr: Qwen Image models produce NaN values in the latent when run in (or cast to) fp16 on some hardware. These NaNs are what cause the black preview/output seen at the UNet inference and VAE decode stages across models. For Qwen Image models this is likely a UNet issue. Clamping activations to the fp16 range (e.g. torch.clamp to ±65504, the largest finite fp16 value; 65536 is already outside the representable range) when --fp16-unet is set would be a start, but at which point exactly should the clamp be applied?
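A minimal sketch of why the clamp bound matters, using NumPy's float16 as a stand-in for PyTorch half precision (the actual clamp insertion point in ComfyUI's UNet code is the open question here, and `clamp_for_fp16` is a hypothetical helper, not an existing API): values above 65504 overflow to inf on the fp16 cast, and inf then turns into NaN in later arithmetic.

```python
import numpy as np

# Largest finite fp16 value is 65504.0 -- note 65536 already overflows to inf.
FP16_MAX = float(np.finfo(np.float16).max)

def clamp_for_fp16(x):
    """Clamp an array so a subsequent cast to fp16 cannot overflow to inf."""
    return np.clip(x, -FP16_MAX, FP16_MAX)

# fp32 activations; two of them exceed the fp16 range.
acts = np.array([1.0, 7.0e4, -9.9e4], dtype=np.float32)

naive = acts.astype(np.float16)                  # overflows to +/-inf
safe = clamp_for_fp16(acts).astype(np.float16)   # stays finite

print(np.isinf(naive).any())    # True: direct cast produced inf
print(np.isfinite(safe).all())  # True: clamped cast is fully finite
```

The equivalent PyTorch call would be `x = torch.clamp(x, -65504, 65504)` on the offending intermediate tensor; where that tensor lives in the Qwen UNet is exactly what needs pinning down.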

Again: are there other possible fixes that don't involve editing the code base, e.g. venv dependency changes, launch flags, or different node usage?

Logs

Other

No response

Metadata

Labels: Stale (This issue is stale and will be autoclosed soon.) · User Support (A user needs help with something, probably not a bug.)
