Fix safetensor cause python crash on some windows platform #12611
base: master
```diff
@@ -37,6 +37,7 @@
 MMAP_TORCH_FILES = args.mmap_torch_files
 DISABLE_MMAP = args.disable_mmap
+USE_ALT_SFT_LOADER = args.sft_alt_loader

 if True: # ckpt/pt file whitelist for safe loading of old sd files
```
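The new `USE_ALT_SFT_LOADER` global reads a boolean command-line flag. The PR's parser code is not shown in this diff, so the following is only a hedged sketch of how such a flag is typically registered with `argparse` (the flag names come from the diff and the review below; the help texts and parser wiring here are assumptions, not ComfyUI's actual `cli_args` code):

```python
import argparse

# Hypothetical parser setup; ComfyUI's real CLI wiring lives elsewhere.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--disable-mmap",
    action="store_true",
    help="Copy tensors instead of memory-mapping files (assumed help text).",
)
parser.add_argument(
    "--sft-alt-loader",
    action="store_true",
    help="Use alternate method to load safetensors (assumed help text).",
)

args = parser.parse_args(["--sft-alt-loader"])
# argparse converts dashes in flag names to underscores in attribute names.
USE_ALT_SFT_LOADER = args.sft_alt_loader
print(USE_ALT_SFT_LOADER)  # True
```

Note that `--sft-alt-loader` becomes `args.sft_alt_loader`, which is the attribute the diff reads.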
```diff
@@ -80,7 +81,7 @@ def encode(*args, **kwargs): # no longer necessary on newer torch
     "U16": torch.uint16,
 }

-def load_safetensors(ckpt):
+def load_safetensors(ckpt, device):
     f = open(ckpt, "rb")
     mapping = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
     mv = memoryview(mapping)
```
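The `ACCESS_READ` mapping is why the code below suppresses the "The given buffer is not writable" warning: the mapped buffer is read-only, and `torch.frombuffer` warns when handed one. A stdlib-only demonstration of that read-only property, and of detaching from the mapping with an explicit copy (no torch involved; this mirrors the loader's setup rather than reproducing it):

```python
import mmap
import os
import tempfile

# Write a small file, then map it read-only as load_safetensors does.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x01\x02\x03\x04")
os.close(fd)

f = open(path, "rb")
mapping = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
mv = memoryview(mapping)

ro = mv.readonly          # True: this is what torch.frombuffer warns about
copied = bytes(mv[0:4])   # an explicit copy is detached from the mapping

# Views must be released before the mapping can be closed.
mv.release()
mapping.close()
f.close()
os.remove(path)

print(ro, copied)  # True b'\x01\x02\x03\x04'
```

Once copied, the bytes stay valid even after the file mapping is closed, which is the behavior `--disable-mmap` buys at the cost of extra RAM.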
```diff
@@ -102,7 +103,12 @@ def load_safetensors(ckpt):
         with warnings.catch_warnings():
             #We are working with read-only RAM by design
             warnings.filterwarnings("ignore", message="The given buffer is not writable")
-            sd[name] = torch.frombuffer(mv[start:end], dtype=_TYPES[info["dtype"]]).view(info["shape"])
+            tensor = torch.frombuffer(mv[start:end], dtype=_TYPES[info["dtype"]]).view(info["shape"])
+            if DISABLE_MMAP:
+                tensor = tensor.to(device=device, copy=True)
+            elif (device != 'cpu' if isinstance(device, str) else device.type != 'cpu'):
+                tensor = tensor.to(device)
+            sd[name] = tensor

     return sd, header.get("__metadata__", {}),
```
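For context on what `load_safetensors` is parsing: a safetensors file is an 8-byte little-endian header length, a JSON header mapping tensor names to `dtype`/`shape`/`data_offsets`, then the raw tensor bytes. A stdlib-only sketch of that zero-copy parse (no torch, so slices are returned as `memoryview`s instead of tensors; `parse_safetensors` is an illustrative helper, not ComfyUI code):

```python
import json
import struct

def parse_safetensors(buf: bytes):
    """Parse a safetensors blob: returns ({name: (dtype, shape, view)}, metadata)."""
    mv = memoryview(buf)
    (header_len,) = struct.unpack("<Q", mv[:8])    # u64 little-endian header size
    header = json.loads(bytes(mv[8:8 + header_len]))
    data = mv[8 + header_len:]                     # raw tensor byte area
    out = {}
    for name, info in header.items():
        if name == "__metadata__":
            continue
        start, end = info["data_offsets"]          # offsets relative to data area
        out[name] = (info["dtype"], info["shape"], data[start:end])
    return out, header.get("__metadata__", {})

# Build a tiny file in memory: one F32 tensor of shape [2].
payload = struct.pack("<2f", 1.0, 2.0)
hdr = json.dumps({"w": {"dtype": "F32", "shape": [2],
                        "data_offsets": [0, len(payload)]}}).encode()
blob = struct.pack("<Q", len(hdr)) + hdr + payload

tensors, meta = parse_safetensors(blob)
dtype, shape, raw = tensors["w"]
print(dtype, shape, struct.unpack("<2f", raw))  # F32 [2] (1.0, 2.0)
```

In the real loader the `memoryview` slice is handed to `torch.frombuffer`, which is exactly why the resulting tensors alias the read-only file mapping.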
```diff
@@ -113,8 +119,8 @@ def load_torch_file(ckpt, safe_load=False, device=None, return_metadata=False):
     metadata = None
     if ckpt.lower().endswith(".safetensors") or ckpt.lower().endswith(".sft"):
         try:
-            if enables_dynamic_vram():
-                sd, metadata = load_safetensors(ckpt)
+            if USE_ALT_SFT_LOADER or enables_dynamic_vram():
+                sd, metadata = load_safetensors(ckpt, device)
                 if not return_metadata:
                     metadata = None
             else:
```

**Contributor:**

This seems a little strange, in that `load_safetensors` is very deliberately an mmap. The main difference between `load_safetensors` and the `sft` package is the mmap open flags, so what you are doing here is a different kind of mmap. You need to be careful here: you are adding a path where non-dynamic mode uses this zero-copy sft loader, which is incompatible with the non-dynamic memory pinner. Try a workflow with `--disable-mmap --novram`, and I'm suspicious you will see "Pin error".

**Author:**

Log: Copy tensor:

Log: No copy tensor
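The `device != 'cpu' if isinstance(device, str) else device.type != 'cpu'` conditional in `load_safetensors` handles `device` arriving either as a string or as a `torch.device`-like object exposing a `.type` attribute. A stdlib sketch of the same check (`FakeDevice` is a hypothetical stand-in for `torch.device`, which also has a `.type` such as `"cuda"` or `"cpu"`):

```python
from dataclasses import dataclass

@dataclass
class FakeDevice:
    # Stand-in for torch.device; e.g. torch.device("cuda:0").type == "cuda".
    type: str

def is_non_cpu(device) -> bool:
    """True when tensors should be moved off the CPU, for str or device objects."""
    if isinstance(device, str):
        # A string like "cuda" or "cuda:0"; anything but "cpu" means a move.
        return device != "cpu"
    return device.type != "cpu"

print(is_non_cpu("cpu"), is_non_cpu("cuda:0"), is_non_cpu(FakeDevice("cuda")))
# False True True
```

One subtlety of the string branch: it compares the whole string, so a spelling like `"cpu:0"` would count as non-cpu, whereas `torch.device("cpu:0").type` is just `"cpu"`.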


There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain

🏁 Script executed:

```shell
rg -n "sft_alt_loader|sft-alt-loader|USE_ALT_SFT_LOADER" --type py -A2 -B2
```

Repository: Comfy-Org/ComfyUI

**Help text lacks clarity; the `sft` abbreviation is inconsistent with neighboring flags.**

The flag `--sft-alt-loader`, with help text "Use alternate method to load safetensors.", is too vague: users encountering Windows crashes with large models won't know this is what they need. Additionally, the abbreviated `sft` is inconsistent with neighboring flags like `--disable-mmap` and `--mmap-torch-files`, which use full words.

Renaming to `--safetensors-alt-loader` is feasible because this is a newly added flag with no existing users. If renamed, update the attribute reference in `comfy/utils.py` line 40 from `args.sft_alt_loader` to `args.safetensors_alt_loader`.