There appear to be 1 leaked semaphore objects to clean up at shutdown #8
Comments
I had the same issue.
Same thing here; in the end I'm missing the safety_checker Core ML model.
Just updated the OS to the 13.1 preview, still facing the same error.
8 GB of RAM will cause an out-of-memory issue, as Yasuhito suggested. Best if you can get a compiled model from someone... or try running again and again with only Terminal open after logging in.
@mariapatulea never worked for me
I think this is an issue with tqdm and floating-point refs on the progress bar. I get the same issue and don't have coreml installed.
Hi there! Has anybody found a solution to this problem?
I'm facing the same issue on an M1 chip.
Check the solution: AUTOMATIC1111/stable-diffusion-webui#1890
I've got the same problem with Stable Diffusion v1.5.1 running on a MacBook M2.
The line you quoted is just a warning and does not cause any issues. The most common reason conversions fail is running out of memory, just like in the OP's case; look for a line that says or contains "Killed".
I am using a MacBook Pro (M2 chip, Ventura) and facing the same issue.
Problem solved on my side by downgrading Python to 3.10.13.
I got this error with PyTorch mps while running
I think it might be RAM related even if package versions help; they may just use memory more efficiently. It consistently failed for me, and then I closed everything on my Mac that I could and it ran fine without changing versions. 🤷
I agree it's not a RAM issue; I have 96 GB of RAM on a custom-built M2 Mac and I'm getting the error. I can guarantee it has nothing to do with RAM.
+1 with the error.
Getting the same error when training Dreambooth. Did anyone figure out a solution to this?
It's not the same error, though. The semaphore warning is harmless, just like in the OP (where the real error was running out of memory).
I have the same error on an M3 machine with 36 GB of memory! :(
Same issue on an M3 with 128 GB of RAM.
@LukaVerhoeven nice config^ 🙂
Was hoping for no memory issues with this setup 😒
It seems related to the device type (the Mac mps backend). When I move the mps tensor to the CPU with .cpu(), the problem no longer appears.
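For anyone asking how to apply that workaround in practice, here is a minimal PyTorch sketch of the idea (my own illustration, not the poster's exact code; the tensor shapes and the compute step are placeholders):

```python
import torch

# Pick the MPS backend when available, otherwise fall back to the CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

x = torch.randn(1, 4, 64, 64, device=device)  # placeholder input tensor
y = x * 2.0 + 1.0                             # stand-in for the real model step, runs on `device`

# Move the result back to CPU memory before any further processing,
# which is the step that made the warning disappear for the poster above.
y_cpu = y.detach().cpu()
print(y_cpu.device)  # -> cpu
```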
Same error on an M3 Max with 96 GB while trying to run InvokeAI. Any solution?
Removing tqdm solved my issue. Thank you!
In my opinion it's because you run it in Docker, where the shared memory (shm) size is small; you can run the container with a larger shared memory size (for example, via Docker's --shm-size option).
Same here on an Apple M3 Max 36 GB MacBook Pro. Never installed Core ML. Upgrading from
This might be relevant: what worked for me was just rebooting my device... EDIT: it seems restarting does not always fix the issue.
Can you explain how you did this, exactly? I've tried all of the other solutions that people have reported, but nothing has worked yet... running SD in ComfyUI on my M3 Max 64 GB.
Got the same error after updating to macOS 14.5.
I have to say SD is not compatible with macOS for now, and I have no solution to your problem.
I reinstalled Python, all packages, and ComfyUI, and it works now.
Thanks! conda update fixed it for me, running on an M3.
How do you remove it? The app is using it, right?
Were you able to fix the issue?
I am trying to build a transformer from scratch; when I try to train it on mps (GPU), I get this error.
I have 128 GB of RAM and still get this error.
Did you find a solution to this issue?
Nope.
I have an M2 Mac with 128 GB of RAM and I'm running into this issue: "/.pyenv/versions/3.12.4/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown"
Same here, M2 Pro with 16 GB. Do we have any fix?
Chiming in here to echo @atiorh #349 (comment). From what I can gather, and like people here have mentioned, the simplest solution is to make sure your Mac has enough RAM (ideally 2-3x the model size) and to disable
I have 128 GB of RAM and I'm getting memory leaks; are you saying 128 GB is still not enough memory? And I don't really have other programs open while I'm doing this.
According to this comment it may be an issue with the Core ML framework itself and not an actual memory "leak", but it appears to have been triaged, so hopefully a fix is in the works. The workaround from Toby suggests using skip_model_load=True.
This is all above my pay grade. I'm not sure what skip_model_load=True even is, or how to implement it without some kind of tutorial or instructions.
When you avoid using
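For what it's worth, here is a minimal sketch of what skip_model_load looks like in coremltools, assuming that is the parameter the workaround above refers to; the .mlpackage path is hypothetical, not a file from this repo:

```python
import coremltools as ct

# skip_model_load=True asks coremltools not to compile and load the model
# into the Core ML runtime; the returned object can be inspected or re-saved,
# but it cannot run predictions, so it avoids the large memory spike at load time.
model = ct.models.MLModel("Unet.mlpackage", skip_model_load=True)  # hypothetical path

print(model.get_spec().description)  # the model spec/metadata is still accessible
# model.predict(...)  # would fail: the model was never loaded into the runtime
```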
I don't remember what I was using at the time.
Here's one workflow that goes to 100% and then gives me the semaphore error: https://comfyworkflows.com/workflows/28794c8c-af07-424b-8363-d7e2be237770
Same issue on a 2020 Apple M1 chip.
Same issue here.
Can't complete the conversion of the models to Core ML.