Replies: 2 comments
Hi, and thanks for the feedback. Multiple workers are already supported: you just have to bring up multiple containers on multiple machines. Have a look at docs/ARCHITECTURE.md (https://github.com/NeptuneHub/AudioMuse-AI/blob/main/docs/ARCHITECTURE.md) to better understand it (mainly, each worker needs to be able to reach all the other components), and in deployment/ you can find an example of a split server and worker deployment. If you use the -nvidia image, the GPU will be used. ONNX is used for inference; PyTorch is used for secondary functionality (like memory cleanup).
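To make the split concrete, a standalone worker node might look roughly like this in Compose form. This is a sketch only: the image tag, service name, and environment variable names are assumptions; the real values are in the deployment/ examples in the repo.

```yaml
# Hypothetical compose file for a worker running on a second machine.
# The worker only needs network access to the shared Redis and Postgres
# instances on the main host; all names below are illustrative.
services:
  audiomuse-worker:
    image: ghcr.io/neptunehub/audiomuse-ai:latest-nvidia  # -nvidia tag for GPU use
    environment:
      REDIS_URL: redis://main-host:6379
      DATABASE_URL: postgresql://user:pass@main-host:5432/audiomuse
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```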
Hey, thanks for the reply! I must have missed that part in architecture.md. I was able to clear the message I posted by updating torch to 2.9 and CUDA to 13 in the worker instance. As we speak, I'm almost done with the first-run analysis: it has processed about 173 albums in roughly 9 hours.
I have a medium-to-large library of about 12,000 tracks. The ability to have multiple workers would improve speed: the GPU is hardly being used, and a sustained workload would speed up scanning. Also, it seems the CUDA binaries in the worker don't support sm_120; I see this message sporadically in the logs.
My environment:
Docker Desktop on Win 11
Latest NVIDIA GameReady Drivers and CUDA Toolkit
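For reference, the sm_120 warning means the PyTorch build inside the image was not compiled with kernels for the GPU's compute capability (sm_120 corresponds to compute capability 12.0, i.e. the newest NVIDIA architecture). A minimal sketch of the check PyTorch effectively performs, with hypothetical arch lists:

```python
def is_supported(compute_cap: str, compiled_archs: list[str]) -> bool:
    """Return True if the GPU's compute capability has compiled kernels.

    compute_cap: e.g. "12.0" as reported by
    `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`.
    compiled_archs: arch names the wheel was built with, as returned by
    torch.cuda.get_arch_list() (example values below are hypothetical).
    """
    major, minor = compute_cap.split(".")
    return f"sm_{major}{minor}" in compiled_archs

# Hypothetical arch list for an older CUDA 12 wheel (no sm_120 kernels):
old_wheel = ["sm_70", "sm_80", "sm_86", "sm_90"]
print(is_supported("12.0", old_wheel))  # False: this triggers the warning

# A newer CUDA 13 build that includes sm_120 kernels clears it:
new_wheel = old_wheel + ["sm_120"]
print(is_supported("12.0", new_wheel))  # True
```

This is why updating torch and CUDA in the worker made the message go away: the newer wheel ships kernels for the newer architecture.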