Thank you for sharing this. At the moment I don't see a need to update the CUDA drivers in the near future. The last dependencies update is the one from a few days ago in v0.8.7, and it was done for one reason: all the dependencies were pinned to support Essentia (which hadn't been updated in a while) and had ended up very old. Since we haven't used Essentia for a while now, I decided to switch from the Ubuntu 22 to the Ubuntu 24 base image and update most of the dependencies accordingly (but not CUDA). The main idea behind this update is to avoid reaching a point where I can't implement something new because of library mismatches. So I don't plan to do this too frequently.
First of all, I'd like to profoundly thank you for developing this wonderful tool. It has marked the tipping point for me to switch to Jellyfin from the steadily enshittifying Plex.
Jellyfin combined with the Symphonium app plus AudioMuse-AI has been an absolute blessing to behold.
I am, however, worried about one thing: as you know, many NVIDIA GPUs used in home servers are being relegated to legacy status. For instance, the GTX 10-series (Pascal - Compute Capability 6.1) has been officially moved to the legacy/security-only branch with the release of the 590 drivers. While these cards are perfectly capable for audio analysis, they are essentially capped at CUDA 13.0 and the 580-series LTS drivers.
Right now this isn't a problem—I'm using a GTX 1060 to run the worker container flawlessly. My fear is that eventually, the prerequisites or the base NVIDIA Docker images for the project might move to a version that drops Pascal support entirely.
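To make the boundary above concrete, here is a minimal sketch that maps a GPU's CUDA compute capability to the driver branch it is assumed to be limited to, based only on the status described in this thread. The function name and the hard cutoff at compute capability 7.0 (treating Pascal and older as legacy) are illustrative assumptions, not official NVIDIA policy.

```python
def max_driver_branch(compute_cap: float) -> str:
    """Map a CUDA compute capability to the driver branch it is assumed
    to be limited to, per the status described in this thread.
    The 7.0 cutoff (Pascal and older -> legacy) is an assumption."""
    if compute_cap < 7.0:
        # Pascal (6.x) and older: legacy/security-only, 580-series LTS drivers
        return "580 LTS (legacy/security-only)"
    # Volta (7.0) and newer: assumed to remain on the current branch for now
    return "current (590+)"

# Recent versions of nvidia-smi can report your card's compute capability:
#   nvidia-smi --query-gpu=compute_cap --format=csv,noheader
print(max_driver_branch(6.1))  # GTX 1060 (Pascal)
```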
I'm not asking for this to be supported ad infinitum, but I would love to know if there is a plan to keep the current CUDA/driver requirements stable for a while, or if I should start planning a hardware upgrade to avoid being stuck with CPU-bound analysis in the near future.
As this is mostly a one-man-army project, I definitely don't expect you to maintain multiple CUDA build targets at once. All I'm looking for is a bit of insight into the roadmap so I can plan accordingly.
Once again, thank you for your hard work; the project is absolutely outstanding.