AudioMuse-AI is an open-source, Dockerized environment that brings automatic playlist generation to your self-hosted music library. Using tools such as Librosa and ONNX, it performs sonic analysis on your audio files locally, allowing you to curate playlists for any mood or occasion without relying on external APIs.
Deploy it easily on your local machine with Docker Compose or Podman, or scale it in a Kubernetes cluster (AMD64 and ARM64 are both supported). It integrates with the APIs of the major music servers, such as Jellyfin, Navidrome, LMS, Lyrion, and Emby; more integrations may be added in the future.
AudioMuse-AI lets you explore your music library in innovative ways: just start with an initial analysis, and you'll unlock features like:
- Clustering: Automatically groups sonically similar songs, creating genre-defying playlists based on the music's actual sound.
- Instant Playlists: Simply tell the AI what you want to hear, like "high-tempo, low-energy music", and it will instantly generate a playlist for you.
- Music Map: Discover your music collection visually with a vibrant, genre-based 2D map.
- Playlist from Similar Songs: Pick a track you love, and AudioMuse-AI will find all the songs in your library that share its sonic signature, creating a new discovery playlist.
- Song Paths: Create a seamless listening journey between two songs. AudioMuse-AI finds the perfect tracks to bridge the sonic gap.
- Sonic Fingerprint: Generates playlists based on your listening habits, finding tracks similar to what you've been playing most often.
- Song Alchemy: Mix your ideal vibe, mark tracks as "ADD" or "SUBTRACT" to get a curated playlist and a 2D preview. Export the final selection directly to your media server.
- Text Search: Search your songs with simple text that can contain mood, instruments, and genre, such as "calm piano songs" (see the sketch below).
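
To make the similarity-driven features above more concrete, here is a minimal, illustrative Python sketch of the general idea behind embedding-based matching: each analyzed track is represented as a numeric vector, and a query (a seed song, or a text prompt embedded by a CLAP-style model) is ranked against the library by cosine similarity. The names `song_embeddings` and `embed_text` are hypothetical placeholders, not AudioMuse-AI's actual API.

```python
import numpy as np

# Illustrative only: pretend the library has 5 analyzed tracks, each
# represented by an 8-dimensional embedding vector.
rng = np.random.default_rng(42)
song_embeddings = rng.normal(size=(5, 8))     # one row per track (hypothetical)
song_titles = ["Track A", "Track B", "Track C", "Track D", "Track E"]

def cosine_similarity(query: np.ndarray, library: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and every library vector."""
    return (library @ query) / (np.linalg.norm(library, axis=1) * np.linalg.norm(query))

# "Playlist from Similar Songs": use an existing track's embedding as the query.
scores = cosine_similarity(song_embeddings[0], song_embeddings)
for idx in np.argsort(scores)[::-1][1:4]:     # top 3, skipping the seed itself
    print(f"{song_titles[idx]}: similarity {scores[idx]:.3f}")

# "Text Search": a CLAP-style model embeds text into the same space, so a
# prompt like "calm piano songs" can be ranked against track embeddings.
# embed_text() is a hypothetical stand-in for such a model:
#   text_vector = embed_text("calm piano songs")
#   best_match = song_titles[int(np.argmax(cosine_similarity(text_vector, song_embeddings)))]
```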
More information, including the ARCHITECTURE, ALGORITHM DESCRIPTION, DEPLOYMENT STRATEGY, FAQ, GPU DEPLOYMENT, HARDWARE REQUIREMENTS, and CONFIGURATION PARAMETERS documents, can be found in the docs folder.
The full list of AudioMuse-AI related repositories:
- AudioMuse-AI: the core application; it runs the Flask and worker containers that provide all the features (see the sketch after this list);
- AudioMuse-AI Helm Chart: Helm chart for easy installation on Kubernetes;
- AudioMuse-AI Plugin for Jellyfin: a plugin that integrates AudioMuse-AI into Jellyfin;
- AudioMuse-AI MusicServer: an OpenSubsonic-like music server with integrated sonic functionality.
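
The Flask/worker split mentioned in the core-application entry follows the standard Redis Queue (RQ) pattern listed under Key Technologies: the web container enqueues long-running jobs (such as a library analysis) and a separate worker container processes them in the background. Here is a minimal sketch of that pattern, using RQ's real API but a hypothetical `analyze_library` task:

```python
# Minimal sketch of the Flask + worker pattern with Redis Queue (RQ).
# `tasks.analyze_library` is a hypothetical job, not AudioMuse-AI's real code.
from redis import Redis
from rq import Queue

from tasks import analyze_library  # hypothetical module defining the job

queue = Queue("analysis", connection=Redis(host="redis", port=6379))

# The web container enqueues the job and returns immediately; a worker
# container started with `rq worker analysis` executes it in the background.
job = queue.enqueue(analyze_library, media_server_url="http://jellyfin:8096")
print(f"Queued job {job.id}, current status: {job.get_status()}")
```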
And now, some news:
- Version 0.8.0 is finally out of beta, shipping a new CLAP model that enables searching for songs with text that combines genre, instruments, and moods.
Important: Despite the similar name, this project (AudioMuse-AI) is an independent, community-driven effort. It has no official connection to the website audiomuse.ai.
We are not affiliated with, endorsed by, or sponsored by the owners of audiomuse.ai.
- Quick Start Deployment
- Hardware Requirements
- Docker Image Tagging Strategy
- Key Technologies
- How To Contribute
- Star History
Get AudioMuse-AI running in minutes with Docker Compose.
If you need more deployment examples, take a look at the DEPLOYMENT page.
For a full list of configuration parameters, take a look at the PARAMETERS page.
For the architecture design of AudioMuse-AI, take a look at the ARCHITECTURE page.
Prerequisites:
- Docker and Docker Compose installed
- A running media server (Jellyfin, Navidrome, Lyrion, or Emby)
- See Hardware Requirements
Steps:
1. Create your environment file:

   ```bash
   cp deployment/.env.example deployment/.env
   ```

2. Edit `.env` with your media server credentials.

   For Jellyfin:
   ```env
   MEDIASERVER_TYPE=jellyfin
   JELLYFIN_URL=http://your-jellyfin-server:8096
   JELLYFIN_USER_ID=your-user-id
   JELLYFIN_TOKEN=your-api-token
   ```

   For Navidrome:

   ```env
   MEDIASERVER_TYPE=navidrome
   NAVIDROME_URL=http://your-navidrome-server:4533
   NAVIDROME_USER=your-username
   NAVIDROME_PASSWORD=your-password
   ```

   For Lyrion:

   ```env
   MEDIASERVER_TYPE=lyrion
   LYRION_URL=http://your-lyrion-server:9000
   ```

   For Emby:

   ```env
   MEDIASERVER_TYPE=emby
   EMBY_URL=http://your-emby-server:8096
   EMBY_USER_ID=your-user-id
   EMBY_TOKEN=your-api-token
   ```
3. Start the services:

   ```bash
   docker compose -f deployment/docker-compose.yaml up -d
   ```

4. Access the application: open your browser at http://localhost:8000

5. Run your first analysis:
   - Navigate to the "Analysis and Clustering" page
   - Click "Start Analysis" to scan your library
   - Wait for completion, then explore features like clustering and the music map
Stopping the services:

```bash
docker compose -f deployment/docker-compose.yaml down
```

AudioMuse-AI has been tested on:
- Intel: HP Mini PC with Intel i5-6500, 16 GB RAM and NVMe SSD
- ARM: Raspberry Pi 5, 8 GB RAM and NVMe SSD
Suggested requirements:
- A 4-core Intel CPU with AVX support, or a comparable ARM CPU (produced in 2015 or later)
- 8 GB RAM
- SSD storage
You can check the Tested Hardware and Configuration notes to see which hardware has already been validated.
For more information about GPU deployment requirements, take a look at the GPU page.
Our GitHub Actions workflow automatically builds and pushes Docker images. Here's how our tags work:
- :latest
- Builds from the main branch.
- Represents the latest stable release.
- Recommended for most users.
- :devel
- Builds from the devel branch.
- Contains features still in development; they are not fully tested and may not work.
- Use only for development.
- :vX.Y.Z (e.g., :v0.1.4-alpha, :v1.0.0)
- Immutable tags created from specific Git releases/tags.
- Ensures you're running a precise, versioned build.
- Use for reproducible deployments or locking to a specific version.
IMPORTANT: the -nvidia images are experimental. Try them if you want to help us improve the support, but we do not recommend using them for daily production use.
AudioMuse-AI is built upon a robust stack of open-source technologies:
- Flask: Provides the lightweight web interface for user interaction and API endpoints.
- Redis Queue (RQ): A simple Python library for queueing jobs and processing them in the background with Redis.
- Supervisord: Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
- Essentia-tensorflow: An open-source library for audio analysis, feature extraction, and music information retrieval. (Used only until version v0.5.0-beta.)
- MusicNN Tensorflow Audio Models from Essentia: Leverages pre-trained MusicNN models for feature extraction and prediction. More details and pre-trained models are available from Essentia.
- Librosa: Library for audio analysis, feature extraction, and music information retrieval. (Used from version v0.6.0-beta.)
- CLAP (Contrastive Language-Audio Pretraining): Neural network for audio-text matching, enabling natural language music search and text-based playlist generation.
- ONNX: Open Neural Network Exchange format and ONNX Runtime for fast, portable, cross-platform model inference. (Used from v0.7.0-beta; replaces TensorFlow.)
- TensorFlow: Platform developed by Google for building, training, and deploying machine learning and deep learning models. (Used only in versions before v0.7.0-beta.)
- scikit-learn: Utilized for machine learning algorithms such as clustering.
- voyager: Approximate nearest-neighbor search, used for the /similarity interface. (Used from v0.6.3-beta.)
- PostgreSQL: A powerful, open-source relational database used for persisting application data.
- Ollama: Enables self-hosting of various open-source Large Language Models (LLMs) for tasks like intelligent playlist naming.
- Docker / OCI-compatible containers: The entire application is packaged as a container, ensuring consistent and portable deployment across environments.
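
As a rough illustration of how some of these pieces fit together (not AudioMuse-AI's actual code): Librosa decodes the audio, ONNX Runtime runs an embedding model, and voyager indexes the resulting vectors for the kind of approximate nearest-neighbor lookups behind the /similarity interface. The model file name, its input/output shapes, and the pooling step below are all assumptions.

```python
# Sketch of an analysis -> index -> query flow. "model.onnx", the sample
# rate, and the tensor shapes are hypothetical placeholders.
import librosa
import numpy as np
import onnxruntime as ort
from voyager import Index, Space

# 1. Decode and resample a track with Librosa.
waveform, sr = librosa.load("song.mp3", sr=16000, mono=True)

# 2. Run an embedding model with ONNX Runtime (used since v0.7.0-beta in
#    place of TensorFlow). Input/output layout depends on the exported model.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: waveform[np.newaxis, :].astype(np.float32)})
embedding = np.asarray(outputs[0])
# Pool per-frame embeddings down to one vector per track (assumed shape).
vector = embedding.reshape(-1, embedding.shape[-1]).mean(axis=0).astype(np.float32)

# 3. Add the vector to a voyager index for approximate nearest-neighbor search.
index = Index(Space.Cosine, num_dimensions=vector.shape[-1])
index.add_item(vector)

# 4. Query the k nearest tracks (k=1 here, since this toy index has one item).
neighbor_ids, distances = index.query(vector, k=1)
```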
Contributions, issues, and feature requests are welcome!
For more details on how to contribute, please follow the Contributing Guidelines.
