This branch contains the source code and the pretrained model submitted to the Sony MDX Challenge Track B.
Each separated stem has a different frequency cutoff applied. This is inherent to the MDX-Net approach, so the stems will not be lossless compared to the original tracks.
Set up git-lfs first; you will need it to download the models inside this repository. You will also need conda.
After all those are installed, clone this branch:
git clone -b leaderboard_B https://github.com/kuielab/mdx-net-submission.git
In the cloned repository directory, do
conda env create -f environment.yml -n mdx-submit
conda activate mdx-submit
pip install -r requirements.txt
python download_demucs.py
Every time you open a new terminal, conda will default to the base environment. Just do
conda activate mdx-submit
to go back into the environment where you installed MDX's dependencies.
For custom models (such as the higher-quality vocal model trained by the UVR team), replace the relevant models in ./onnx/.
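If separation fails because a model cannot be found, a quick sanity check of the ./onnx/ directory can help. This helper is a sketch and not part of the repository; only the directory path comes from the instructions above:

```python
import os

def list_onnx_models(onnx_dir="./onnx"):
    """List the .onnx model files in the directory the separation
    scripts load from, so a missing or misplaced custom model is
    caught before running."""
    if not os.path.isdir(onnx_dir):
        return []
    return sorted(f for f in os.listdir(onnx_dir) if f.endswith(".onnx"))
```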
After successful installation, place each song you wish to separate at ./data/test/SONGNAME/mixture.wav, then run either run.sh or
python predict_blend.py
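Staging the expected input layout can be scripted; the helper below is illustrative (the function name and arguments are assumptions), but the ./data/test/SONGNAME/mixture.wav path matches the instructions above:

```python
import os
import shutil

def stage_song(src_wav, song_name, data_dir="./data/test"):
    """Copy a WAV file into the layout the prediction script reads:
    ./data/test/SONGNAME/mixture.wav"""
    dest_dir = os.path.join(data_dir, song_name)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, "mixture.wav")
    shutil.copyfile(src_wav, dest)
    return dest
```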
After the separation completes, the results will be saved in ./data/results/baseline/SONGNAME/.
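Once a song has been processed, the resulting stems can be collected programmatically. The directory layout follows the path above; the helper name and the assumption that stems are written as .wav files are illustrative:

```python
import glob
import os

def collect_stems(song_name, results_dir="./data/results/baseline"):
    """Return the separated stem WAVs written for one song,
    sorted by filename."""
    pattern = os.path.join(results_dir, song_name, "*.wav")
    return sorted(glob.glob(pattern))
```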