Tools for downloading, sorting and analyzing the Cambridge Multitracks library for machine learning applications.
Here, I train a UNet CNN to isolate and mask microphone bleed in a similar manner to musical source separation. Pairs of coincident transient events are extracted and aligned from "overhead" and "snare" drum microphones, then saved as a dataset for training.
In this project I use several hundred hours of vocal tracks to train an unsupervised signal reconstruction model, intended for repairing dropouts in low-latency audio calls between musicians.
- Download repo and install dependencies
git clone https://github.com/carlmoore256/Cambridge-Multitrack-Dataset
pip install -r requirements.txt
- Download all available multitrack stems and unzip them into the provided directory (default is "./multitracks")
python download_stems.py
- (Optional) Download only a single genre using the --genre argument
python download_stems.py --genre Pop
This may not look like an exhaustive list, but these are simply the genre tags used in the site's HTML. Browse the website for a better idea of what's included under each tag.
- Folder checks avoid re-downloading parts of your local library
- Zip files are stored in "./temp" until they are unzipped
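The folder check can be sketched as a simple existence test before each download (a minimal sketch; the function name and library layout here are illustrative, not the script's actual internals):

```python
from pathlib import Path

def needs_download(stem_name, library_dir="./multitracks"):
    """Skip any multitrack whose unzipped folder already exists locally."""
    return not (Path(library_dir) / stem_name).is_dir()

# Only stems missing from the local library are queued for download
queued = [s for s in ["ArtistA_Song", "ArtistB_Song"] if needs_download(s)]
```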
- Download n examples from a randomly selected pool, instead of downloading the entire library
python download_stems.py --subset 10
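The --subset option behaves like a random draw from the pool of available multitracks; a minimal sketch (the pool below is a placeholder, not real download URLs):

```python
import random

def pick_subset(available, n, seed=None):
    """Draw n multitracks at random from the available pool, without replacement."""
    rng = random.Random(seed)
    return rng.sample(available, min(n, len(available)))

pool = [f"multitrack_{i}" for i in range(100)]
chosen = pick_subset(pool, 10, seed=0)
```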
Labels for stems are generated by matching a list of search parameters to filenames. extract_labels.py saves a JSON map of files and their associated labels.
- Modify or create keywords.txt to specify custom search parameters
python extract_labels.py -kw /path/to/keywords.txt
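The keyword matching behind extract_labels.py can be sketched roughly as a case-insensitive substring search over filenames (function names and keyword choices here are illustrative, not the script's actual internals):

```python
import json
from pathlib import Path

def label_stems(filenames, keywords):
    """Map each stem file to every keyword appearing in its (lowercased) name."""
    labels = {}
    for f in filenames:
        matches = [kw for kw in keywords if kw.lower() in Path(f).stem.lower()]
        if matches:
            labels[f] = matches
    return labels

stems = ["SnareTop_01.wav", "LeadVox_take2.wav", "Room_L.wav"]
label_map = label_stems(stems, ["snare", "vox", "overhead"])
# Serialized in the same spirit as extract_labels.py's JSON output
print(json.dumps(label_map, indent=2))
```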
To improve the accuracy of the dataset, YAMNet can be used to generate a dataset map containing the sample indices of "verified" matches between the audio content and a provided keyword label. This process also removes periods of silence using the strip_silence function.
- Cross check all stems marked as "vox" with AudioSet classes "Speech" and "Singing", while rejecting "Silence"
python yamnet_verify.py -kw vox --approve Speech Singing --reject Silence
- Change the dB threshold for silence removal before processing (reducing the number of inferences required)
python yamnet_verify.py -kw bass --approve "Bass guitar" --reject Silence --thresh 35
- Combine filters for specific tones, such as a clean guitar tone
python yamnet_verify.py -kw gtr --approve Guitar --reject Distortion Silence
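The approve/reject logic can be sketched as a filter over YAMNet's per-frame class scores (the scores below are dummy values; in practice they come from the YAMNet model, and the threshold and frame handling here are assumptions, not yamnet_verify.py's actual internals):

```python
import numpy as np

def verify_frames(frame_scores, class_names, approve, reject, thresh=0.2):
    """Keep frame indices where any approved class scores above thresh
    and every rejected class stays below it."""
    idx = {c: i for i, c in enumerate(class_names)}
    keep = []
    for n, scores in enumerate(frame_scores):
        approved = any(scores[idx[c]] >= thresh for c in approve)
        rejected = any(scores[idx[c]] >= thresh for c in reject)
        if approved and not rejected:
            keep.append(n)
    return keep

classes = ["Speech", "Singing", "Silence"]
scores = np.array([[0.8, 0.1, 0.0],   # clear speech frame
                   [0.0, 0.0, 0.9],   # silent frame
                   [0.1, 0.6, 0.3]])  # singing, but silence also fires
kept = verify_frames(scores, classes,
                     approve=["Speech", "Singing"], reject=["Silence"])
```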
Automatically find stems containing x, y pairs of correlated transients for regression models
- Create a dataset containing correlated transients from overhead mics (x) and snare mics (y), with a window size of 8192 samples/clip
python transient_verify.py --xkey overhead --ykey snare --ws 8192
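Pairing coincident transients can be sketched by matching onset times between the two tracks and cutting fixed-size windows around each match (onset detection is stubbed here with a naive amplitude threshold; the real script's detection method and tolerance may differ):

```python
import numpy as np

def onsets(x, thresh=0.5):
    """Naive onset detector: sample indices where |x| first crosses thresh."""
    above = np.abs(x) >= thresh
    return np.flatnonzero(above & ~np.roll(above, 1))

def paired_windows(x, y, ws=8192, tol=256):
    """Extract (x, y) windows of ws samples around onsets that occur
    in both tracks within tol samples of each other."""
    ox, oy = onsets(x), onsets(y)
    pairs = []
    for t in ox:
        near = oy[np.abs(oy - t) <= tol]
        if near.size and t + ws <= len(x) and near[0] + ws <= len(y):
            pairs.append((x[t:t + ws], y[near[0]:near[0] + ws]))
    return pairs

# Toy example: one coincident hit, offset by 50 samples between mics
x = np.zeros(20000); x[1000] = 1.0   # stand-in for the overhead track
y = np.zeros(20000); y[1050] = 1.0   # stand-in for the snare track
pairs = paired_windows(x, y)
```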