Cambridge-Multitrack-Dataset

Tools for downloading, sorting and analyzing the Cambridge Multitracks library for machine learning applications.

[Image: example-correlated-transients-03]

Here, I train a U-Net CNN to isolate and mask microphone bleed, in a manner similar to musical source separation. Pairs of coincident transient events are extracted and aligned from "overhead" and "snare" drum microphones, then saved as a dataset for training.

In this project, I use several hundred hours of vocal tracks to train an unsupervised signal-reconstruction model, intended to repair dropouts in low-latency audio calls between musicians.

Installation

  • Clone the repo and install dependencies
git clone https://github.com/carlmoore256/Cambridge-Multitrack-Dataset
cd Cambridge-Multitrack-Dataset
pip install -r requirements.txt

Getting Started

Build a local library using the download utility

  • Download all available multitrack stems and unzip them into the provided directory (defaults to "./multitracks")
python download_stems.py

[Image: example-of-folder-structure]

This will take a long time, since it retrieves several hundred GB of WAV files.

  • (Optional) Download only a single genre using the --genre argument
python download_stems.py --genre Pop

Available genre filters: Pop, Electronica, Acoustic, HipHop

This list may not look exhaustive, but it mirrors the HTML genre tags on the Cambridge site. Browse the website for a better idea of what each genre includes.

  • Folder checks avoid re-downloading parts of your local library
  • Zip files are stored in "./temp" until they are unzipped
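The folder check and temp-zip convention can be sketched roughly as follows. These function names and signatures are hypothetical, for illustration only — they are not the repo's actual API:

```python
from pathlib import Path

def needs_download(multitrack_name, library_dir="./multitracks"):
    # Skip any multitrack whose folder already exists in the local library,
    # so re-running the downloader only fetches what is missing.
    return not (Path(library_dir) / multitrack_name).is_dir()

def zip_path(multitrack_name, temp_dir="./temp"):
    # Downloaded archives live in the temp directory until they are unzipped.
    return Path(temp_dir) / f"{multitrack_name}.zip"
```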

Download a smaller subset

  • Download n examples from a randomly selected pool, instead of downloading the entire library
python download_stems.py --subset 10

Dataset Label Mapping

Labels for stems are generated by matching a list of search parameters against filenames. extract_labels.py saves a JSON map of files and their associated labels.
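The matching step can be sketched like this — a hypothetical, case-insensitive substring match; the real extract_labels.py may use a different matching rule and JSON schema:

```python
import json

def extract_labels(filenames, keywords):
    # Build a file -> labels map by checking each keyword against
    # the lowercased filename (illustrative sketch only).
    mapping = {}
    for name in filenames:
        labels = [kw for kw in keywords if kw.lower() in name.lower()]
        if labels:
            mapping[name] = labels
    return mapping

stems = ["Lead Vox_01.wav", "SnareTop.wav", "Kick In.wav"]
label_map = extract_labels(stems, ["vox", "snare", "kick"])
print(json.dumps(label_map, indent=2))
```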

Create a custom mapping

  • Modify or create keywords.txt to specify custom search parameters
python extract_labels.py -kw /path/to/keywords.txt

Filter/Verify Audio Stems

To improve the accuracy of the dataset, YAMNet can be used to generate a dataset map containing sample indices of "verified" matches between the audio content and a provided keyword label. This process also removes periods of silence using the strip_silence function.
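The silence-removal idea — gate out frames that fall more than a dB threshold below the track's peak — can be sketched as below. This is an assumption-laden illustration; the repo's strip_silence may use a different frame size, reference level, or signature:

```python
import numpy as np

def strip_silence(audio, thresh_db=35.0, frame=2048):
    # Drop frames whose RMS level sits more than `thresh_db` below the
    # track's peak (hypothetical sketch of the silence gate).
    peak = np.max(np.abs(audio)) + 1e-12
    kept = []
    for i in range(0, len(audio), frame):
        chunk = audio[i:i + frame]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        if 20.0 * np.log10(rms / peak) > -thresh_db:
            kept.append(chunk)
    return np.concatenate(kept) if kept else audio[:0]
```

Stripping silence before inference is what reduces the number of YAMNet forward passes: only the surviving frames need to be classified.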

Create custom verified dataset map using YAMNet

python yamnet_verify.py -kw vox --approve Speech Singing --reject Silence
  • Change the dB threshold for silence removal before processing (reduces number of inferences required)
python yamnet_verify.py -kw bass --approve "Bass guitar" --reject Silence --thresh 35
  • Combine filters for specific tones, such as a clean guitar tone
python yamnet_verify.py -kw gtr --approve Guitar --reject Distortion Silence
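The --approve/--reject combination amounts to a simple rule over YAMNet's predicted classes for a clip. A minimal sketch of that rule, assuming the actual decision logic in yamnet_verify.py may differ:

```python
def verify_clip(top_classes, approve, reject):
    # Keep a clip only if the predicted classes include at least one
    # approved label and none of the rejected ones.
    top = set(top_classes)
    return bool(top & set(approve)) and not (top & set(reject))
```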

Create a transient-aligned dataset

Automatically find stems containing x, y pairs of correlated transients for regression models

  • Create a dataset containing correlated transients from overhead mics (x) and snare mics (y), with a window size of 8192 samples/clip

[Image: example-correlated-transients]

python transient_verify.py --xkey overhead --ykey snare --ws 8192
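The core idea — pair up onsets that occur at nearly the same time in the overhead (x) and snare (y) tracks, then cut matching windows — can be sketched as follows. The onset detector, tolerance, and pairing rule here are illustrative assumptions, not transient_verify.py's actual internals:

```python
import numpy as np

def find_onsets(audio, thresh=0.5):
    # Indices where the signal first exceeds `thresh` after being below it.
    above = np.abs(audio) > thresh
    return np.flatnonzero(above & ~np.roll(above, 1))

def correlated_pairs(x_track, y_track, ws=8192, tol=256):
    # Pair onsets from x and y that land within `tol` samples of each
    # other, and cut ws-sample windows starting at each onset.
    pairs = []
    y_onsets = find_onsets(y_track)
    for ox in find_onsets(x_track):
        near = y_onsets[np.abs(y_onsets - ox) <= tol]
        if near.size and ox + ws <= len(x_track) and near[0] + ws <= len(y_track):
            pairs.append((x_track[ox:ox + ws], y_track[near[0]:near[0] + ws]))
    return pairs
```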
