This repository contains the project files for the bin2bin_v2 generative-inpainting model, developed for the second IEEE-IS² Music Packet Loss Concealment Challenge.
Below are links to the datasets used for training the model:
The synthesized sequences were generated from MIDI files taken from the MAESTRO collection and rendered to WAV with FluidSynth and the GeneralUser GS SoundFont, for a total of 45 hours of audio.
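The rendering step above can be sketched with a small helper that drives the FluidSynth command-line renderer. This is an illustrative sketch, not code from the repository: the function names, directory names, and the `GeneralUser_GS.sf2` filename are placeholders, while the `-ni`, `-F`, and `-r` flags are standard FluidSynth CLI options.

```python
import subprocess
from pathlib import Path

SAMPLE_RATE = 44100  # assumption: render at 44.1 kHz; adjust to match the challenge data


def fluidsynth_cmd(midi_path, wav_path, soundfont="GeneralUser_GS.sf2"):
    """Build the FluidSynth command line that renders one MIDI file to WAV.

    The soundfont filename is a placeholder; point it at your local copy
    of the GeneralUser GS SoundFont.
    """
    return [
        "fluidsynth",
        "-ni",              # -n: no MIDI input driver, -i: non-interactive
        "-F", str(wav_path),  # fast-render to this WAV file
        "-r", str(SAMPLE_RATE),
        str(soundfont),
        str(midi_path),
    ]


def render_folder(midi_dir, wav_dir, soundfont="GeneralUser_GS.sf2"):
    """Render every .mid file in midi_dir to a same-named .wav in wav_dir."""
    wav_dir = Path(wav_dir)
    wav_dir.mkdir(parents=True, exist_ok=True)
    for midi in sorted(Path(midi_dir).glob("*.mid")):
        wav = wav_dir / (midi.stem + ".wav")
        subprocess.run(fluidsynth_cmd(midi, wav, soundfont), check=True)


# Example usage (requires fluidsynth on PATH and local MAESTRO MIDI files):
#   render_folder("maestro_midi", "maestro_wav")
```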
To train the bin2bin network from scratch, set the appropriate dataset path variable(s) in train_config.yaml, then run train_b2b_lpc.py, passing the chosen dataset name as an argument.
To generate inpainted files, set the appropriate paths in fw_config.yaml, then run inpaint_b2b_lpc.py.
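Assuming the invocation described above, a typical session might look like the following; the dataset name `maestro` is a placeholder, not a documented argument value:

```shell
# 1. Edit train_config.yaml / fw_config.yaml so the dataset and
#    checkpoint paths point at your local copies.

# 2. Train from scratch ("maestro" is a placeholder dataset name):
python train_b2b_lpc.py maestro

# 3. Run inference to produce the inpainted files:
python inpaint_b2b_lpc.py
```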
A pretrained Generator checkpoint is available here.
🎵 Listen to some examples of repaired music sequences here 🎵
For further information, please contact c.aironi@staff.univpm.it