DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability

October 11, 2022 · Entered Twilight · 🏛 IEEE International Conference on Acoustics, Speech, and Signal Processing

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .gitlab-ci.yml, LICENSE, README.md, config, continue_train_both.py, continue_train_single.py, infer.py, model, my_audio, requirements.txt, roll2midi.ipynb, sampling.py, task, test.py, train_spec_roll.py, utils, visualization_master.ipynb

Authors: Kin Wai Cheuk, Ryosuke Sawata, Toshimitsu Uesaka, Naoki Murata, Naoya Takahashi, Shusuke Takahashi, Dorien Herremans, Yuki Mitsufuji
arXiv ID: 2210.05148
Category: cs.SD (Sound)
Cross-listed: cs.AI, cs.LG, eess.AS
Citations: 21
Venue: IEEE International Conference on Acoustics, Speech, and Signal Processing
Repository: https://github.com/sony/DiffRoll ⭐ 80
Last Checked: 1 month ago
Abstract
In this paper we propose a novel generative approach, DiffRoll, to tackle automatic music transcription (AMT). Instead of treating AMT as a discriminative task in which the model is trained to convert spectrograms into piano rolls, we think of it as a conditional generative task where we train our model to generate realistic-looking piano rolls from pure Gaussian noise conditioned on spectrograms. This new AMT formulation enables DiffRoll to transcribe, generate, and even inpaint music. Due to its classifier-free nature, DiffRoll can also be trained on unpaired datasets where only piano rolls are available. Our experiments show that DiffRoll outperforms its discriminative counterpart by 19 percentage points (ppt.), and our ablation studies also indicate that it outperforms similar existing methods by 4.8 ppt. Source code and demonstration are available at https://sony.github.io/DiffRoll/.
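The classifier-free conditioning the abstract describes can be illustrated with a minimal NumPy sketch: the clean piano roll is noised via the standard forward diffusion blend, and with some probability the spectrogram condition is replaced by a constant mask so the model also learns unconditionally (which is what lets it train on piano-roll-only data). This is not the authors' implementation; the `-1` mask sentinel, the `p_uncond` value, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_roll(x0, alpha_bar, eps):
    """Forward diffusion step: blend a clean piano roll x0 with
    Gaussian noise eps according to the noise schedule value alpha_bar."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def make_training_pair(roll, spec, alpha_bar, p_uncond=0.1):
    """Build one classifier-free training example (assumed setup).

    With probability p_uncond the spectrogram condition is replaced by a
    constant mask, so the same network learns both conditional
    transcription and unconditional piano-roll generation."""
    eps = rng.standard_normal(roll.shape)
    x_t = noisy_roll(roll, alpha_bar, eps)
    if rng.random() < p_uncond:
        spec = np.full_like(spec, -1.0)  # masked condition (assumed sentinel)
    # The denoiser would be trained to predict eps from (x_t, spec).
    return x_t, spec, eps
```

At sampling time, transcription would run the reverse process conditioned on a real spectrogram, while generation would run it with the masked condition.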
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Sound