AudioGen: Textually Guided Audio Generation

September 30, 2022 · Entered Twilight · 🏛 International Conference on Learning Representations

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .DS_Store, 32factor_1streams_2048codesPerBook, 32factor_1streams_2048codesPerBook_noMixing, audiogen_arch.pdf, audiogen_arch.png, audiogen_arch.svg, audiogen_teaser.mp4, diffsound_trim, gt_trim, huji_logo.png, index.html, large_128factor_4streams_512codesPerBook, large_32factor_1streams_2048codesPerBook, large_32factor_1streams_2048codesPerBook_cfg1, large_32factor_1streams_2048codesPerBook_cfg2, large_32factor_1streams_2048codesPerBook_cfg3, large_32factor_1streams_2048codesPerBook_cfg4, large_32factor_1streams_2048codesPerBook_cfg5, large_64factor_2streams_1024codesPerBook, len4_audioCont_audioPrefix1, len4_audioCont_audioPrefix1_noText, len4_audioCont_audioPrefix1_randomText, meta_logo.png, paper.pdf, style.css

Authors: Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, Yossi Adi
arXiv ID: 2209.15352
Category: cs.SD: Sound
Cross-listed: cs.CL, cs.LG, eess.AS
Citations: 401
Venue: International Conference on Learning Representations
Repository: https://github.com/felixkreuk/audiogen ⭐ 23
Last Checked: 29 days ago
Abstract
We tackle the problem of generating audio samples conditioned on descriptive text captions. In this work, we propose AudioGen, an auto-regressive generative model that generates audio samples conditioned on text inputs. AudioGen operates on a learnt discrete audio representation. The task of text-to-audio generation poses multiple challenges. Due to the way audio travels through a medium, differentiating "objects" can be a difficult task (e.g., separating multiple people simultaneously speaking). This is further complicated by real-world recording conditions (e.g., background noise, reverberation, etc.). Scarce text annotations impose another constraint, limiting the ability to scale models. Finally, modeling high-fidelity audio requires encoding audio at a high sampling rate, leading to extremely long sequences. To alleviate the aforementioned challenges, we propose an augmentation technique that mixes different audio samples, driving the model to internally learn to separate multiple sources. We curated 10 datasets containing different types of audio and text annotations to handle the scarcity of text-audio data points. For faster inference, we explore the use of multi-stream modeling, allowing the use of shorter sequences while maintaining a similar bitrate and perceptual quality. We apply classifier-free guidance to improve adherence to text. Compared to the evaluated baselines, AudioGen outperforms on both objective and subjective metrics. Finally, we explore the ability of the proposed method to generate audio continuation conditionally and unconditionally. Samples: https://felixkreuk.github.io/audiogen
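The mixing augmentation described in the abstract can be sketched as follows: two waveforms are summed at a controlled level ratio and their captions are joined, so the model sees composite scenes during training. This is a minimal illustration, not the authors' implementation; the function name, the SNR-based scaling, and the caption-joining scheme ("and") are assumptions.

```python
import numpy as np

def mix_augment(wav_a, wav_b, caption_a, caption_b, snr_db=0.0):
    """Mix two audio clips at a given signal-to-noise ratio (in dB)
    and join their text captions.

    Hypothetical sketch of a mixing augmentation; not AudioGen's
    actual training code.
    """
    # Truncate both clips to a common length.
    n = min(len(wav_a), len(wav_b))
    a = np.asarray(wav_a[:n], dtype=np.float64)
    b = np.asarray(wav_b[:n], dtype=np.float64)
    # Scale the second clip so the pair mixes at the requested SNR.
    rms_a = np.sqrt(np.mean(a ** 2)) + 1e-8
    rms_b = np.sqrt(np.mean(b ** 2)) + 1e-8
    gain = rms_a / (rms_b * 10 ** (snr_db / 20))
    mixed = a + gain * b
    # Peak-normalize only if the sum would clip.
    peak = np.max(np.abs(mixed))
    if peak > 1.0:
        mixed = mixed / peak
    return mixed, f"{caption_a} and {caption_b}"
```

Training on such mixtures pushes the model to represent overlapping sources, which the abstract argues is a core difficulty of text-to-audio generation.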
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Sound