Submitted by valdanylchuk t3_xy3zfe in MachineLearning
LM-based: in contrast to other recent audio generation experiments, which worked from transcribed text or MIDI notes, AudioLM operates directly on the audio signal, resulting in outstanding consistency and high-fidelity sound.
Google blog post from yesterday: https://ai.googleblog.com/2022/10/audiolm-language-modeling-approach-to.html
Demo clip on YouTube: https://www.youtube.com/watch?v=_xkZwJ0H9IU
Paper: https://arxiv.org/abs/2209.03143
Abstract:
>We introduce AudioLM, a framework for high-quality audio generation with long-term consistency. AudioLM maps the input audio to a sequence of discrete tokens and casts audio generation as a language modeling task in this representation space. We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure, and we propose a hybrid tokenization scheme to achieve both objectives. Namely, we leverage the discretized activations of a masked language model pre-trained on audio to capture long-term structure and the discrete codes produced by a neural audio codec to achieve high-quality synthesis. By training on large corpora of raw audio waveforms, AudioLM learns to generate natural and coherent continuations given short prompts. When trained on speech, and without any transcript or annotation, AudioLM generates syntactically and semantically plausible speech continuations while also maintaining speaker identity and prosody for unseen speakers. Furthermore, we demonstrate how our approach extends beyond speech by generating coherent piano music continuations, despite being trained without any symbolic representation of music.
E_Snap t1_irg5x3z wrote
That is incredible. And to think I was watching musicians joke about graphic artists’ job security just a few days ago.