Dynamic Spatial Audio Soundscapes with Latent Diffusion Models

Christian Templin, Hao Wang

Stevens Institute of Technology

Abstract: Spatial audio is an integral part of immersive entertainment such as VR/AR, and has seen increasing popularity in cinema and music as well. The most common format for spatial audio is first-order Ambisonics (FOA). We seek to extend recent advancements in FOA generative AI models to enable the generation of 3D scenes with dynamic sound sources. Our proposed end-to-end model comes in two variations, which differ in the form of user input and the precision of sound source localization. In addition to our models, we present a new dataset of simulated spatial audio-caption pairs. Evaluation demonstrates that our models match the semantic alignment and audio quality of state-of-the-art models while capturing the desired spatial attributes.
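For readers unfamiliar with the FOA format mentioned above: a first-order Ambisonics signal carries four channels that together encode a full-sphere sound field, so a mono source at a given direction can be represented analytically. The sketch below is a minimal illustration of that encoding (ACN channel order with SN3D normalization, a common convention, e.g. in the AmbiX format); it is not the paper's model, and the function name and trajectory are our own for illustration. Allowing the azimuth to vary per sample gives a simple stand-in for the "dynamic sound sources" the abstract describes.

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono signal into 4-channel FOA (ACN order, SN3D norm).

    azimuth/elevation are in radians and may be scalars or per-sample
    arrays, the latter modeling a moving (dynamic) source.
    """
    az = np.broadcast_to(azimuth, signal.shape)
    el = np.broadcast_to(elevation, signal.shape)
    w = signal                             # ACN 0 (W): omnidirectional
    y = signal * np.sin(az) * np.cos(el)   # ACN 1 (Y): left/right
    z = signal * np.sin(el)                # ACN 2 (Z): up/down
    x = signal * np.cos(az) * np.cos(el)   # ACN 3 (X): front/back
    return np.stack([w, y, z, x])

# Example: a 440 Hz tone sweeping counter-clockwise from the front
# (azimuth 0) to the back (azimuth pi) over one second.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
mono = np.sin(2 * np.pi * 440 * t)
az_trajectory = np.linspace(0.0, np.pi, sr)
foa = encode_foa(mono, az_trajectory, 0.0)  # shape (4, sr)
```

A static source falls out as a special case: with azimuth and elevation both zero (straight ahead), the Y and Z channels vanish and W equals X.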

Descriptive Model Demos

Prompt: "A police siren wails from the front and slowly moves counter-clockwise to the back"

Download WAV

Prompt: "A lion roars from the back right and below"

Download WAV

Prompt: "A laughing man slowly moves counter-clockwise from the left to the front"

Download WAV

Prompt: "A bird chirps from the front and above, then flies below"

Download WAV