Scene-Aware Audio Rendering via Deep Acoustic Analysis


Abstract

We present a new method to capture the acoustic characteristics of real-world rooms using commodity devices, and use the captured characteristics to render similar-sounding audio in virtual models of those rooms. Given the captured audio and an approximate geometric model of a real-world room, we propose a novel learning-based method to estimate its acoustic material properties. Our approach uses deep neural networks to estimate the reverberation time and equalization of the room from recorded audio. These estimates are then used to compute the material properties governing room reverberation via a novel material optimization objective. We use the estimated acoustic material characteristics for audio rendering with interactive geometric sound propagation and demonstrate the approach on many real-world scenarios. We also perform a user study to evaluate the perceptual similarity between recorded sounds and our rendered audio.
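To make the material-optimization step concrete, below is a minimal Python sketch of one plausible formulation: given per-band reverberation times (as the deep network would predict from recorded audio) and the surface areas of an approximate room model, it solves for per-surface, per-band absorption coefficients. This is a simplified stand-in, not the paper's actual objective: it uses the closed-form Sabine equation as the forward model, whereas the paper couples its objective to a geometric sound propagation simulator. The room volume, surface areas, and target T60 values are hypothetical placeholders.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical example values: an approximate shoebox room and per-band
    # target T60s (as predicted by a network from recorded audio).
    VOLUME = 80.0                                                    # room volume, m^3
    SURFACE_AREAS = np.array([20.0, 20.0, 15.0, 15.0, 24.0, 24.0])  # walls/floor/ceiling, m^2
    TARGET_T60 = np.array([0.62, 0.55, 0.48, 0.41])                 # seconds, one per frequency band

    def sabine_t60(alphas):
        """Predict per-band T60 from per-surface, per-band absorption
        via Sabine's formula: T60 = 0.161 * V / sum_i(S_i * alpha_i)."""
        absorption = SURFACE_AREAS @ alphas   # total absorption area per band
        return 0.161 * VOLUME / absorption

    def objective(flat_alphas):
        """Squared error between predicted and target T60 across bands."""
        alphas = flat_alphas.reshape(len(SURFACE_AREAS), len(TARGET_T60))
        return np.sum((sabine_t60(alphas) - TARGET_T60) ** 2)

    n = len(SURFACE_AREAS) * len(TARGET_T60)
    x0 = np.full(n, 0.2)              # initial guess: uniform absorption
    bounds = [(0.01, 1.0)] * n        # physically valid absorption coefficients
    result = minimize(objective, x0, bounds=bounds)

    alphas = result.x.reshape(len(SURFACE_AREAS), len(TARGET_T60))
    print("optimized T60 per band:", sabine_t60(alphas))

Note that this inverse problem is underdetermined (many absorption assignments reproduce the same T60), which is one reason a dedicated optimization objective is needed; in practice one would also regularize toward plausible reference materials.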

Paper

Scene-Aware Audio Rendering via Deep Acoustic Analysis, IEEE VR, Journal Track (conditionally accepted).
Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, and Dinesh Manocha

@misc{tang2019sceneaware,
    title={Scene-Aware Audio Rendering via Deep Acoustic Analysis},
    author={Zhenyu Tang and Nicholas J. Bryan and Dingzeyu Li and Timothy R. Langlois and Dinesh Manocha},
    year={2019},
    eprint={1911.06245},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
}

Demo

Code

Please stay tuned.