Learning Acoustic Scattering Fields for Dynamic Interactive Sound Propagation


Abstract

We present a novel hybrid sound propagation algorithm for interactive applications. Our approach is designed for dynamic scenes and combines a neural-network-based learned scattered field representation with ray tracing to efficiently generate specular, diffuse, diffraction, and occlusion effects. We use geometric deep learning to approximate the acoustic scattering field in terms of spherical harmonics. We train on a large 3D dataset and compare the accuracy of the learned representation with ground truth generated by an accurate wave-based solver. The runtime overhead of evaluating the learned scattered field is small, and we demonstrate interactive performance by generating plausible sound effects in dynamic scenes with diffraction and occlusion. We also demonstrate the perceptual benefits of our approach through an audio-visual user study.
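To illustrate the spherical-harmonic representation mentioned above, here is a minimal sketch (not the authors' implementation) of how a directional scattering field can be reconstructed from a small set of real spherical-harmonic coefficients, such as those a network might predict; the function names and the truncation at order l=1 are assumptions for illustration only.

```python
import numpy as np

def sh_basis_l1(dirs):
    """Real spherical harmonics up to order l=1.

    dirs: (N, 3) array of unit direction vectors.
    Returns an (N, 4) basis matrix with columns [Y_0^0, Y_1^-1, Y_1^0, Y_1^1].
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)        # constant Y_0^0 term
    c1 = np.sqrt(3.0 / (4.0 * np.pi))      # common l=1 normalization
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def eval_scattered_field(coeffs, dirs):
    """Evaluate the field at each direction as a weighted sum of SH basis functions."""
    return sh_basis_l1(dirs) @ coeffs

# Example: a purely isotropic field (only the l=0 coefficient is nonzero)
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0]])
iso = eval_scattered_field(np.array([1.0, 0.0, 0.0, 0.0]), dirs)
```

In practice a higher truncation order would be used, and the coefficients would come from the learned model rather than being specified by hand.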

Paper

Learning Acoustic Scattering Fields for Dynamic Interactive Sound Propagation, IEEE VR 2021
Zhenyu Tang, Hsien-Yu Meng, and Dinesh Manocha

@inproceedings{tang2021learning,
  title={Learning Acoustic Scattering Fields for Dynamic Interactive Sound Propagation},
  author={Zhenyu Tang and Hsien-Yu Meng and Dinesh Manocha},
  booktitle={2021 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
  year={2021},
  organization={IEEE}
}

Demo

Talk

Zhenyu’s conference talk at IEEE VR 2021

Code

Github