Point-based Acoustic Scattering for Interactive Sound Propagation via Surface Encoding

Hsien-Yu Meng, Zhenyu Tang, Dinesh Manocha

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 909-915. https://doi.org/10.24963/ijcai.2021/126

We present a novel geometric deep learning method to compute the acoustic scattering properties of geometric objects. Our learning algorithm uses a point cloud representation of objects to compute the scattering properties and integrates them with ray tracing for interactive sound propagation in dynamic scenes. We use discrete Laplacian-based surface encoders and approximate the neighborhood of each point using a shared multi-layer perceptron. We show that our formulation is permutation invariant and present a neural network that computes the scattering function using spherical harmonics. Our approach can handle objects with arbitrary topologies and deforming models, and takes less than 1 ms per object on a commodity GPU. We analyze the accuracy, validate our approach on thousands of unseen 3D objects, and highlight the benefits over other point-based geometric deep learning methods. To the best of our knowledge, this is the first real-time learning algorithm that can approximate the acoustic scattering properties of arbitrary objects with high accuracy.
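
The abstract describes a permutation-invariant network that maps a point cloud to an acoustic scattering function expressed in spherical harmonics. The sketch below illustrates that general idea in PyTorch; the class name, layer widths, and spherical-harmonic order are illustrative assumptions and do not reproduce the authors' exact architecture, which additionally uses discrete Laplacian-based surface encoding.

```python
# Minimal sketch (assumed, not the authors' code): a shared per-point MLP
# followed by max pooling yields a permutation-invariant global feature,
# which a small head maps to spherical-harmonic (SH) coefficients.
import torch
import torch.nn as nn

class PointSHEncoder(nn.Module):
    def __init__(self, sh_order: int = 3, feat_dim: int = 256):
        super().__init__()
        num_sh = (sh_order + 1) ** 2          # number of SH coefficients
        # Shared MLP applied to every point independently
        # (weights shared across points), implemented with 1x1 convolutions.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1), nn.ReLU(),
        )
        # Head mapping the pooled global feature to SH coefficients.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_sh),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3)
        x = self.point_mlp(points.transpose(1, 2))  # (batch, feat_dim, num_points)
        # Max pooling over the point dimension makes the encoding
        # invariant to the ordering of the input points.
        g = torch.max(x, dim=2).values              # (batch, feat_dim)
        return self.head(g)                         # (batch, num_sh)

if __name__ == "__main__":
    model = PointSHEncoder()
    cloud = torch.rand(2, 1024, 3)                  # two random point clouds
    print(model(cloud).shape)                       # torch.Size([2, 16])
```

Max pooling is used here because any symmetric aggregation over points (max, sum, mean) yields the permutation invariance the abstract claims; the SH coefficients can then be evaluated per outgoing direction and combined with a ray tracer during sound propagation.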
Keywords:
Computer Vision: 2D and 3D Computer Vision
Multidisciplinary Topics and Applications: Interactive Entertainment