Pay Attention to Devils: A Photometric Stereo Network for Better Details
Yakun Ju, Kin-Man Lam, Yang Chen, Lin Qi, Junyu Dong
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 694-700.
https://doi.org/10.24963/ijcai.2020/97
We present an attention-weighted loss in a photometric stereo neural network to improve the accuracy of 3D surface recovery in complex-structured areas, such as edges and crinkles, where existing learning-based methods often fail. Instead of applying a uniform penalty to all pixels, our method employs a per-pixel attention-weighted loss learned in a self-supervised manner, avoiding blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to compute a pixel-wise attention-weighted loss that focuses on complex regions. In these regions, the attention-weighted loss assigns higher weights to the detail-preserving gradient loss, producing clear surface reconstructions. Experiments on real datasets show that our approach significantly outperforms traditional photometric stereo algorithms and state-of-the-art learning-based methods.
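To make the idea of a per-pixel attention-weighted loss concrete, the following is a minimal PyTorch-style sketch, not the paper's exact formulation: it assumes the network outputs a predicted normal map and an attention map in [0, 1], uses a cosine loss as the normal term and an L1 difference of spatial gradients as the detail-preserving term, and blends them per pixel with the attention weight. All function and parameter names (attention_weighted_loss, lambda_grad) are illustrative.

import torch
import torch.nn.functional as F

def attention_weighted_loss(pred_normal, gt_normal, attention, lambda_grad=1.0):
    """Illustrative attention-weighted loss.

    pred_normal, gt_normal: (B, 3, H, W) surface normal maps.
    attention:              (B, 1, H, W) learned map in [0, 1];
                            high values mark complex regions (edges, crinkles).
    """
    # Per-pixel normal term: 1 - cosine similarity between predicted and GT normals.
    normal_loss = 1.0 - F.cosine_similarity(pred_normal, gt_normal, dim=1)  # (B, H, W)

    # Detail-preserving gradient term: L1 difference of spatial gradients.
    def spatial_grad(x):
        dx = x[:, :, :, 1:] - x[:, :, :, :-1]   # horizontal gradient, (B, 3, H, W-1)
        dy = x[:, :, 1:, :] - x[:, :, :-1, :]   # vertical gradient,   (B, 3, H-1, W)
        return dx, dy

    pred_dx, pred_dy = spatial_grad(pred_normal)
    gt_dx, gt_dy = spatial_grad(gt_normal)

    grad_x = (pred_dx - gt_dx).abs().mean(dim=1)        # (B, H, W-1)
    grad_x = F.pad(grad_x, (0, 1))                      # pad back to (B, H, W)
    grad_y = (pred_dy - gt_dy).abs().mean(dim=1)        # (B, H-1, W)
    grad_y = F.pad(grad_y, (0, 0, 0, 1))                # pad back to (B, H, W)
    grad_loss = grad_x + grad_y

    # Attention blends the two terms per pixel: complex regions (high attention)
    # receive a higher weight on the gradient term, preserving fine detail.
    a = attention.squeeze(1)                            # (B, H, W)
    per_pixel = (1.0 - a) * normal_loss + lambda_grad * a * grad_loss
    return per_pixel.mean()

The key design point reflected here is that the penalty is not uniform: smooth regions are dominated by the standard normal term, while the attention map re-weights edge-like regions toward the gradient term so the reconstruction there is not blurred away.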
Keywords:
Computer Vision: 2D and 3D Computer Vision
Computer Vision: Computational Photography, Photometry, Shape from X
Machine Learning Applications: Applications of Supervised Learning