Learning Monocular Depth in Dynamic Environment via Context-aware Temporal Attention

Zizhang Wu, Zhuozheng Li, Zhi-Gang Fan, Yunzhe Wu, Yuanzhu Gan, Jian Pu

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 1551-1559. https://doi.org/10.24963/ijcai.2023/172

Monocular depth estimation has recently shown encouraging progress, especially for autonomous driving. To tackle the ill-posed problem of reasoning about 3D geometry from 2D monocular images, multi-frame monocular methods have been developed that leverage perspective correlations across sequential frames. However, moving objects such as cars and trains violate the static-scene assumption, leading to inconsistent features and misaligned cost values that mislead the optimization. In this work, we present CTA-Depth, a Context-aware Temporal Attention guided network for multi-frame monocular Depth estimation. Specifically, we first apply a multi-level attention enhancement module that integrates multi-level image features to obtain initial depth and pose estimates. The proposed CTA-Refiner then alternately optimizes the depth and pose. Within the CTA-Refiner, context-aware temporal attention (CTA) captures global temporal-context correlations to maintain feature consistency and estimation integrity for moving objects. In addition, we propose a long-range geometry embedding (LGE) module that produces a long-range temporal geometry prior. Our approach achieves significant improvements over state-of-the-art methods on three benchmark datasets (e.g., a 13.5% improvement in the Abs Rel metric on KITTI).
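To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the two ideas it names: cross-frame temporal attention that lets current-frame features borrow context from past frames, and an alternating depth/pose refinement loop. All module names (TemporalCrossAttention, Refiner), shapes, head designs, and the residual update rule are hypothetical illustrations, not the authors' CTA-Depth implementation.

# Illustrative sketch only; assumptions are marked in comments.
import torch
import torch.nn as nn


class TemporalCrossAttention(nn.Module):
    """Attend from current-frame tokens to tokens of past frames, so regions
    occluded or displaced by motion can borrow context from other frames
    (the role the abstract assigns to CTA). Design here is assumed."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cur: torch.Tensor, past: torch.Tensor) -> torch.Tensor:
        # cur:  (B, N, C) tokens of the current frame
        # past: (B, M, C) tokens pooled from previous frames
        out, _ = self.attn(query=cur, key=past, value=past)
        return self.norm(cur + out)  # residual fusion


class Refiner(nn.Module):
    """Alternately refine depth and pose for a fixed number of steps,
    conditioning each update on temporally attended features. The residual
    heads and step count are hypothetical choices, not the paper's."""

    def __init__(self, dim: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        self.cta = TemporalCrossAttention(dim)
        self.depth_head = nn.Linear(dim, 1)  # per-token depth residual
        self.pose_head = nn.Linear(dim, 6)   # axis-angle + translation residual

    def forward(self, cur, past, depth, pose):
        for _ in range(self.steps):
            fused = self.cta(cur, past)                  # (B, N, C)
            depth = depth + self.depth_head(fused)       # depth update step
            pose = pose + self.pose_head(fused.mean(1))  # pose update step
        return depth, pose


if __name__ == "__main__":
    B, N, M, C = 2, 64, 128, 32
    refiner = Refiner(dim=C)
    depth0 = torch.rand(B, N, 1)   # initial depth tokens
    pose0 = torch.zeros(B, 6)      # initial relative pose
    cur = torch.randn(B, N, C)     # current-frame features
    past = torch.randn(B, M, C)    # features from earlier frames
    depth, pose = refiner(cur, past, depth0, pose0)
    print(depth.shape, pose.shape) # torch.Size([2, 64, 1]) torch.Size([2, 6])

The sketch omits the multi-level attention enhancement that produces the initial estimates and the long-range geometry embedding (LGE) prior; it shows only the alternating-refinement structure the abstract describes.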
Keywords:
Computer Vision: CV: 3D computer vision
Computer Vision: CV: Scene analysis and understanding   
Machine Learning: ML: Attention models