Confidence-based Self-Corrective Learning: An Application in Height Estimation Using Satellite LiDAR and Imagery
Zhili Li, Yiqun Xie, Xiaowei Jia
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
AI for Good. Pages 6049-6057.
https://doi.org/10.24963/ijcai.2023/671
Widespread and rapid environmental transformation is underway on Earth, driven by human activities. Climate shifts such as global warming have led to massive and alarming loss of ice and snow in high-latitude regions, including the Arctic, causing natural disasters due to sea-level rise and related effects. Mitigating the impacts of climate change has also become one of the United Nations' Sustainable Development Goals for 2030. The recently launched ICESat-2 satellite targets height measurements in the polar regions. However, its observations are only available along very narrow scan lines, leaving large no-data gaps in between. We aim to fill these gaps by combining the height observations with high-resolution satellite imagery that has a large footprint (spatial coverage). This data expansion is challenging because the height data are often constrained to one or a few lines per image in real applications, and the images are highly noisy for height estimation. Related work on image-based height prediction and interpolation either relies on specific types of images or does not consider the highly localized height distribution. We propose a spatial self-corrective learning framework that explicitly uses confidence-based pseudo-interpolation, recurrent self-refinement, and truth-based correction with a regression layer to address these challenges. Experiments on different landscapes in high-latitude regions show that the proposed method yields stable improvements over the baseline methods.
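To illustrate the gap-filling idea named in the abstract, the following is a minimal 1-D sketch of confidence-based pseudo-interpolation: candidate height labels for gap pixels between scan lines are produced by interpolating the sparse observations, and a pseudo-label is accepted only where the model's confidence is high. This is an illustrative assumption, not the paper's actual framework; the function name, threshold, and use of simple linear interpolation are all hypothetical.

```python
import numpy as np

def pseudo_interpolate(heights, obs_mask, conf, conf_thresh=0.5):
    """Sketch of confidence-based pseudo-interpolation along one transect.

    heights:  1-D array of heights; valid only where obs_mask is True
    obs_mask: boolean array, True at pixels with real LiDAR observations
    conf:     per-pixel model confidence in [0, 1] (hypothetical input)
    Returns (pseudo, accepted): interpolated pseudo-labels for all pixels,
    and a boolean mask of gap pixels whose pseudo-label is kept.
    """
    obs_idx = np.flatnonzero(obs_mask)
    # Candidate pseudo-labels: linear interpolation between observed pixels
    pseudo = np.interp(np.arange(len(heights)), obs_idx, heights[obs_idx])
    # Keep a pseudo-label only at gap pixels where confidence is high enough
    accepted = (~obs_mask) & (conf >= conf_thresh)
    return pseudo, accepted
```

In the paper's setting the gaps are 2-D image regions between narrow scan lines and the confidence comes from the learned model itself; this 1-D version only shows the accept/reject mechanism.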
Keywords:
AI for Good: Computer Vision
AI for Good: Machine Learning
AI for Good: Multidisciplinary Topics and Applications
AI for Good: Uncertainty in AI