Optimization Learning: Perspective, Method, and Applications

Risheng Liu

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Early Career. Pages 5164-5168. https://doi.org/10.24963/ijcai.2020/728

Numerous tasks at the core of statistics, machine learning, and computer vision are specific instances of ill-posed inverse problems. Recently, learning-based (e.g., deep) iterative methods have been empirically shown to be effective for such problems. Nevertheless, integrating learnable structures into iterations remains a laborious process that can be guided only by intuition or empirical insight. Moreover, the convergence behavior of these learned iterations lacks rigorous analysis, so the theoretical significance of such methods remains unclear. We move beyond these limits and propose a theoretically guaranteed optimization learning paradigm, a generic and provable framework for nonconvex inverse problems, and develop a series of convergent deep models. Our theoretical analysis reveals that the proposed paradigm generates globally convergent trajectories for learning-based iterative methods. Thanks to these guarantees, our framework achieves state-of-the-art performance on a range of real-world applications.
Keywords:
Machine Learning: Deep Learning
Computer Vision: Structural and Model-Based Approaches, Knowledge Representation and Reasoning
Computer Vision: Biomedical Image Understanding
Constraints and SAT: Constraint Optimization