Distilling Governing Laws and Source Input for Dynamical Systems from Videos

Lele Luan, Yang Liu, Hao Sun

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3898-3904. https://doi.org/10.24963/ijcai.2022/541

Distilling interpretable physical laws from videos has attracted growing interest in the computer vision community recently, thanks to advances in deep learning, but it remains a great challenge. This paper introduces an end-to-end unsupervised deep learning framework to uncover the explicit governing equations of the dynamics exhibited by moving object(s) in recorded videos. Instead of being modeled in the pixel (spatial) coordinates of the image space, the physical law is modeled in a regressed underlying physical coordinate system in which the physical states follow potential explicit governing equations. A numerical integrator-based sparse regression module is designed to serve as a physical constraint on the autoencoder and the coordinate system regression and, at the same time, to uncover parsimonious closed-form governing equations from the learned physical states. Experiments on simulated dynamical scenes show that the proposed method is able to distill closed-form governing equations and simultaneously identify unknown excitation inputs for several dynamical systems recorded by videos, filling a gap in the literature where no existing methods are available and applicable for solving this type of problem.
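
To make the integrator-based sparse regression idea concrete, the sketch below is a minimal, self-contained illustration in that spirit, not the authors' implementation. It assumes a two-state system with a hand-picked candidate library (1, x, v, x^3), fits sparse coefficients by matching a Runge-Kutta-4 one-step prediction to the data under an L1 penalty, and hard-thresholds small terms for parsimony; in the full framework the states would come from the autoencoder's regressed physical coordinate system rather than being given directly, and the library, optimizer, and hyperparameters here are illustrative assumptions.

```python
import torch

def library(z):
    # Candidate terms for a two-state system [x, v]: 1, x, v, x^3 (illustrative, assumed choice)
    x, v = z[..., :1], z[..., 1:]
    return torch.cat([torch.ones_like(x), x, v, x**3], dim=-1)

def rk4_step(z, xi, dt):
    # One Runge-Kutta-4 step of the candidate dynamics dz/dt = library(z) @ xi
    f = lambda s: library(s) @ xi
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def fit_sparse_dynamics(z, dt, steps=3000, lam=1e-3, threshold=0.05):
    # z: (T, 2) tensor of physical states (in the paper, these are learned from video frames)
    xi = torch.zeros(4, 2, requires_grad=True)   # coefficients over the candidate library
    opt = torch.optim.Adam([xi], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        pred = rk4_step(z[:-1], xi, dt)          # integrate each state one step forward
        loss = ((pred - z[1:]) ** 2).mean() + lam * xi.abs().sum()  # data fit + L1 sparsity
        loss.backward()
        opt.step()
    with torch.no_grad():                        # prune small terms for a parsimonious model
        xi[xi.abs() < threshold] = 0.0
    return xi.detach()

# Toy example (assumed system): damped cubic oscillator dx/dt = v, dv/dt = -x - 0.1 v - 0.5 x^3
if __name__ == "__main__":
    dt, T = 0.01, 2000
    true_xi = torch.tensor([[0.0, 0.0], [0.0, -1.0], [1.0, -0.1], [0.0, -0.5]])
    z = torch.zeros(T, 2)
    z[0] = torch.tensor([1.0, 0.0])
    for t in range(T - 1):
        z[t + 1] = rk4_step(z[t], true_xi, dt)   # generate synthetic state trajectories
    print(fit_sparse_dynamics(z, dt))            # should approximately recover true_xi
```
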
Keywords:
Multidisciplinary Topics and Applications: Physical Science
Computer Vision: Interpretability and Transparency
Computer Vision: Video analysis and understanding   
Machine Learning: Autoencoders
Machine Learning: Explainable/Interpretable Machine Learning