Motion Invariance in Visual Environments

Alessandro Betti, Marco Gori, Stefano Melacci

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2009-2015. https://doi.org/10.24963/ijcai.2019/278

The puzzle of computer vision might find new challenging solutions once we realize that most successful methods work at the image level, which is remarkably more difficult than directly processing visual streams, as happens in nature. In this paper, we claim that processing a stream of frames naturally leads to the formulation of the motion invariance principle, which enables the construction of a new theory of visual learning based on convolutional features. The theory addresses a number of intriguing questions that arise in natural vision, and offers a well-posed computational scheme for the discovery of convolutional filters over the retina. The filters are driven by the Euler-Lagrange differential equations derived from the principle of least cognitive action, which parallels the laws of mechanics. Unlike traditional convolutional networks, which need massive supervision, the proposed theory offers a truly new scenario in which feature learning takes place by unsupervised processing of video signals. We present an experimental analysis of the theory, showing that features extracted under motion invariance yield an improvement that can be assessed by measuring information-based indexes.
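As a rough intuition for the motion invariance principle (not the paper's formulation, which derives the filters from Euler-Lagrange equations of a cognitive action functional), one can think of a penalty that asks features to be transported along the optical flow: the feature map of frame t, warped by the flow, should match the feature map of frame t+1. The sketch below, with purely hypothetical names and integer-valued flow, illustrates only this consistency idea in NumPy.

```python
import numpy as np

def motion_invariance_penalty(feat_t, feat_t1, flow):
    """Toy motion-invariance mismatch (illustrative only, not the paper's
    least-cognitive-action scheme).

    feat_t, feat_t1 : (H, W, C) feature maps of two consecutive frames.
    flow            : (H, W, 2) integer displacements (dy, dx) from t to t+1.

    Returns the mean squared difference between feat_t transported along
    the flow and feat_t1; zero means the features are motion invariant.
    """
    H, W, _ = feat_t.shape
    warped = np.zeros_like(feat_t)
    for y in range(H):
        for x in range(W):
            dy, dx = flow[y, x]
            # Clamp the displaced position to the retina (image grid).
            y2 = min(max(y + int(dy), 0), H - 1)
            x2 = min(max(x + int(dx), 0), W - 1)
            warped[y2, x2] = feat_t[y, x]
    return float(np.mean((warped - feat_t1) ** 2))
```

With zero flow and identical feature maps the penalty vanishes; any feature mismatch along the (assumed known) motion field increases it, which is the quantity an unsupervised learner would drive toward zero.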
Keywords:
Machine Learning: Learning Theory
Machine Learning: Unsupervised Learning
Computer Vision: Motion and Tracking
Computer Vision: Computer Vision