Estimation and Comparison of Linear Regions for ReLU Networks

Yuan Wang

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3544-3550. https://doi.org/10.24963/ijcai.2022/492

We study the relationship between the arrangement of neurons and the complexity of ReLU-activated neural networks, measured by the number of linear regions. More specifically, we provide both theoretical and empirical evidence for the view that shallow networks tend to have higher complexity than deep ones when the total number of neurons is fixed. In the theoretical part, we prove that this is the case for networks whose hidden-layer neurons are arranged in the forms 1x2n, 2xn, and nx2; in the empirical part, we implement an algorithm that precisely tracks (and hence counts) all the linear regions, and run it on networks with various structures. Although the time complexity of the algorithm is quite high, we verify that the problem of counting the linear regions of a ReLU network is itself NP-hard, so no dramatically more efficient algorithm should currently be expected. Roughly speaking, the algorithm divides the linear regions into convex subregions called "activation regions", which are easy to propagate through the network. The relationship between the number of linear regions and the number of activation regions is also discussed.
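As a rough illustration of the idea that each activation region corresponds to one pattern of active/inactive neurons, the sketch below (not the paper's exact algorithm) lower-bounds the number of activation regions of a small ReLU network by sampling inputs and counting the distinct activation patterns that occur; the network shapes, function names, and sample sizes are illustrative assumptions only.

```python
# Minimal sketch: estimate a lower bound on the number of activation regions
# of a fully connected ReLU network by sampling inputs and counting distinct
# hidden-neuron on/off patterns. Every activation region has one pattern, so
# the number of observed patterns lower-bounds the number of regions.
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(layer_widths):
    """Random weights/biases for a fully connected ReLU network."""
    params = []
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        W = rng.standard_normal((n_out, n_in))
        b = rng.standard_normal(n_out)
        params.append((W, b))
    return params

def activation_pattern(params, x):
    """Return the on/off pattern of every hidden neuron at input x."""
    pattern = []
    h = x
    for W, b in params[:-1]:          # hidden layers only
        z = W @ h + b
        pattern.append(tuple(bool(v) for v in (z > 0)))
        h = np.maximum(z, 0.0)
    return tuple(pattern)

def count_patterns(params, n_samples=50_000, box=5.0, dim=2):
    """Lower-bound the number of activation regions inside [-box, box]^dim."""
    xs = rng.uniform(-box, box, size=(n_samples, dim))
    return len({activation_pattern(params, x) for x in xs})

# Example (widths are illustrative, not taken from the paper): compare a
# shallow 2-16-1 net with a deeper 2-4-4-4-4-1 net, both with 16 hidden neurons.
shallow = random_relu_net([2, 16, 1])
deep = random_relu_net([2, 4, 4, 4, 4, 1])
print("shallow lower bound:", count_patterns(shallow))
print("deep lower bound:   ", count_patterns(deep))
```

Sampling only gives a lower bound; the exact count the paper aims at would require certifying the feasibility of each candidate pattern (e.g., by solving a linear program per pattern), which is consistent with the NP-hardness mentioned above.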
Keywords:
Machine Learning: Learning Theory
Machine Learning: Optimisation
Machine Learning: Theory of Deep Learning