Rethink the Connections among Generalization, Memorization, and the Spectral Bias of DNNs

Xiao Zhang, Haoyi Xiong, Dongrui Wu

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 3392-3398. https://doi.org/10.24963/ijcai.2021/467

Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can achieve excellent generalization performance, challenging the bias-variance trade-off in classical learning theory. Recent studies claimed that DNNs first learn simple patterns and then memorize noise; other works showed that DNNs have a spectral bias, learning target functions from low to high frequencies during training. However, we show that the monotonicity of this learning bias does not always hold: under the experimental setup of deep double descent, the high-frequency components of DNNs diminish in the late stage of training, leading to the second descent of the test error. Moreover, we find that the spectrum of DNNs can be used to indicate the second descent of the test error, even though it is computed from the training set only.
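As a rough illustration of the kind of spectral analysis discussed in the abstract, the following is a minimal, hypothetical sketch (not the authors' code) of how one might track the frequency content of a 1-D DNN during training: fit a small MLP to noisy data, then measure the high-frequency energy of its predictions on a dense grid via the FFT. The network architecture, target function, noise level, and the "high-frequency" cutoff below are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data: a low-frequency target plus label noise (assumed setup).
x_train = torch.linspace(0, 1, 200).unsqueeze(1)
y_train = torch.sin(2 * np.pi * x_train) + 0.3 * torch.randn_like(x_train)

# A small over-parameterized MLP (illustrative architecture).
model = nn.Sequential(nn.Linear(1, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

grid = torch.linspace(0, 1, 1024).unsqueeze(1)  # dense grid for the FFT


def high_freq_energy(f_vals, cutoff=10):
    """Fraction of spectral energy above `cutoff` cycles (illustrative choice)."""
    spec = np.abs(np.fft.rfft(f_vals)) ** 2
    return spec[cutoff:].sum() / spec.sum()


for epoch in range(2001):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if epoch % 500 == 0:
        with torch.no_grad():
            preds = model(grid).squeeze().numpy()
        print(f"epoch {epoch:5d}  train loss {loss.item():.4f}  "
              f"high-freq energy {high_freq_energy(preds):.4f}")
```

In this toy setting one would typically see the high-frequency energy grow as the network starts fitting the label noise; how such a spectrum behaves in the deep double descent regime, and how it relates to the second descent of the test error, is the subject of the paper itself.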
Keywords:
Machine Learning: Deep Learning
Machine Learning: Learning Theory