Trajectory-Dependent Generalization Bounds for Pairwise Learning with φ-mixing Samples

Liyuan Liu, Hong Chen, Weifu Li, Tieliang Gong, Hao Deng, Yulong Wang

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5743-5751. https://doi.org/10.24963/ijcai.2025/639

Recently, fractal dimension, a mathematical tool from fractal geometry, has been employed to investigate the optimization trajectory-dependent generalization ability of pointwise learning models trained on independent and identically distributed (i.i.d.) observations. This paper goes beyond the limitations of pointwise learning and i.i.d. samples and establishes generalization bounds for pairwise learning with uniformly strong mixing (φ-mixing) samples. The derived theoretical results fill the gap in trajectory-dependent generalization analysis for pairwise learning and apply to a wide range of learning paradigms, e.g., metric learning, ranking, and gradient learning. Technically, our framework brings Rademacher-complexity-based concentration estimates together with the trajectory-dependent fractal dimension in a coherent learning-theoretic analysis. In addition, the fractal dimension can be computed efficiently for randomized algorithms (e.g., stochastic gradient descent for deep neural networks) by bridging topological data analysis tools with the trajectory-dependent fractal dimension.
Keywords:
Machine Learning: ML: Learning theory
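
The abstract's final claim, that topological data analysis (TDA) makes the trajectory-dependent fractal dimension efficiently computable for randomized algorithms, is the most algorithmic point. The sketch below is a minimal illustration (not the paper's actual estimator) of one standard TDA-style approach: estimating the degree-0 persistent homology (PH0) dimension of a point cloud, using the fact that the PH0 intervals of a finite point cloud coincide with the edge lengths of its Euclidean minimum spanning tree. The function names, subsample sizes, and fitting procedure are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def ph0_weight(points, alpha=1.0):
    """Sum of alpha-powered edge lengths of the Euclidean minimum
    spanning tree; for a finite point cloud these edge lengths
    coincide with the degree-0 persistent homology intervals."""
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists)
    return float(np.sum(mst.data ** alpha))

def estimate_ph_dimension(trajectory, alpha=1.0,
                          sample_sizes=(200, 400, 800, 1600), seed=0):
    """Estimate the PH0 dimension of a trajectory (n_points x n_dims).

    For a d-dimensional set, the weight E^0_alpha over n sampled points
    scales like n^((d - alpha)/d), so fitting log E^0_alpha against
    log n with slope beta yields the estimate d = alpha / (1 - beta)."""
    rng = np.random.default_rng(seed)
    log_n, log_e = [], []
    for n in sample_sizes:
        idx = rng.choice(len(trajectory), size=min(n, len(trajectory)),
                         replace=False)
        log_n.append(np.log(len(idx)))
        log_e.append(np.log(ph0_weight(trajectory[idx], alpha)))
    beta = np.polyfit(log_n, log_e, deg=1)[0]
    return alpha / (1.0 - beta)

if __name__ == "__main__":
    # Sanity check: i.i.d. uniform points in the unit square have
    # fractal (and PH0) dimension 2, so the estimate should be near 2.
    rng = np.random.default_rng(1)
    cloud = rng.random((4000, 2))
    print(f"estimated PH dimension: {estimate_ph_dimension(cloud):.2f}")
```

In the trajectory-dependent setting, the input point cloud would be the recorded parameter iterates of a randomized optimizer such as SGD (each iterate flattened into a vector), so that the estimated dimension reflects the geometry of the optimization trajectory rather than of the data.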