Towards Improved Risk Bounds for Transductive Learning
Bowei Zhu, Shaojie Li, Yong Liu
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 7245-7253.
https://doi.org/10.24963/ijcai.2025/806
Transductive learning is a popular setting in statistical learning theory, reasoning from observed, specific training cases to specific test cases; it has been widely used in fields such as graph neural networks and semi-supervised learning. Existing results provide fast rates of convergence based on traditional localization techniques, which require the surrogate function that upper bounds the uniform error within a localized region to be "sub-root". We derive a new concentration inequality for empirical processes in transductive learning and apply the generic chaining technique to relax these assumptions and obtain tighter results for empirical risk minimization. Furthermore, we study the generalization of moment penalization algorithms. We design a novel estimator based on second-moment (variance) penalization and derive its learning rates, which is the first theoretical generalization analysis of variance-based algorithms in transductive learning.
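As a hedged illustration of the quantities the abstract refers to (not the authors' exact definitions; the notation below, including the split sizes m and u, the loss \ell, and the penalty weight \lambda, is assumed for this sketch): in the standard transductive setting one observes a full sample of N = m + u points, sees labels for m of them, and measures risk on the remaining u test points,
\[
  R_u(f) \;=\; \frac{1}{u} \sum_{j=m+1}^{m+u} \ell\bigl(f(x_j), y_j\bigr),
\]
while a variance-penalized estimator, in the spirit of sample variance penalization, minimizes the empirical risk on the labeled points plus a second-moment term,
\[
  \hat{f} \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \;
  \frac{1}{m}\sum_{i=1}^{m} \ell\bigl(f(x_i), y_i\bigr)
  \;+\; \lambda \sqrt{\frac{\hat{V}_m(f)}{m}},
\]
where \hat{V}_m(f) denotes the empirical variance of the losses on the labeled sample and \lambda > 0 trades off mean fit against loss variance.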
Keywords:
Machine Learning: ML: Learning theory
Machine Learning: ML: Semi-supervised learning
