Where and How to Enhance: Discovering Bit-Width Contribution for Mixed Precision Quantization
Haidong Kang, Lianbo Ma, Guo Yu, Shangce Gao
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5517-5526.
https://doi.org/10.24963/ijcai.2025/614
Mixed precision quantization (MPQ) is an effective quantization approach for achieving an accuracy-complexity trade-off in neural networks by assigning different bit-widths to the activations and weights of each layer. Existing MPQ methods typically optimize quantization policies (i.e., bit-width allocations) in a gradient-descent manner, termed Differentiable MPQ (DMPQ). At the end of the search, the bit-width associated with the quantization parameter of largest value is selected to form the final mixed precision quantization policy, under the implicit assumption that the values of the quantization parameters reflect each operation's contribution to accuracy improvement. While much has been discussed about improving MPQ, the bit-width selection process has received little attention. We study this problem and argue that the magnitude of the quantization parameters does not necessarily reflect the actual contribution of a bit-width to task performance. We then propose a Shapley-based MPQ (SMPQ) method, which measures each bit-width operation's direct contribution to the MPQ task. To reduce computation cost, a Monte Carlo sampling-based approximation strategy is proposed for Shapley computation. Extensive experiments on mainstream benchmarks demonstrate that our SMPQ consistently outperforms gradient-based competitors and achieves state-of-the-art performance.
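The abstract does not give the exact formulation, but the Monte Carlo approximation of Shapley values it mentions can be illustrated with a generic permutation-sampling sketch. The candidate encoding and the value function below (e.g., validation accuracy of a partially quantized network) are hypothetical placeholders, not the paper's actual implementation.

import random

def mc_shapley(players, value_fn, num_samples=100):
    """Estimate Shapley values by Monte Carlo permutation sampling.

    players    : list of candidate bit-width operations (hypothetical encoding).
    value_fn   : callable mapping a set of selected players to a scalar score,
                 e.g. validation accuracy of the partially quantized network
                 (a placeholder, not the paper's exact utility function).
    num_samples: number of sampled permutations.
    """
    shapley = {p: 0.0 for p in players}
    for _ in range(num_samples):
        perm = random.sample(players, len(players))  # one random permutation
        coalition = set()
        prev = value_fn(coalition)
        for p in perm:
            coalition.add(p)
            cur = value_fn(coalition)
            shapley[p] += cur - prev  # marginal contribution of p in this order
            prev = cur
    return {p: v / num_samples for p, v in shapley.items()}

Under such an estimate, the final policy would select, for each layer, the bit-width with the largest estimated Shapley value rather than the largest quantization-parameter magnitude, in line with the argument sketched in the abstract.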
Keywords:
Machine Learning: ML: Automated machine learning
Computer Vision: CV: Efficiency and Optimization
Machine Learning: ML: Game Theory
Search: S: Mixed discrete/continuous search
