Hardware-friendly Deep Learning by Network Quantization and Binarization
Haotong Qin
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 4911-4912.
https://doi.org/10.24963/ijcai.2021/687
Quantization is emerging as an efficient approach to promote hardware-friendly deep learning and to run deep neural networks on resource-limited hardware. However, it still causes a significant drop in network accuracy. We summarize the challenges of quantization into two categories: Quantization for Diverse Architectures and Quantization on Complex Scenes. Our studies focus mainly on applying quantization to various architectures and scenes, and on pushing the limits of quantization to compress and accelerate networks to the extreme. Comprehensive research on quantization will enable more powerful, more efficient, and more flexible hardware-friendly deep learning, making it better suited to real-world applications.
Keywords:
Machine Learning: Deep Learning
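
For readers unfamiliar with the technique, the sketch below illustrates the basic idea of network binarization in plain NumPy: full-precision weights are mapped to {-1, +1} together with a scaling factor, so multiply-accumulate operations can be replaced by much cheaper sign manipulations and additions. The scaling rule alpha = mean(|w|) and the helper names are illustrative assumptions for this sketch, not the specific methods proposed in the thesis.

```python
# Minimal sketch of weight binarization (illustrative only, not the author's method):
# real-valued weights are approximated by alpha * sign(w), where alpha = mean(|w|)
# minimizes the L2 reconstruction error for a per-tensor scalar scale.
import numpy as np


def binarize_weights(w: np.ndarray):
    """Binarize a weight tensor to {-1, +1} and return it with a scalar scale alpha."""
    alpha = np.abs(w).mean()
    b = np.where(w >= 0, 1.0, -1.0)
    return b, alpha


def binary_linear(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Approximate the full-precision product x @ w.T using binarized weights."""
    b, alpha = binarize_weights(w)
    return alpha * (x @ b.T)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 16))    # small batch of inputs
    w = rng.standard_normal((8, 16))    # full-precision weights of a linear layer
    y_fp = x @ w.T                      # full-precision output
    y_bin = binary_linear(x, w)         # binarized approximation
    print("mean abs error:", np.abs(y_fp - y_bin).mean())
```

In practice, binarized networks are trained with techniques such as the straight-through estimator so that gradients can flow through the non-differentiable sign operation; the sketch above only shows the inference-time approximation.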