Efficient and Modularized Training on FPGA for Real-time Applications

Shreyas Kolala Venkataramanaiah, Xiaocong Du, Zheng Li, Shihui Yin, Yu Cao, Jae-sun Seo

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence

Training of deep Convolutional Neural Networks (CNNs) requires a tremendous amount of computation and memory, and thus GPUs are widely used to meet the computation demands of these complex training tasks. However, lacking the flexibility to exploit architectural optimizations, GPUs have poor energy efficiency and are difficult to deploy on energy-constrained platforms. FPGAs are highly suitable for training tasks such as real-time learning at the edge, as they provide higher energy efficiency and better flexibility to support algorithmic evolution. This paper first develops a training accelerator on FPGA, with 16-bit fixed-point computing and various training modules. Furthermore, leveraging model segmentation techniques from Progressive Segmented Training, the newly developed FPGA accelerator is applied to online learning, achieving much lower computation cost. We demonstrate the performance of representative CNNs trained for CIFAR-10 on the Intel Stratix-10 MX FPGA, evaluating both the conventional training procedure and the online learning algorithm.
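
To illustrate what "16-bit fixed-point computing" can mean in practice, below is a minimal Python sketch of quantizing values to a 16-bit fixed-point format. The choice of a Q8.8 split (8 integer bits, 8 fractional bits) and the helper names to_fixed16 / from_fixed16 are assumptions for illustration only, not the paper's actual accelerator implementation or number format.

import numpy as np

FRAC_BITS = 8            # assumed number of fractional bits (Q8.8)
SCALE = 1 << FRAC_BITS   # 2**8 = 256

def to_fixed16(x: np.ndarray) -> np.ndarray:
    """Round to the nearest representable Q8.8 value and saturate to int16."""
    q = np.round(x * SCALE)
    q = np.clip(q, np.iinfo(np.int16).min, np.iinfo(np.int16).max)
    return q.astype(np.int16)

def from_fixed16(q: np.ndarray) -> np.ndarray:
    """Recover the floating-point approximation from Q8.8 integers."""
    return q.astype(np.float32) / SCALE

# Example: weights quantized before a fixed-point multiply-accumulate
w = np.array([0.3141, -1.5, 0.0042], dtype=np.float32)
w_q = to_fixed16(w)
print(w_q, from_fixed16(w_q))

In such a scheme, all weights, activations, and gradients would be stored as int16 integers on-chip, with the scale factor applied implicitly by the hardware datapath; the exact integer/fraction split used by the accelerator is described in the paper itself.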
Keywords:
Machine Learning: general
Computer Vision: general