SelectScale: Mining More Patterns from Images via Selective and Soft Dropout

Zhengsu Chen, Jianwei Niu, Xuefeng Liu, Shaojie Tang

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 523-529. https://doi.org/10.24963/ijcai.2020/73

Convolutional neural networks (CNNs) have achieved remarkable success in image recognition. Although CNNs effectively learn internal patterns from input images, these patterns constitute only a small proportion of the useful patterns the images contain. This is because a CNN stops learning once the patterns it has already learned suffice for a correct classification. Network regularization methods such as dropout and SpatialDropout ease this problem: during training, they randomly drop features, which changes the patterns the network learns and, in turn, forces the network to learn other patterns to make the correct classification. However, these methods have an important drawback: randomly dropping features is generally inefficient and can introduce unnecessary noise. To tackle this problem, we propose SelectScale. Instead of randomly dropping units, SelectScale selects the important features in the network and adjusts them during training. With SelectScale, we improve the performance of CNNs on CIFAR and ImageNet.
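The contrast between random dropping and selective scaling can be sketched in a few lines of plain Python. The `importance` score below (mean absolute activation per channel) and the function names are illustrative assumptions, not the paper's exact formulation: the sketch only shows the structural difference between zeroing random channels (SpatialDropout-style) and softly down-scaling the channels a score deems most important.

```python
import random

def spatial_dropout(channels, p, rng=random):
    # Baseline: zero out entire channels at random with probability p
    # (SpatialDropout-style). channels is a list of per-channel
    # activation lists.
    return [[0.0] * len(c) if rng.random() < p else list(c)
            for c in channels]

def select_scale(channels, k, scale):
    # Hypothetical sketch of selective soft dropout: rank channels by a
    # stand-in importance score (mean |activation|), then multiply the
    # top-k channels by a factor scale < 1 instead of zeroing random
    # ones, nudging the network to rely on other, under-used patterns.
    importance = [sum(abs(v) for v in c) / len(c) for c in channels]
    top = set(sorted(range(len(channels)),
                     key=lambda i: importance[i], reverse=True)[:k])
    return [[v * scale for v in c] if i in top else list(c)
            for i, c in enumerate(channels)]
```

In this toy form, `select_scale([[1.0, 1.0], [0.1, 0.1], [2.0, 2.0]], k=1, scale=0.5)` attenuates only the most active channel, leaving the others untouched, whereas `spatial_dropout` may zero any channel regardless of how informative it is.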
Keywords:
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation