Sorting Convolution Operation for Achieving Rotational Invariance

The topic of achieving rotational invariance in convolutional neural networks (CNNs) has gained considerable attention recently, as this invariance is crucial for many computer vision tasks. In this letter, we propose a sorting convolution operation (SConv), which achieves invariance to arbitrary rotations without additional learnable parameters or data augmentation. SConv can directly replace conventional convolution operations in classic CNN models, making those models rotationally invariant. On the MNIST-rot dataset, we first analyze the impact of convolution kernel size, sampling grid, and sorting method on SConv's rotational invariance, and compare our method with previous rotation-invariant CNN models. We then combine SConv with VGG, ResNet, and DenseNet, and conduct classification experiments on texture and remote sensing image datasets. The results show that SConv significantly improves the performance of these models, especially when training data is limited.
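The core idea of a sorting convolution can be illustrated with a minimal sketch: the values sampled under each kernel window are sorted before being combined with the learned weights, so any rotation that merely permutes the sampled values leaves the response unchanged. The sketch below is an assumption-laden simplification (a plain square window in NumPy, illustrating exact invariance only for 90-degree rotations; the paper's method uses a sampling grid designed for arbitrary rotations), not the authors' implementation.

```python
import numpy as np

def sconv2d(image, weights):
    """Sketch of a sorting convolution (SConv): the values under each
    k x k window are sorted before the dot product with the kernel
    weights, so the response is invariant to rotations that only
    permute the sampled values (here, multiples of 90 degrees)."""
    k = weights.shape[0]
    r = k // 2
    h, w = image.shape
    out = np.zeros((h - 2 * r, w - 2 * r))
    flat_w = weights.ravel()
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = image[i - r:i + r + 1, j - r:j + r + 1].ravel()
            # Sorting discards the spatial ordering of the samples,
            # which is exactly what rotation would have permuted.
            out[i - r, j - r] = np.sort(window) @ flat_w
    return out

# Rotating the input by 90 degrees rotates the output but does not
# change any response value:
rng = np.random.default_rng(0)
img = rng.random((8, 8))
wts = rng.random((3, 3))
same = np.allclose(sconv2d(np.rot90(img), wts),
                   np.rot90(sconv2d(img, wts)))
```

Because sorting is parameter-free, the operation adds no learnable weights beyond those of the ordinary kernel, which is why SConv can be dropped into existing architectures unchanged.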