RRL: Regional Rotate Layer in Convolutional Neural Networks


  • Zongbo Hao UESTC
  • Tao Zhang UESTC
  • Mingwang Chen UESTC
  • Zou Kaixu UESTC




Computer Vision (CV), Machine Learning (ML)


Convolutional Neural Networks (CNNs) have performed very well in image classification and object detection in recent years, but even the most advanced models have limited rotation invariance. Known solutions include augmenting the training data and increasing rotation invariance by globally pooling rotation-equivariant features. These methods either increase the training workload or increase the number of model parameters. To address this problem, this paper proposes a module that can be inserted into existing networks and that incorporates rotation invariance directly into the feature-extraction layers of the CNN. The module has no learnable parameters and does not increase the complexity of the model. Moreover, trained only on upright data, it performs well on rotated test sets. These advantages make it suitable for fields such as biomedicine and astronomy, where upright samples are difficult to obtain or the target has no canonical orientation. Evaluating our module with LeNet-5, ResNet-18, and tiny-YOLOv3, we obtain impressive results.
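The paper's exact RRL operation is not reproduced in this abstract; as an illustration of the general idea only — a parameter-free layer that canonicalizes the orientation of each local region, so that a rotated input yields the same regional features — here is a minimal NumPy sketch. The canonicalization criterion below (pick the 90° rotation whose top-left quadrant has the largest sum) is an assumption chosen for simplicity; any deterministic, tie-free criterion gives the same invariance property.

```python
import numpy as np

def canonicalize(patch):
    # Among the four 90-degree rotations of the patch, keep the one
    # whose top-left quadrant has the largest sum. Rotating the input
    # patch only permutes the candidate set, so the selected output is
    # identical for all four rotated versions of the same patch.
    # (Criterion is illustrative, not the paper's.)
    h, w = patch.shape
    rotations = [np.rot90(patch, k) for k in range(4)]
    scores = [r[: h // 2, : w // 2].sum() for r in rotations]
    return rotations[int(np.argmax(scores))]

def regional_rotate(x, region=4):
    # Parameter-free "regional rotate" pass: canonicalize each
    # region x region block of the feature map independently.
    h, w = x.shape
    out = np.empty_like(x)
    for i in range(0, h, region):
        for j in range(0, w, region):
            out[i:i + region, j:j + region] = canonicalize(
                x[i:i + region, j:j + region]
            )
    return out
```

Each block's output is identical for any 90° rotation of that block; a whole-image rotation additionally permutes block positions, so global pooling after the layer yields a rotation-invariant descriptor without adding a single learnable parameter.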




How to Cite

Hao, Z., Zhang, T., Chen, M., & Kaixu, Z. (2022). RRL: Regional Rotate Layer in Convolutional Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 826-833. https://doi.org/10.1609/aaai.v36i1.19964



AAAI Technical Track on Computer Vision I