Patch Reordering: A Novel Way to Achieve Rotation and Translation Invariance in Convolutional Neural Networks

Authors

  • Xu Shen University of Science and Technology of China
  • Xinmei Tian University of Science and Technology of China
  • Shaoyan Sun University of Science and Technology of China
  • Dacheng Tao University of Technology Sydney

DOI:

https://doi.org/10.1609/aaai.v31i1.10872

Abstract

Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on many visual recognition tasks. However, the combination of convolution and pooling operations is invariant only to small local shifts of meaningful objects in the input. Such networks are therefore often trained with data augmentation to encode invariance into their parameters, which consumes model capacity that could otherwise be spent on learning the content of these objects. A more efficient use of the parameter budget is to build rotation and translation invariance into the model architecture, relieving the model of the need to learn it. To let the model focus on learning the content of objects rather than their locations, we propose to rank the patches of the feature maps before feeding them into the next layer. Combined with convolution and pooling operations, patch ranking yields representations that are consistent regardless of the location of meaningful objects in the input. We show that the patch ranking module improves CNN performance on several benchmark tasks, including MNIST digit recognition, large-scale image recognition, and image retrieval.
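The core idea of reordering feature-map patches by a ranking criterion can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes non-overlapping patches ranked by their total (L1) activation energy, whereas the paper's exact partitioning and ranking measure may differ.

```python
import numpy as np

def reorder_patches(feature_map, patch_size):
    """Split a 2-D feature map into non-overlapping patches, rank them by
    total activation energy (an assumed ranking criterion), and reassemble
    them in descending-energy order. Because the ranking ignores where a
    patch originally sat, the output is the same no matter how the patches
    were arranged in the input."""
    h, w = feature_map.shape
    ph, pw = patch_size
    assert h % ph == 0 and w % pw == 0, "patch size must tile the map"
    # Collect patches in raster order.
    patches = [feature_map[i:i + ph, j:j + pw]
               for i in range(0, h, ph)
               for j in range(0, w, pw)]
    # Rank patch indices by L1 activation energy, highest first.
    order = sorted(range(len(patches)),
                   key=lambda k: np.abs(patches[k]).sum(), reverse=True)
    # Place the ranked patches back into a map of the same shape.
    out = np.zeros_like(feature_map)
    cols = w // pw
    for slot, k in enumerate(order):
        r, c = divmod(slot, cols)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patches[k]
    return out
```

Under this sketch, two feature maps that differ only by a patch-level permutation (e.g. the same activations translated by a whole patch) produce identical reordered outputs, which is the invariance the abstract describes.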

Published

2017-02-13

How to Cite

Shen, X., Tian, X., Sun, S., & Tao, D. (2017). Patch Reordering: A Novel Way to Achieve Rotation and Translation Invariance in Convolutional Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10872