Deep Camouflage Images


  • Qing Zhang Sun Yat-sen University
  • Gelin Yin Sun Yat-sen University
  • Yongwei Nie South China University of Technology
  • Wei-Shi Zheng Sun Yat-sen University



This paper addresses the problem of creating camouflage images. Such images typically contain one or more hidden objects embedded into a background image, so that viewers must consciously focus to discover them. Previous methods largely rely on hand-crafted features and texture synthesis to create camouflage images. However, lacking a reliable understanding of what essentially makes an object recognizable, they typically produce hidden objects that are either completely conspicuous or completely invisible. Moreover, they may fail to produce seamless and natural images because of their sensitivity to appearance differences. To overcome these limitations, we present a novel neural style transfer approach that adopts the visual perception mechanism to create camouflage images, which allows us to hide objects more effectively while producing natural-looking results. In particular, we design an attention-aware camouflage loss to adaptively mask out information that makes the hidden objects visually stand out, while leaving subtle yet sufficient feature cues for viewers to perceive the hidden objects. To remove the appearance discontinuities between the hidden objects and the background, we formulate a naturalness regularization that constrains the hidden objects to maintain the manifold structure of the covered background. Extensive experiments show the advantages of our approach over existing camouflage methods and state-of-the-art neural style transfer algorithms.
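As an illustration of the objective structure the abstract describes, the sketch below combines an attention-weighted camouflage term with a naturalness regularizer. All function names, the mean-squared forms of the terms, and the weight `lam` are assumptions for exposition only; the paper's actual losses operate on deep network features and a learned attention map, and its naturalness term preserves manifold structure rather than raw pixel similarity.

```python
# Hypothetical sketch of the abstract's two-term objective; the concrete
# loss forms and the weight below are illustrative assumptions, not the
# authors' implementation.
import numpy as np

def camouflage_loss(feature_diff, attention):
    # Attention-aware term: down-weight feature differences where the
    # attention map is high, i.e. mask out information that would make
    # the hidden object stand out, while leaving subtle cues elsewhere.
    return float(np.mean((1.0 - attention) * feature_diff ** 2))

def naturalness_reg(hidden, background):
    # Simplified stand-in for the naturalness regularization: penalize
    # appearance discontinuities between the hidden object region and
    # the background it covers.
    return float(np.mean((hidden - background) ** 2))

def total_loss(feature_diff, attention, hidden, background, lam=0.1):
    # Weighted sum of the two terms; lam trades off invisibility
    # against seamless blending.
    return camouflage_loss(feature_diff, attention) + lam * naturalness_reg(hidden, background)

# Toy example with random arrays standing in for deep features and pixels.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))          # feature differences
a = np.clip(rng.random((8, 8)), 0, 1)    # attention map in [0, 1]
h = rng.standard_normal((8, 8))          # hidden-object appearance
b = rng.standard_normal((8, 8))          # covered background
loss = total_loss(f, a, h, b)
```

In this toy setup, raising `lam` pushes the optimizer toward smoother blending at the cost of weaker object cues, mirroring the trade-off the abstract describes between invisibility and perceivability.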




How to Cite

Zhang, Q., Yin, G., Nie, Y., & Zheng, W.-S. (2020). Deep Camouflage Images. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12845-12852.



AAAI Technical Track: Vision