Inferring Camouflaged Objects by Texture-Aware Interactive Guidance Network
Keywords: Low Level & Physics-based Vision, Segmentation
Abstract
Camouflaged objects closely resemble the background, exhibiting indefinable boundaries and deceptive textures, which increases the difficulty of the detection task and forces the model to rely on more informative features. Herein, we design a texture label to facilitate accurate camouflaged object segmentation by our network. Motivated by the complementary relationship between texture labels and camouflaged object labels, we propose an interactive guidance framework named TINet, which focuses on finding the indefinable boundary and the texture difference through progressive interactive guidance. It maximizes the guidance effect of refined multi-level texture cues on segmentation. Specifically, the texture perception decoder (TPD) comprehensively analyzes texture information at multiple scales. The feature interaction guidance decoder (FGD) interactively refines the multi-level features of camouflaged object detection and texture detection level by level. The holistic perception decoder (HPD) enhances the FGD results through multi-level holistic perception. In addition, we propose a boundary weight map that guides the loss function to pay more attention to the object boundary. Extensive experiments conducted on COD and SOD datasets demonstrate that the proposed method performs favorably against 23 state-of-the-art methods.
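The abstract does not spell out how the boundary weight map is built. A common construction in boundary-aware segmentation losses is to weight each pixel by how much its local neighborhood average deviates from its own label value, which peaks near object boundaries. The sketch below is an illustrative NumPy implementation under that assumption; the function names `boundary_weight_map` and `weighted_bce`, the window size, and the scaling factor are hypothetical, not taken from the paper.

```python
import numpy as np

def boundary_weight_map(mask, k=15, lam=5.0):
    """Illustrative boundary weight map (assumed formulation, not the paper's).

    mask: (H, W) binary ground-truth map in {0, 1}.
    Pixels whose k-by-k local average differs from their own value lie
    near the object boundary and receive weights up to 1 + lam.
    """
    pad = k // 2
    padded = np.pad(mask.astype(np.float64), pad)      # zero padding
    # Summed-area table with a leading zero row/column so that
    # sat[a, b] = sum of padded[:a, :b].
    sat = padded.cumsum(0).cumsum(1)
    sat = np.pad(sat, ((1, 0), (1, 0)))
    H, W = mask.shape
    # Window sum for output pixel (i, j) covers padded[i:i+k, j:j+k].
    win = (sat[k:k + H, k:k + W] - sat[:H, k:k + W]
           - sat[k:k + H, :W] + sat[:H, :W])
    local_avg = win / (k * k)
    return 1.0 + lam * np.abs(local_avg - mask)

def weighted_bce(pred, mask, weight, eps=1e-7):
    """Pixel-wise binary cross-entropy weighted by the boundary map."""
    p = np.clip(pred, eps, 1 - eps)
    bce = -(mask * np.log(p) + (1 - mask) * np.log(1 - p))
    return (weight * bce).sum() / weight.sum()
```

With this construction, pixels deep inside the object or background get weight 1, while pixels at the mask boundary are up-weighted, so the loss focuses on exactly the region where camouflaged objects are hardest to delineate.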
How to Cite
Zhu, J., Zhang, X., Zhang, S., & Liu, J. (2021). Inferring Camouflaged Objects by Texture-Aware Interactive Guidance Network. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3599-3607. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16475
AAAI Technical Track on Computer Vision III