Can We Get Rid of Handcrafted Feature Extractors? SparseViT: Nonsemantics-Centered, Parameter-Efficient Image Manipulation Localization Through Spare-Coding Transformer

Authors

  • Lei Su, College of Computer Science, Sichuan University, China; Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education of China
  • Xiaochen Ma, College of Computer Science, Sichuan University, China; Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education of China
  • Xuekang Zhu, College of Computer Science, Sichuan University, China; Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education of China
  • Chaoqun Niu, College of Computer Science, Sichuan University, China; Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education of China
  • Zeyu Lei, College of Computer Science, Sichuan University, China; Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education of China; Department of Computer and Information Science, University of Macau, Macau SAR
  • Ji-Zhe Zhou, College of Computer Science, Sichuan University, China; Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education of China

DOI:

https://doi.org/10.1609/aaai.v39i7.32754

Abstract

Non-semantic features, also called semantic-agnostic features, are irrelevant to image context but sensitive to image manipulations, and are therefore recognized as key evidence for Image Manipulation Localization (IML). Since manually annotating such features is infeasible, existing works rely on handcrafted methods to extract them. Handcrafted non-semantic features, however, jeopardize an IML model's ability to generalize to unseen or complex scenarios. Therefore, the elephant in the room for IML is: how can non-semantic features be extracted adaptively? Non-semantic features are context-irrelevant and manipulation-sensitive; that is, within an image, they are consistent across patches unless manipulation occurs. Sparse, discrete interactions among image patches are thus sufficient for extracting non-semantic features. In contrast, image semantics vary drastically across patches and require dense, continuous interactions among patches to learn semantic representations. Hence, in this paper, we propose the Sparse Vision Transformer (SparseViT), which reformulates the dense, global self-attention in ViT into a sparse, discrete form. This sparse self-attention breaks image semantics and forces SparseViT to adaptively extract non-semantic features. Moreover, compared with existing IML models, the sparse self-attention mechanism greatly reduces model cost (up to 80% fewer FLOPs), achieving remarkable parameter and computational efficiency. Extensive experiments demonstrate that, without any handcrafted feature extractors, SparseViT is superior in both generalization and efficiency across benchmark datasets.
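To make the reformulation concrete, below is a minimal PyTorch sketch of sparse, interleaved self-attention. It is not the authors' implementation: the sparsity rate r, the interleaved grouping scheme, and the use of nn.MultiheadAttention are illustrative assumptions. The idea it shows is that patches sampled at a fixed stride r attend to one another, so interactions span the whole image yet remain sparse and discrete.

    # Minimal sketch of sparse self-attention over interleaved patch groups.
    # Assumptions (not from the paper): sparsity rate r, nn.MultiheadAttention.
    import torch
    import torch.nn as nn

    class SparseSelfAttention(nn.Module):
        def __init__(self, dim, num_heads=8, sparsity=2):
            super().__init__()
            self.r = sparsity  # stride between patches within one attention group
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, x):
            # x: (B, H, W, C) patch features; H and W divisible by self.r
            B, H, W, C = x.shape
            r = self.r
            # Every r-th patch joins the same group: r*r interleaved groups,
            # each covering the full image sparsely rather than a local window.
            g = x.view(B, H // r, r, W // r, r, C).permute(0, 2, 4, 1, 3, 5)
            g = g.reshape(B * r * r, (H // r) * (W // r), C)
            out, _ = self.attn(g, g, g)  # dense attention inside each sparse group
            # Scatter patches back to their original spatial positions.
            out = out.view(B, r, r, H // r, W // r, C).permute(0, 3, 1, 4, 2, 5)
            return out.reshape(B, H, W, C)

    x = torch.randn(2, 8, 8, 64)        # 2 images, 8x8 patch grid, 64-dim tokens
    y = SparseSelfAttention(dim=64)(x)  # -> torch.Size([2, 8, 8, 64])

Because neighboring patches land in different groups, local semantic continuity is broken, while manipulation-sensitive, context-irrelevant statistics can still be compared across the whole image; this is the intuition behind forcing the network toward non-semantic features.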

Published

2025-04-11

How to Cite

Su, L., Ma, X., Zhu, X., Niu, C., Lei, Z., & Zhou, J.-Z. (2025). Can We Get Rid of Handcrafted Feature Extractors? SparseViT: Nonsemantics-Centered, Parameter-Efficient Image Manipulation Localization Through Spare-Coding Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7024–7032. https://doi.org/10.1609/aaai.v39i7.32754

Issue

Vol. 39 No. 7 (2025)

Section

AAAI Technical Track on Computer Vision VI