Semantic-Guided Generative Image Augmentation Method with Diffusion Models for Image Classification

Authors

  • Bohan Li Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Xiao Xu Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Xinghao Wang Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Yutai Hou Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Yunlong Feng Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Feng Wang Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Xuanliang Zhang Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Qingfu Zhu Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology
  • Wanxiang Che Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v38i4.28084

Keywords:

CV: Applications, CV: Language and Vision, General

Abstract

Existing image augmentation methods fall into two categories: perturbation-based methods and generative methods. Perturbation-based methods apply pre-defined perturbations to an original image, but these perturbations only vary the image locally and thus yield limited diversity. In contrast, generative methods bring more diversity to the augmented images but may not preserve semantic consistency, and can therefore incorrectly alter the essential semantics of the original image. To balance image diversity and semantic consistency in augmented images, we propose SGID, a Semantic-guided Generative Image augmentation method with Diffusion models for image classification. Specifically, SGID employs diffusion models to generate augmented images with good image diversity. More importantly, SGID takes image labels and captions as guidance to maintain semantic consistency between the augmented and original images. Experimental results show that SGID outperforms the best augmentation baseline by 1.72% on ResNet-50 (trained from scratch), 0.33% on ViT (pre-trained on ImageNet-21k), and 0.14% on CLIP-ViT (pre-trained on LAION-2B). Moreover, SGID can be combined with other image augmentation baselines to further improve overall performance. We demonstrate the semantic consistency and image diversity of SGID through quantitative human and automated evaluations, as well as qualitative case studies.
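To make the abstract's idea concrete, below is a minimal sketch of semantic-guided generative augmentation in the spirit of SGID, not the paper's exact pipeline. It assumes Hugging Face `diffusers` and `transformers`: a BLIP model produces a caption of the original image, the caption is combined with the class label into a text prompt, and a Stable Diffusion image-to-image pipeline regenerates the image under that guidance. The model checkpoints, the prompt template, and the `strength` value are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Captioning model: describes the original image so the diffusion model
# can be guided toward the same scene content.
blip_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

# Image-to-image diffusion pipeline: starts from a noised version of the
# original image rather than pure noise, which preserves global layout.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

def semantic_guided_augment(image: Image.Image, label: str,
                            strength: float = 0.5) -> Image.Image:
    """Generate one augmented image guided by the class label and an
    auto-generated caption (illustrative sketch, not the paper's recipe)."""
    inputs = blip_processor(image, return_tensors="pt").to(device)
    caption = blip_processor.decode(
        blip.generate(**inputs)[0], skip_special_tokens=True
    )
    # The label anchors class semantics; the caption anchors scene details.
    # Hypothetical prompt template for illustration.
    prompt = f"a photo of a {label}, {caption}"
    # `strength` trades off diversity (high) against consistency (low).
    return pipe(prompt=prompt, image=image, strength=strength).images[0]

augmented = semantic_guided_augment(
    Image.open("dog.jpg").convert("RGB"), label="golden retriever"
)
augmented.save("dog_augmented.jpg")
```

A lower `strength` keeps the augmented image close to the original (favoring semantic consistency), while a higher value lets the diffusion model deviate further (favoring diversity); the label-plus-caption prompt is what steers the generation back toward the original semantics.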

Published

2024-03-24

How to Cite

Li, B., Xu, X., Wang, X., Hou, Y., Feng, Y., Wang, F., Zhang, X., Zhu, Q., & Che, W. (2024). Semantic-Guided Generative Image Augmentation Method with Diffusion Models for Image Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3018-3027. https://doi.org/10.1609/aaai.v38i4.28084

Section

AAAI Technical Track on Computer Vision III