Just Noticeable Visual Redundancy Forecasting: A Deep Multimodal-Driven Approach

Authors

  • Wuyuan Xie, Shenzhen University
  • Shukang Wang, Shenzhen University
  • Sukun Tian, Peking University
  • Lirong Huang, Shenzhen University
  • Ye Liu, Nanjing University of Posts and Telecommunications
  • Miaohui Wang, Shenzhen University

DOI:

https://doi.org/10.1609/aaai.v37i3.25399

Keywords:

CV: Applications, CMS: Brain Modeling, CV: Low Level & Physics-Based Vision, CV: Multi-modal Vision

Abstract

Just noticeable difference (JND) refers to the maximum visual change that human eyes cannot perceive, and it has a wide range of applications in multimedia systems. However, most existing JND approaches focus on a single modality and rarely consider the complementary effects of multimodal information. In this article, we investigate JND modeling from an end-to-end homologous multimodal perspective and propose hmJND-Net. Specifically, we exploit three visually sensitive modalities: saliency, depth, and segmentation. To better utilize homologous multimodal information, we establish an effective fusion method via summation enhancement and subtractive offset, and align homologous multimodal features with a self-attention driven encoder-decoder paradigm. Extensive experiments on eight benchmark datasets validate the superiority of hmJND-Net over eight representative methods.
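The abstract names two fusion operators, summation enhancement and subtractive offset, without giving their formulation. A minimal sketch of one plausible reading, assuming each modality branch yields a feature map of shape (C, H, W) and that the two cues are concatenated along the channel axis for a downstream encoder (the function name `fuse_modalities` and this exact formulation are hypothetical, not taken from the paper):

```python
import numpy as np

def fuse_modalities(f_a: np.ndarray, f_b: np.ndarray) -> np.ndarray:
    """Fuse two homologous feature maps of shape (C, H, W).

    Summation enhancement reinforces structures that both modalities
    respond to; the subtractive offset preserves what only one of them
    contributes. This is an illustrative reading of the abstract's
    operator names, not the paper's exact method.
    """
    enhanced = f_a + f_b        # summation enhancement
    offset = np.abs(f_a - f_b)  # subtractive offset
    # stack both cues along the channel axis for a downstream encoder
    return np.concatenate([enhanced, offset], axis=0)
```

For example, fusing a saliency map against a depth map doubles the channel count, so the encoder that follows sees both the agreement and the disagreement between the two modalities.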

Published

2023-06-26

How to Cite

Xie, W., Wang, S., Tian, S., Huang, L., Liu, Y., & Wang, M. (2023). Just Noticeable Visual Redundancy Forecasting: A Deep Multimodal-Driven Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 2965-2973. https://doi.org/10.1609/aaai.v37i3.25399

Section

AAAI Technical Track on Computer Vision III