An Object-Based Bayesian Framework for Top-Down Visual Attention

Authors

  • Ali Borji, University of Southern California
  • Dicky Sihite, University of Southern California
  • Laurent Itti, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v26i1.8334

Keywords:

top-down attention, visual attention, eye movements, bottom-up saliency, free viewing, visual search, object-based attention, space-based attention, model-based analysis

Abstract

We introduce a new task-independent framework to model top-down overt visual attention based on graphical models for probabilistic inference and reasoning. We describe a Dynamic Bayesian Network (DBN) that infers probability distributions over attended objects and spatial locations directly from observed data. Probabilistic inference in our model is performed over object-related functions, derived either from manual annotations of objects in video scenes or from state-of-the-art object detection models. Evaluating over ∼3 hours (approx. 315,000 eye fixations and 12,600 saccades) of observers playing 3 video games (time-scheduling, driving, and flight combat), we show that our approach is significantly more predictive of eye fixations than: 1) simpler classifier-based models, also developed here, that map a signature of a scene (multi-modal information from gist, bottom-up saliency, physical actions, and events) to eye positions, 2) 14 state-of-the-art bottom-up saliency models, and 3) brute-force algorithms such as mean eye position. Our results show that the proposed model is more effective at exploiting and reasoning over spatio-temporal visual data.
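To make the kind of inference the abstract describes more concrete, below is a minimal, hypothetical sketch of forward filtering in a discrete Dynamic Bayesian Network over which object is attended at each time step. This is not the authors' actual model (which also reasons over spatial locations and richer object-related functions); the `forward_filter` helper, the transition matrix, and the likelihood values are illustrative assumptions only.

```python
import numpy as np

def forward_filter(transition, likelihoods, prior):
    """Compute P(attended object at t | observations up to t) for each t.

    transition : (K, K) array, transition[i, j] = P(obj_t = j | obj_{t-1} = i)
    likelihoods: (T, K) array, likelihoods[t, k] = P(obs_t | obj_t = k),
                 e.g. evidence from annotations or an object detector
    prior      : (K,) array, P(obj_0)
    """
    T, K = likelihoods.shape
    beliefs = np.empty((T, K))
    belief = prior * likelihoods[0]          # incorporate first observation
    belief /= belief.sum()
    beliefs[0] = belief
    for t in range(1, T):
        predicted = belief @ transition      # propagate belief through dynamics
        belief = predicted * likelihoods[t]  # weight by current evidence
        belief /= belief.sum()               # renormalize to a distribution
        beliefs[t] = belief
    return beliefs

# Toy usage: 3 candidate objects, 4 frames of made-up detector evidence.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
obs = np.array([[0.7, 0.2, 0.1],
                [0.6, 0.3, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.8, 0.1]])
print(forward_filter(A, obs, np.full(3, 1 / 3)))
```

The filtered beliefs give, at each frame, a probability distribution over attended objects; the peak of that distribution can then be compared against recorded eye fixations, which is the evaluation strategy the abstract describes.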

Published

2021-09-20

How to Cite

Borji, A., Sihite, D., & Itti, L. (2021). An Object-Based Bayesian Framework for Top-Down Visual Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 26(1), 1529-1535. https://doi.org/10.1609/aaai.v26i1.8334