Learning Attention Model From Human for Visuomotor Tasks

Authors

  • Luxin Zhang, Peking University
  • Ruohan Zhang, The University of Texas at Austin
  • Zhuode Liu, The University of Texas at Austin
  • Mary Hayhoe, The University of Texas at Austin
  • Dana Ballard, The University of Texas at Austin

Keywords:

Eye Movements, Visual Attention, Saliency, Deep Learning

Abstract

A wealth of information about intelligent decision making is conveyed by human gaze and visual attention; modeling and exploiting this information is therefore a promising way to strengthen algorithms such as deep reinforcement learning. We collect high-quality action and gaze data from humans playing Atari games. Using these data, we train a deep neural network that predicts human gaze positions and visual attention with high accuracy.
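A standard preprocessing step when training such a gaze-prediction network is to convert discrete recorded gaze fixations into a continuous ground-truth attention map. The sketch below illustrates this with a Gaussian blur over fixation points, normalized into a probability distribution; the frame size, blur width, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaze_to_heatmap(gaze_points, height=84, width=84, sigma=3.0):
    """Turn recorded gaze (x, y) fixations on a game frame into a
    normalized attention heatmap, a common training target for
    saliency/gaze-prediction networks.

    NOTE: sizes and sigma are illustrative, not from the paper.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float64)
    # Place an isotropic Gaussian at each fixation point.
    for gx, gy in gaze_points:
        heatmap += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    # Normalize so the map is a probability distribution over pixels.
    heatmap /= heatmap.sum()
    return heatmap

heatmap = gaze_to_heatmap([(20, 30), (60, 50)])
```

A network can then be trained to match such maps, e.g. with a KL-divergence loss between its softmax output and the heatmap.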

Published

2018-04-29

How to Cite

Zhang, L., Zhang, R., Liu, Z., Hayhoe, M., & Ballard, D. (2018). Learning Attention Model From Human for Visuomotor Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/12147