Constructing Models of User and Task Characteristics from Eye Gaze Data for User-Adaptive Information Highlighting

Authors

  • Matthew Gingerich, University of British Columbia
  • Cristina Conati, University of British Columbia

DOI:

https://doi.org/10.1609/aaai.v29i1.9485

Keywords:

eye-tracking, user modelling, applied machine learning

Abstract

A user-adaptive information visualization system capable of learning models of users and the visualization tasks they perform could provide interventions optimized for helping specific users in specific task contexts. In this paper, we investigate the accuracy of predicting visualization tasks, user performance on tasks, and user traits from gaze data. We show that predictions made with a logistic regression model are significantly better than a baseline classifier, with particularly strong results for predicting task type and user performance. Furthermore, we compare classifiers built with interface-independent and interface-dependent features, and show that the interface-independent features are comparable or superior to interface-dependent ones. Finally, we discuss how the accuracy of predictive models is affected if they are trained with data from trials that had highlighting interventions added to the visualization.
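As a rough illustration of the kind of classification pipeline the abstract describes, the sketch below trains a logistic regression classifier on summary gaze features and compares its cross-validated accuracy against a majority-class baseline. This is not the authors' code: the feature names, the synthetic data, the two-class task labels, and the use of scikit-learn are all assumptions made for the example; the paper's actual features, labels, and evaluation protocol are described in the paper itself.

```python
# Illustrative sketch (not the authors' implementation): predict visualization
# task type from interface-independent gaze features with logistic regression,
# and compare against a majority-class baseline. All data here is synthetic.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical interface-independent gaze features, one row per trial:
# mean fixation duration (ms), fixation rate (1/s), mean saccade length (px),
# and mean absolute saccade angle (deg).
n_trials = 200
X = np.column_stack([
    rng.normal(250, 40, n_trials),   # mean fixation duration
    rng.normal(3.0, 0.5, n_trials),  # fixation rate
    rng.normal(120, 30, n_trials),   # mean saccade length
    rng.normal(45, 15, n_trials),    # mean absolute saccade angle
])
y = rng.integers(0, 2, n_trials)     # task-type label (two task types assumed)

# Logistic regression with feature standardization, evaluated by 10-fold CV,
# alongside a baseline that always predicts the most frequent class.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
baseline = DummyClassifier(strategy="most_frequent")

model_acc = cross_val_score(model, X, y, cv=10).mean()
baseline_acc = cross_val_score(baseline, X, y, cv=10).mean()
print(f"logistic regression accuracy: {model_acc:.2f}")
print(f"majority-class baseline:      {baseline_acc:.2f}")
```

With real gaze data in place of the synthetic features, the same comparison against a baseline classifier mirrors the evaluation structure described in the abstract; on random data the two accuracies should be similar, whereas informative gaze features would pull the logistic regression score above the baseline.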

Published

2015-02-18

How to Cite

Gingerich, M., & Conati, C. (2015). Constructing Models of User and Task Characteristics from Eye Gaze Data for User-Adaptive Information Highlighting. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9485

Issue

Vol. 29 No. 1 (2015)

Section

Main Track: Machine Learning Applications