A POMDP Model of Eye-Hand Coordination

Authors

  • Tom Erez, Washington University in St. Louis
  • Julian Tramper, Radboud University
  • William Smart, Washington University in St. Louis
  • Stan Gielen, Radboud University

DOI:

https://doi.org/10.1609/aaai.v25i1.8007

Abstract

This paper presents a generative model of eye-hand coordination. We use numerical optimization to solve for the joint behavior of an eye and two hands, deriving a predicted motion pattern from first principles, without imposing heuristics. We model the planar scene as a POMDP with 17 continuous state dimensions. Belief-space optimization is facilitated by using a nominal-belief heuristic, whereby we assume (during planning) that the maximum-likelihood observation is always obtained. Since a globally-optimal solution for such a high-dimensional domain is computationally intractable, we employ local optimization in the belief domain. By solving for a locally-optimal plan through belief space, we generate a motion pattern of mutual coordination between hands and eye: the eye's saccades disambiguate the scene in a task-relevant manner, and the hands' motions anticipate the eye's saccades. Finally, the model is validated through a behavioral experiment, in which human subjects perform the same eye-hand coordination task. We show that the simulated behavior is congruent with the experimental results.
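
The core computational idea can be illustrated in a few lines. The sketch below is a toy linear-Gaussian problem, not the authors' implementation; all matrices, cost weights, and the finite-difference optimizer are illustrative assumptions. It shows why the maximum-likelihood-observation assumption helps: the predicted innovation is zero, so the Kalman update leaves the belief mean untouched and only contracts the covariance, making the belief dynamics deterministic and amenable to local trajectory optimization.

```python
# A toy linear-Gaussian sketch of belief-space planning under the
# maximum-likelihood-observation ("nominal belief") assumption. All names,
# dimensions, and cost weights are illustrative, not the paper's
# 17-dimensional eye-hand model.
import numpy as np

A = np.eye(2)               # dynamics: x' = A x + B u + process noise
B = np.eye(2)
Q = 0.05 * np.eye(2)        # process noise covariance
H = np.array([[1.0, 0.0]])  # only the first state coordinate is observed
R = np.array([[0.1]])       # observation noise covariance

def belief_step(mu, Sigma, u):
    """One Kalman-style belief update, assuming the ML observation arrives."""
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    S = H @ Sigma_bar @ H.T + R
    K = Sigma_bar @ H.T @ np.linalg.inv(S)
    # ML observation => zero innovation => the mean is left unchanged and
    # only the covariance contracts, so belief dynamics are deterministic.
    Sigma_new = (np.eye(2) - K @ H) @ Sigma_bar
    return mu_bar, Sigma_new

def plan_cost(us, mu0, Sigma0, goal):
    """Running uncertainty + control effort, plus terminal goal error."""
    mu, Sigma = mu0, Sigma0
    c = 0.0
    for u in us:
        mu, Sigma = belief_step(mu, Sigma, u)
        c += 0.1 * np.trace(Sigma) + 0.01 * float(u @ u)
    return c + 10.0 * float((mu - goal) @ (mu - goal))

def local_optimize(us, mu0, Sigma0, goal, iters=200, lr=0.05, eps=1e-4):
    """Finite-difference gradient descent to a locally optimal plan."""
    us = us.copy()
    for _ in range(iters):
        base = plan_cost(us, mu0, Sigma0, goal)
        grad = np.zeros_like(us)
        for idx in np.ndindex(*us.shape):
            bumped = us.copy()
            bumped[idx] += eps
            grad[idx] = (plan_cost(bumped, mu0, Sigma0, goal) - base) / eps
        us -= lr * grad
    return us

if __name__ == "__main__":
    mu0, Sigma0 = np.zeros(2), np.eye(2)
    plan = local_optimize(np.zeros((10, 2)), mu0, Sigma0,
                          goal=np.array([1.0, 1.0]))
    print("optimized first control:", plan[0])
```

Penalizing the covariance trace along the trajectory is what makes information-gathering actions valuable to the optimizer; this is the analogue of the mechanism by which, in the full model, saccades disambiguate the scene in a task-relevant manner.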

Published

2011-08-04

How to Cite

Erez, T., Tramper, J., Smart, W., & Gielen, S. (2011). A POMDP Model of Eye-Hand Coordination. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 952-957. https://doi.org/10.1609/aaai.v25i1.8007

Issue

Vol. 25 No. 1 (2011)

Section

Reasoning about Plans, Processes and Actions