Learning to Mediate Perceptual Differences in Situated Human-Robot Dialogue

Authors

  • Changsong Liu, Michigan State University
  • Joyce Chai, Michigan State University

DOI:

https://doi.org/10.1609/aaai.v29i1.9497

Keywords:

Referential Grounding, Weight Learning, Human-Robot Dialogue

Abstract

In human-robot dialogue, although a robot and its human partner are co-present in a shared environment, they have significantly mismatched perceptual capabilities (e.g., in recognizing objects in the surroundings). When a shared perceptual basis is missing, it becomes difficult for the robot to identify the referents in the physical world that the human refers to (i.e., a problem of referential grounding). To overcome this problem, we have developed an optimization-based approach that allows the robot to detect and adapt to perceptual differences. Through online interaction with the human, the robot learns a set of weights indicating how reliably (or unreliably) each dimension of its perception of the environment (e.g., object type, object color) maps to the human's linguistic descriptors, and adjusts its word models accordingly. Our empirical evaluation shows that this weight-learning approach successfully adjusts the weights to reflect the robot's perceptual limitations. The learned weights, together with the updated word models, lead to a significant improvement in referential grounding in future dialogues.
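The abstract describes the approach only at a high level, and the page carries no code. The sketch below is a hypothetical illustration of the general idea, not the paper's implementation: it assumes a linear compatibility score over perceptual dimensions and a simple perceptron-style weight update driven by the human's correction. The dimension names, the word-model probabilities, and the helpers (compatibility, ground, update_weights) are all invented for illustration.

```python
# Hypothetical perceptual dimensions; the paper mentions object type and color.
DIMENSIONS = ["type", "color", "location"]

def compatibility(obj, weights):
    """Weighted match between a human description and one perceived object.

    obj maps each dimension to P(human's word | robot's perceived feature),
    as produced by the robot's (possibly unreliable) word models.
    """
    return sum(w * obj[d] for d, w in zip(DIMENSIONS, weights))

def ground(objects, weights):
    """Pick the object whose perceived features best match the description."""
    scores = [compatibility(obj, weights) for obj in objects]
    return scores.index(max(scores))

def update_weights(weights, chosen, correct, lr=0.1):
    """Perceptron-style update (an assumption, not the paper's exact rule):
    raise the weight of dimensions where the true referent outscored the
    wrongly chosen object, and lower it where it did not."""
    new = [max(0.0, w + lr * (correct[d] - chosen[d]))
           for w, d in zip(weights, DIMENSIONS)]
    total = sum(new)
    return [w / total for w in new]  # renormalize to keep weights comparable

# Toy interaction: the human says "the red cup on the left"; the robot's
# color perception is unreliable, so object 0 (the true referent) scores
# poorly on the color dimension.
objects = [
    {"type": 0.8, "color": 0.1, "location": 0.6},  # true referent, color misread
    {"type": 0.7, "color": 0.9, "location": 0.5},  # distractor
]
weights = [1 / 3] * 3
guess = ground(objects, weights)          # robot wrongly picks object 1
if guess != 0:                            # human corrects: it was object 0
    weights = update_weights(weights, objects[guess], objects[0])
print(weights)  # the color weight drops, reflecting that dimension's unreliability
```

In this toy run the update lowers the weight on color and raises type and location, mirroring the abstract's claim that the learned weights come to reflect the robot's perceptual limitations and improve grounding in later dialogues.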

Published

2015-02-19

How to Cite

Liu, C., & Chai, J. (2015). Learning to Mediate Perceptual Differences in Situated Human-Robot Dialogue. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9497