Planning for Social Interaction in a Robot Bartender Domain
DOI: https://doi.org/10.1609/icaps.v23i1.13589

Keywords: Planning with Incomplete Information, Dialogue Planning, Planning Applications, Execution Monitoring

Abstract
A robot coexisting with humans must not only be able to perform physical tasks, but must also be able to interact with humans in a socially appropriate manner. In many social settings, this involves the use of social signals like gaze, facial expression, and language. In this paper, we describe an application of planning to task-based social interaction using a robot that must interact with multiple human agents in a simple bartending domain. We show how social states are inferred from low-level sensors, using vision and speech as input modalities, and how we use the knowledge-level PKS planner to construct plans with task, dialogue, and social actions, as an alternative to current mainstream methods of interaction management. The resulting system has been evaluated in a real-world study with human subjects.