FIXMYPOSE: Pose Correctional Captioning and Retrieval

Authors

  • Hyounghun Kim, University of North Carolina at Chapel Hill
  • Abhay Zala, University of North Carolina at Chapel Hill
  • Graham Burri, University of North Carolina at Chapel Hill
  • Mohit Bansal, University of North Carolina at Chapel Hill

DOI:

https://doi.org/10.1609/aaai.v35i14.17555

Keywords:

Language Grounding & Multi-modal NLP

Abstract

Interest in physical therapy and individual exercises such as yoga/dance has increased alongside the well-being trend, and people around the world enjoy such exercises at home/office via video streaming platforms. However, such exercises are hard to follow without expert guidance. Even if experts can help, it is almost impossible to give personalized feedback to every trainee remotely. Thus, automated pose correction systems are required more than ever, and we introduce a new captioning dataset named FixMyPose to address this need. We collect natural language descriptions of correcting a “current” pose to look like a “target” pose. To support a multilingual setup, we collect descriptions in both English and Hindi. The collected descriptions have interesting linguistic properties such as egocentric relations to environment objects, analogous references, etc., requiring an understanding of spatial relations and commonsense knowledge about postures. Further, to avoid ML biases, we maintain a balance across characters with diverse demographics, who perform a variety of movements in several interior environments (e.g., homes, offices). From our FixMyPose dataset, we introduce two tasks: the pose-correctional-captioning task and its reverse, the target-pose-retrieval task. In the correctional-captioning task, models must generate descriptions of how to move from the current pose image to the target pose image, whereas in the retrieval task, models must select the correct target pose given the initial pose and the correctional description. We present strong cross-attention baseline models (uni/multimodal, RL, multilingual) and also show that our baselines are competitive with other models when evaluated on other image-difference datasets. We also propose new task-specific metrics (object-match, body-part-match, direction-match) and conduct human evaluation for more reliable assessment, demonstrating a large human-model performance gap that suggests promising room for future work. Finally, to verify the sim-to-real transfer of our FixMyPose dataset, we collect a set of real images and show promising performance on these images. Data and code are available: https://fixmypose-unc.github.io.
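As a concrete picture of the target-pose-retrieval task, the sketch below scores each candidate target image against the current-pose image and the correctional description, then selects the highest-scoring candidate. This is a minimal toy formulation under assumed components, not the paper's baseline: the feature sizes, the GRU text encoder, and the bilinear scoring head are all illustrative assumptions.

# Minimal sketch of target-pose retrieval: given the "current" pose image and a
# correctional description, score each candidate target image and pick the best.
# All module choices and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class RetrievalScorer(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, hid=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid)                # project image features
        self.txt_enc = nn.GRU(txt_dim, hid, batch_first=True)  # encode the description
        self.score = nn.Bilinear(hid, hid, 1)                  # joint (pose-change, text) score

    def forward(self, cur_img, cand_img, text):
        # The difference of projected features stands in for "what changed"
        # between the current pose and a candidate target pose.
        diff = self.img_proj(cand_img) - self.img_proj(cur_img)
        _, h = self.txt_enc(text)                # final hidden state summarizes the text
        return self.score(diff, h.squeeze(0))    # higher = better match

# Toy usage with random features: pick the candidate whose pose change best
# matches the correctional description.
model = RetrievalScorer()
cur = torch.randn(1, 2048)       # current-pose image features
cands = torch.randn(5, 2048)     # 5 candidate target images
desc = torch.randn(1, 12, 300)   # 12-token description embeddings
scores = torch.stack([model(cur, c.unsqueeze(0), desc) for c in cands])
print("predicted target index:", scores.argmax().item())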
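Similarly, the task-specific metrics can be pictured as keyword matching between a generated caption and its reference. The snippet below sketches a direction-match-style score; the direction vocabulary and the F1 formulation are assumptions for illustration, not the paper's exact definitions (object-match and body-part-match would swap in object and body-part vocabularies).

# Hedged sketch of a "direction-match"-style metric: does the generated caption
# mention the same direction words as the reference? The word list and the F1
# scoring are illustrative assumptions.
DIRECTIONS = {"left", "right", "up", "down", "forward", "backward",
              "clockwise", "counterclockwise"}

def direction_match(generated: str, reference: str) -> float:
    gen = {w for w in generated.lower().split() if w in DIRECTIONS}
    ref = {w for w in reference.lower().split() if w in DIRECTIONS}
    if not gen and not ref:
        return 1.0  # nothing to match in either caption
    overlap = len(gen & ref)
    precision = overlap / len(gen) if gen else 0.0
    recall = overlap / len(ref) if ref else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

print(direction_match("move your left arm up", "raise the left arm up"))  # 1.0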

Published

2021-05-18

How to Cite

Kim, H., Zala, A., Burri, G., & Bansal, M. (2021). FIXMYPOSE: Pose Correctional Captioning and Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 13161-13170. https://doi.org/10.1609/aaai.v35i14.17555

Issue

Vol. 35 No. 14 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing I