Learning Compositional Sparse Models of Bimodal Percepts

Authors

  • Suren Kumar, State University of New York at Buffalo
  • Vikas Dhiman, State University of New York at Buffalo
  • Jason Corso, State University of New York at Buffalo

DOI:

https://doi.org/10.1609/aaai.v28i1.8753

Keywords:

compositional model, bimodal sparse representation, vision and audio

Abstract

Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because these models have been unable to learn the compositional structure directly. Yet the compositional structure of a given domain can be grounded in a separate domain, thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal percepts that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on 'red triangles' and 'blue squares'; yet it will also have implicitly learned 'red squares' and 'blue triangles'. The structure of the projections, and hence the compositional basis, is learned automatically for a given language model. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes in a tabletop setup. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.
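To make the idea of a jointly learned bimodal sparse representation concrete, the following is a minimal illustrative sketch, not the authors' formulation: image and audio feature vectors for each percept are concatenated and a single shared dictionary is learned over the joint vector, so one sparse code explains both modalities and the per-modality sub-dictionaries can be used for cross-modal inference. All names, feature dimensions, and the random data are hypothetical; the sketch uses scikit-learn's generic dictionary learning.

    # Illustrative joint (bimodal) sparse coding sketch; assumed setup, not the paper's method.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    rng = np.random.default_rng(0)
    n_samples, d_img, d_aud, n_atoms = 200, 64, 32, 20

    # Hypothetical paired features (in the paper: images and spoken utterances).
    X_img = rng.standard_normal((n_samples, d_img))
    X_aud = rng.standard_normal((n_samples, d_aud))
    X_joint = np.hstack([X_img, X_aud])           # one row per bimodal percept

    # Learn a shared dictionary whose atoms span both modalities.
    dico = DictionaryLearning(n_components=n_atoms, alpha=1.0, max_iter=50,
                              random_state=0).fit(X_joint)
    D = dico.components_                          # shape: (n_atoms, d_img + d_aud)
    D_img, D_aud = D[:, :d_img], D[:, d_img:]     # per-modality sub-dictionaries

    # Given only an image, infer its sparse code and reconstruct the audio part.
    codes = sparse_encode(X_img[:5], D_img, alpha=1.0)
    aud_pred = codes @ D_aud                      # cross-modal reconstruction
    print(aud_pred.shape)                         # (5, 32)

Because each dictionary atom couples an image pattern with an audio pattern, a sparse code inferred from one modality carries over to the other; this is the generic mechanism by which a shared code supports the cross-modal generalization described in the abstract.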

Published

2014-06-19

How to Cite

Kumar, S., Dhiman, V., & Corso, J. (2014). Learning Compositional Sparse Models of Bimodal Percepts. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8753

Issue

Vol. 28 No. 1 (2014)

Section

AAAI Technical Track: Cognitive Systems