Unsupervised Learning of Compositional Scene Representations from Multiple Unspecified Viewpoints

Authors

  • Jinyang Yuan, Fudan University
  • Bin Li, Fudan University
  • Xiangyang Xue, Fudan University

DOI:

https://doi.org/10.1609/aaai.v36i8.20880

Keywords:

Machine Learning (ML)

Abstract

Visual scenes are extremely rich in diversity, not only because there are infinite combinations of objects and backgrounds, but also because observations of the same scene may vary greatly as the viewpoint changes. When observing a visual scene that contains multiple objects from multiple viewpoints, humans are able to perceive the scene compositionally from each viewpoint while achieving the so-called "object constancy" across different viewpoints, even though the exact viewpoints are not given. This ability is essential for humans to identify the same object while moving and to learn from vision efficiently. It is intriguing to design models with a similar ability. In this paper, we consider the novel problem of learning compositional scene representations from multiple unspecified viewpoints without any supervision, and propose a deep generative model that solves this problem by separating latent representations into a viewpoint-independent part and a viewpoint-dependent part. To infer latent representations, the model iteratively integrates the information contained in different viewpoints using neural networks. Experiments on several specifically designed synthetic datasets show that the proposed method can effectively learn from multiple unspecified viewpoints.
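The central idea described in the abstract is a factorization of scene latents into a shared, viewpoint-independent part and a per-view, viewpoint-dependent part, with inference that iteratively folds evidence from all views into the shared part. The PyTorch sketch below illustrates only this factorization; it is a minimal toy model under assumed shapes, not the authors' architecture, and all names (MultiViewSceneModel, refine, view_head) and dimensions are hypothetical.

    import torch
    import torch.nn as nn

    class MultiViewSceneModel(nn.Module):
        """Toy sketch: K object slots share viewpoint-independent latents
        across all views; each view additionally gets its own viewpoint latent."""
        def __init__(self, num_slots=4, obj_dim=32, view_dim=8, feat_dim=64):
            super().__init__()
            self.num_slots, self.obj_dim = num_slots, obj_dim
            self.encoder = nn.Sequential(nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
            # Refinement cell: folds per-view evidence into the shared latents.
            self.refine = nn.GRUCell(feat_dim, obj_dim)
            self.view_head = nn.Linear(feat_dim, view_dim)  # viewpoint-dependent part
            self.decoder = nn.Linear(obj_dim + view_dim, 3 * 64 * 64)

        def forward(self, views, num_steps=3):
            # views: (batch, V, 3, 64, 64); the viewpoints themselves are never given.
            b, v = views.shape[:2]
            feats = self.encoder(views.reshape(b, v, -1))   # (b, V, feat_dim)
            z_view = self.view_head(feats)                  # per-view latents
            # Viewpoint-independent latents, one per object slot, shared by all views.
            z_obj = views.new_zeros(b * self.num_slots, self.obj_dim)
            for _ in range(num_steps):
                # Order-agnostic integration of evidence from every viewpoint.
                evidence = feats.mean(dim=1).repeat_interleave(self.num_slots, dim=0)
                z_obj = self.refine(evidence, z_obj)
            scene = z_obj.reshape(b, self.num_slots, -1).sum(dim=1)  # pool slots
            # Each view is decoded from (shared scene latent, its own view latent).
            z = torch.cat([scene.unsqueeze(1).expand(-1, v, -1), z_view], dim=-1)
            return self.decoder(z).reshape_as(views)

    # Example: reconstruct 3 views of 2 scenes without knowing the viewpoints.
    model = MultiViewSceneModel()
    recon = model(torch.rand(2, 3, 3, 64, 64))  # -> (2, 3, 3, 64, 64)

The order-agnostic mean over views in the refinement loop is one simple way to keep inference invariant to how many views are given and in what order; the paper's actual inference procedure may differ.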

Published

2022-06-28

How to Cite

Yuan, J., Li, B., & Xue, X. (2022). Unsupervised Learning of Compositional Scene Representations from Multiple Unspecified Viewpoints. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8971-8979. https://doi.org/10.1609/aaai.v36i8.20880

Issue

Vol. 36 No. 8 (2022)

Section

AAAI Technical Track on Machine Learning III