Learning V1 Simple Cells with Vector Representation of Local Content and Matrix Representation of Local Motion
DOI: https://doi.org/10.1609/aaai.v36i6.20622
Keywords: Machine Learning (ML), Computer Vision (CV)
Abstract
This paper proposes a representational model for image pairs, such as consecutive video frames, that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1). The model couples two components: (1) vector representations of the local contents of images and (2) matrix representations of the local pixel displacements caused by relative motion between the agent and the objects in the 3D scene. When the image frame changes due to local pixel displacements, the vectors are multiplied by the matrices that represent those displacements. The vector representation is thus equivariant, as it varies according to the local displacements. Our experiments show that our model can learn Gabor-like filter pairs of quadrature phases. The profiles of the learned filters match those of simple cells in Macaque V1. Moreover, we demonstrate that the model can learn to infer local motions in either a supervised or unsupervised manner. With such a simple model, we achieve competitive results on optical flow estimation.
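The abstract states the coupling only at a high level. The sketch below illustrates that relation in NumPy under stated assumptions: local displacements are quantized to a small grid, local patches are encoded by linear filters, and one matrix is assigned to each discretized displacement. All names, dimensions, and the toy inference loop are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Each local patch around pixel x at time t is encoded by a content vector v_t(x);
# a local displacement d is represented by a matrix M(d). The model in the abstract
# assumes the equivariance relation  v_{t+1}(x) ~= M(d) v_t(x)  when the content at x
# moves by d. (Sketch only; shapes and the displacement grid are assumptions.)

D = 16                                                # assumed content-vector dimension
displacements = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)]

# One matrix per discretized local displacement (here initialized near identity;
# in the paper these matrices are learned jointly with the encoding filters).
M = {d: np.eye(D) + 0.01 * np.random.randn(D, D) for d in displacements}

def encode(patch, W):
    """Encode a local patch into a content vector with linear filters W (D x P)."""
    return W @ patch.reshape(-1)

def infer_displacement(v_t, v_t1):
    """Unsupervised-style inference: pick the displacement whose matrix best maps
    the current content vector to the observed next-frame content vector."""
    return min(displacements, key=lambda d: np.linalg.norm(M[d] @ v_t - v_t1))

# Toy usage with random 8x8 patches standing in for local image content.
W = np.random.randn(D, 64)            # assumed linear encoder (the paper learns Gabor-like filters)
v_t = encode(np.random.randn(8, 8), W)
v_t1 = M[(1, 0)] @ v_t                # simulate a one-pixel rightward displacement
print(infer_displacement(v_t, v_t1))  # recovers (1, 0) in this noiseless toy case
```

The design point the sketch is meant to convey is the equivariance in the abstract: motion acts on the vector representation by matrix multiplication, so comparing transformed vectors across frames is what allows local motion to be inferred.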
Published: 2022-06-28
How to Cite
Gao, R., Xie, J., Huang, S., Ren, Y., Zhu, S.-C., & Wu, Y. N. (2022). Learning V1 Simple Cells with Vector Representation of Local Content and Matrix Representation of Local Motion. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6674-6684. https://doi.org/10.1609/aaai.v36i6.20622
Section: AAAI Technical Track on Machine Learning I