Co-Regularized PLSA for Multi-Modal Learning

Authors

  • Xin Wang, State University of New York at Albany
  • MingChing Chang, State University of New York at Albany
  • Yiming Ying, State University of New York at Albany
  • Siwei Lyu, State University of New York at Albany

DOI:

https://doi.org/10.1609/aaai.v30i1.10204

Keywords:

PLSA, Topic Model, Multi-Modal Learning

Abstract

Many learning problems in real-world applications involve rich datasets comprising multiple information modalities. In this work, we study co-regularized PLSA (coPLSA) as an efficient solution to probabilistic topic analysis of multi-modal data. In coPLSA, similarities between the topic compositions of a data entity across different data modalities are measured with divergences between discrete probabilities, which are incorporated as a co-regularizer to augment the individual PLSA models over each data modality. We derive efficient iterative learning algorithms for coPLSA with symmetric KL, L2, and L1 divergences as co-regularizers; in each case, the essential optimization problem admits simple numerical solutions that entail only matrix arithmetic operations and the numerical solution of 1D nonlinear equations. We evaluate the performance of the coPLSA algorithms on text/image cross-modal retrieval tasks, on which they show competitive performance with state-of-the-art methods.
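To make the co-regularizer concrete, the sketch below computes the three divergences the abstract names (symmetric KL, L2, L1) between per-entity topic compositions from two modalities and sums them into a co-regularization penalty. This is an illustrative sketch only, not the authors' implementation: the function names, the toy topic distributions, and the two-modality setup are all assumptions made for demonstration.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete distributions.

    A small eps keeps the logarithms finite when an entry is zero.
    """
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def l2_div(p, q):
    """Squared L2 distance between two discrete distributions."""
    return float(np.sum((p - q) ** 2))

def l1_div(p, q):
    """L1 (total-variation-style) distance between two discrete distributions."""
    return float(np.sum(np.abs(p - q)))

def co_regularizer(theta_a, theta_b, div):
    """Sum of divergences between each entity's topic compositions
    in modality A and modality B (rows = entities, columns = topics)."""
    return sum(div(p, q) for p, q in zip(theta_a, theta_b))

# Toy per-entity topic compositions for two modalities (rows sum to 1);
# these numbers are hypothetical, chosen only for illustration.
theta_text = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1]])
theta_image = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1]])

print(co_regularizer(theta_text, theta_image, sym_kl))
print(co_regularizer(theta_text, theta_image, l2_div))
print(co_regularizer(theta_text, theta_image, l1_div))  # → 0.4
```

In coPLSA this penalty would be added to the sum of the per-modality PLSA log-likelihood objectives, so that the learned topic compositions of the same entity are pulled toward agreement across modalities.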

Published

2016-03-02

How to Cite

Wang, X., Chang, M., Ying, Y., & Lyu, S. (2016). Co-Regularized PLSA for Multi-Modal Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10204

Section

Technical Papers: Machine Learning Methods