Unsupervised Learning with Contrastive Latent Variable Models


  • Kristen A. Severson IBM Research
  • Soumya Ghosh IBM Research
  • Kenney Ng IBM Research




In unsupervised learning, dimensionality reduction is an important tool for data exploration and visualization. Because these aims are typically open-ended, it can be useful to frame the problem as looking for patterns that are enriched in one dataset relative to another. Such pairs of datasets occur commonly, for instance a population of interest vs. a control population, or signal vs. signal-free recordings. However, there are few methods that operate on sets of data as opposed to individual data points or sequences. Here, we present a probabilistic model for dimensionality reduction that discovers signal enriched in a target dataset relative to a background dataset. The data in these sets do not need to be paired or grouped beyond set membership. By using a probabilistic model in which some structure is shared between the two datasets and some is unique to the target dataset, we are able to recover interesting structure in the latent space of the target dataset. The method also has the advantages of a probabilistic model, namely that it allows for the incorporation of prior information, handles missing data, and can be generalized to different distributional assumptions. We describe several possible variations of the model and demonstrate the application of the technique to de-noising, feature selection, and subgroup discovery settings.
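To illustrate the shared-vs-target-specific structure the abstract describes, the sketch below generates synthetic data from such a model and recovers the target-enriched directions with a simple two-step heuristic (project out the background subspace, then decompose the residual). This is only a minimal illustration of the modeling idea, not the paper's probabilistic inference procedure; all dimensions, loading matrices (`S`, `W`), and noise levels are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k_shared, k_target = 20, 2, 2          # observed dim, latent dims (assumed)
n_target, n_background = 500, 500

# Hypothetical loading matrices: S spans structure shared by both
# datasets, W spans structure present only in the target dataset.
S = rng.normal(size=(d, k_shared))
W = rng.normal(size=(d, k_target))

# Background data: shared latent factors only, plus small noise.
bg = (rng.normal(size=(n_background, k_shared)) @ S.T
      + 0.1 * rng.normal(size=(n_background, d)))

# Target data: shared factors plus target-specific factors.
tg = (rng.normal(size=(n_target, k_shared)) @ S.T
      + rng.normal(size=(n_target, k_target)) @ W.T
      + 0.1 * rng.normal(size=(n_target, d)))

# Heuristic recovery (not the paper's method): estimate the shared
# subspace from the background via SVD, remove it from the target
# data, then take the top directions of the residual.
_, _, Vb = np.linalg.svd(bg - bg.mean(0), full_matrices=False)
shared_basis = Vb[:k_shared]                        # rows are orthonormal
resid = tg - tg @ shared_basis.T @ shared_basis     # strip shared variation
_, _, Vt = np.linalg.svd(resid - resid.mean(0), full_matrices=False)
target_basis = Vt[:k_target]                        # target-enriched directions
```

In the full probabilistic model, both latent spaces and the noise are instead inferred jointly, which is what enables the paper's extensions to priors, missing data, and alternative likelihoods.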




How to Cite

Severson, K. A., Ghosh, S., & Ng, K. (2019). Unsupervised Learning with Contrastive Latent Variable Models. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4862-4869. https://doi.org/10.1609/aaai.v33i01.33014862



AAAI Technical Track: Machine Learning