Deep Clustering of Text Representations for Supervision-Free Probing of Syntax

Authors

  • Vikram Gupta, ShareChat, India
  • Haoyue Shi, Toyota Technological Institute at Chicago
  • Kevin Gimpel, Toyota Technological Institute at Chicago
  • Mrinmaya Sachan, ETH Zurich

DOI:

https://doi.org/10.1609/aaai.v36i10.21317

Keywords:

Speech & Natural Language Processing (SNLP), Machine Learning (ML)

Abstract

We explore deep clustering of multilingual text representations for unsupervised model interpretation and induction of syntax. As these representations are high-dimensional, out-of-the-box methods like K-means do not work well. Our approach therefore jointly transforms the representations into a lower-dimensional, cluster-friendly space and clusters them. We consider two notions of syntax in this work: Part-of-Speech Induction (POSI) and Constituency Labelling (CoLab). Interestingly, we find that Multilingual BERT (mBERT) contains a surprising amount of syntactic knowledge of English, possibly even as much as English BERT (E-BERT). Our model can be used as a supervision-free probe, which is arguably a less biased way of probing. We find that unsupervised probes benefit from higher layers more than supervised probes do. We further note that our unsupervised probe utilizes E-BERT and mBERT representations differently, especially for POSI. We validate the efficacy of our probe by demonstrating its capabilities as an unsupervised syntax induction technique. Our probe works well for both syntactic formalisms by simply adapting the input representations. We report competitive performance of our probe on 45-tag English POSI, state-of-the-art performance on 12-tag POSI across 10 languages, and competitive results on CoLab. We also perform zero-shot syntax induction on resource-impoverished languages and report strong results.
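The abstract describes jointly learning a low-dimensional, cluster-friendly transformation of the representations together with the clustering itself. A minimal DEC-style sketch of that idea on toy data, assuming a linear encoder alternating with K-means (this is an illustration, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for high-dimensional token representations (e.g. BERT
# vectors): three well-separated Gaussian blobs in 50 dimensions.
centers = rng.normal(scale=10.0, size=(3, 50))
X = np.vstack([c + rng.normal(size=(40, 50)) for c in centers])

def kmeans(Z, k, iters=20, seed=0):
    """Plain K-means; returns assignments, centroids, and inertia."""
    r = np.random.default_rng(seed)
    C = Z[r.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        d = ((Z[:, None, :] - C[None]) ** 2).sum(-1)
        a = d.argmin(1)
        C = np.stack([Z[a == j].mean(0) if (a == j).any() else C[j]
                      for j in range(k)])
    d = ((Z[:, None, :] - C[None]) ** 2).sum(-1)
    a = d.argmin(1)
    return a, C, d[np.arange(len(Z)), a].sum()

def best_kmeans(Z, k, restarts=5):
    """Keep the restart with the lowest inertia."""
    return min((kmeans(Z, k, seed=s) for s in range(restarts)),
               key=lambda t: t[2])[:2]

# Joint training: alternate between (1) clustering in the current latent
# space and (2) a gradient step pulling latent points toward their
# assigned centroids, so the projection itself becomes cluster-friendly.
W = rng.normal(scale=0.1, size=(50, 2))   # linear encoder, 50 -> 2
lr = 1e-3
for _ in range(30):
    Z = X @ W
    assign, C = best_kmeans(Z, k=3)
    # Gradient of mean ||x W - c_{a(x)}||^2 with respect to W.
    W -= lr * 2 * X.T @ (Z - C[assign]) / len(X)

Z = X @ W
assign, _ = best_kmeans(Z, k=3)
```

Real deep-clustering objectives pair the clustering loss with additional terms (e.g. reconstruction) to keep the encoder from collapsing all points onto one centroid; the few gentle steps above merely illustrate the alternation.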

Published

2022-06-28

How to Cite

Gupta, V., Shi, H., Gimpel, K., & Sachan, M. (2022). Deep Clustering of Text Representations for Supervision-Free Probing of Syntax. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10720-10728. https://doi.org/10.1609/aaai.v36i10.21317

Section

AAAI Technical Track on Speech and Natural Language Processing