Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization for Heterogeneous Representational Coarseness

Authors

  • Dianbo Liu Mila-Quebec AI Institute
  • Alex Lamb Mila-Quebec AI Institute
  • Xu Ji Mila-Quebec AI Institute
  • Pascal Junior Tikeng Notsawo Mila-Quebec AI Institute
  • Michael Mozer Google Research, Brain Team
  • Yoshua Bengio Mila-Quebec AI Institute, CIFAR AI Chair
  • Kenji Kawaguchi National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v37i7.26061

Keywords:

ML: Deep Neural Architectures, ML: Deep Neural Network Algorithms

Abstract

Vector Quantization (VQ) is a method for discretizing latent representations and has become a major part of the deep learning toolkit. It has been shown, both theoretically and empirically, that discretizing representations leads to improved generalization, including in reinforcement learning, where discretization can be used to bottleneck multi-agent communication and thereby promote agent specialization and robustness. In most VQ-based methods, the discretization tightness is defined by the number of discrete codes in the representation vector and the codebook size, both of which are fixed as hyperparameters. In this work, we propose learning to dynamically select the discretization tightness conditioned on the input, based on the hypothesis that data naturally contain variations in complexity that call for different levels of representational coarseness, as is observed in many heterogeneous datasets. We show that dynamically varying the tightness of communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks with heterogeneous representations.
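
To make the idea concrete, the sketch below shows one way an input-conditioned choice of codebook size could be wired into a VQ layer in PyTorch. This is a minimal illustration, not the authors' implementation: the module name DynamicVQ, the candidate codebook sizes, and the Gumbel-softmax gate are all illustrative assumptions, and the standard VQ-VAE codebook and commitment losses are omitted for brevity.

```python
# Hypothetical sketch: a VQ layer whose codebook size ("tightness") is chosen
# per input by a small gating network. Not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicVQ(nn.Module):
    def __init__(self, dim, codebook_sizes=(16, 64, 256)):
        super().__init__()
        self.codebook_sizes = codebook_sizes              # candidate tightness levels (assumed)
        self.codebook = nn.Embedding(max(codebook_sizes), dim)
        self.gate = nn.Linear(dim, len(codebook_sizes))   # selects a tightness level per input

    def forward(self, z):                                 # z: (batch, dim) continuous latents
        # Discrete selection over tightness levels via straight-through Gumbel-softmax.
        level = F.gumbel_softmax(self.gate(z), tau=1.0, hard=True)  # (batch, n_levels), one-hot

        outputs = []
        for k in self.codebook_sizes:
            codes = self.codebook.weight[:k]              # active prefix of the shared codebook
            idx = torch.cdist(z, codes).argmin(dim=1)     # nearest code under each tightness
            q = codes[idx]
            q = z + (q - z).detach()                      # straight-through estimator
            outputs.append(q)

        q_all = torch.stack(outputs, dim=1)               # (batch, n_levels, dim)
        return (level.unsqueeze(-1) * q_all).sum(dim=1)   # keep only the gated level


# Usage: quantize a batch of latents with an input-dependent codebook size.
vq = DynamicVQ(dim=32)
z = torch.randn(8, 32)
z_q = vq(z)   # (8, 32) quantized latents, differentiable w.r.t. z and the gate
```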

Published

2023-06-26

How to Cite

Liu, D., Lamb, A., Ji, X., Tikeng Notsawo, P. J., Mozer, M., Bengio, Y., & Kawaguchi, K. (2023). Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization for Heterogeneous Representational Coarseness. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8825-8833. https://doi.org/10.1609/aaai.v37i7.26061

Issue

Vol. 37 No. 7 (2023)

Section

AAAI Technical Track on Machine Learning II