Efficient Large-Scale Multi-Modal Classification

Authors

  • Douwe Kiela, Facebook AI Research
  • Edouard Grave, Facebook AI Research
  • Armand Joulin, Facebook AI Research
  • Tomas Mikolov, Facebook AI Research

DOI:

https://doi.org/10.1609/aaai.v32i1.11945

Abstract

While the incipient internet was largely text-based, the modern digital world is becoming increasingly multi-modal. Here, we examine multi-modal classification where one modality is discrete, e.g., text, and the other is continuous, e.g., visual representations transferred from a convolutional neural network. In particular, we focus on scenarios where large quantities of data must be classified quickly. We investigate various methods for performing multi-modal fusion and analyze their trade-offs in terms of classification accuracy and computational efficiency. Our findings indicate that including continuous information improves performance over text-only classification on a range of multi-modal classification tasks, even with simple fusion methods. In addition, we experiment with discretizing the continuous features in order to speed up and simplify the fusion process even further. Our results show that fusion with discretized features outperforms text-only classification at a fraction of the computational cost of full multi-modal fusion, with the additional benefit of improved interpretability.
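
To make the discretized-fusion idea concrete, here is a minimal sketch. The abstract does not specify the discretization scheme or classifier, so this is an illustration under assumptions: subvector k-means quantization (a product-quantization-style scheme) turns each continuous feature vector into a few discrete "visual word" tokens, which are concatenated with the text and fed to a plain bag-of-words linear classifier. All data, sizes, and names (texts, feats, visual_tokens) are hypothetical stand-ins, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy stand-ins: text snippets, continuous CNN-style features, labels.
texts = ["a photo of a cat", "stock market report",
         "a photo of a dog", "quarterly earnings call"]
feats = rng.normal(size=(4, 8))   # pretend 8-d CNN feature vectors
labels = [0, 1, 0, 1]

# Discretize: split each vector into subvectors and quantize each with
# k-means, so every image becomes a handful of discrete tokens.
n_sub, k = 2, 2                   # toy sizes: 2 subvectors, 2 centroids each
sub_dim = feats.shape[1] // n_sub
quantizers = [
    KMeans(n_clusters=k, n_init=10, random_state=0)
    .fit(feats[:, i * sub_dim:(i + 1) * sub_dim])
    for i in range(n_sub)
]

def visual_tokens(x):
    """Map one continuous feature vector to discrete 'visual word' tokens."""
    return " ".join(
        f"vis{i}_{q.predict(x[None, i * sub_dim:(i + 1) * sub_dim])[0]}"
        for i, q in enumerate(quantizers)
    )

# Fuse by concatenation: visual tokens are appended to the text, so a
# single bag-of-words linear classifier handles both modalities.
fused = [t + " " + visual_tokens(x) for t, x in zip(texts, feats)]
clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(fused, labels)
print(clf.predict(fused))
```

This illustrates why discretized fusion is cheap: at inference time, quantization reduces to a few nearest-centroid lookups, and the classifier remains a sparse linear model over tokens, rather than operating on dense continuous vectors.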

Published

2018-04-27

How to Cite

Kiela, D., Grave, E., Joulin, A., & Mikolov, T. (2018). Efficient Large-Scale Multi-Modal Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11945