Multimodal Ensembling for Zero-Shot Image Classification

Authors

  • Javon Hickmon University of Washington, Seattle, WA

DOI:

https://doi.org/10.1609/aaai.v38i21.30551

Keywords:

Multimodal Machine Learning, Image Classification, Machine Learning, Fine-Grained Image Classification, Machine Perception

Abstract

Artificial intelligence has made significant progress in image classification, an essential task for machine perception to achieve human-level image understanding. Despite recent advances in vision-language fields, multimodal image classification remains challenging, for two main reasons. First, models with low capacity often suffer from underfitting and thus underperform on fine-grained image classification. Second, it is important to ensure high-quality data with rich cross-modal representations of each class, which is often difficult to generate. Here, we utilize ensemble learning to reduce the impact of these issues on pre-trained models. We aim to create a meta-model that combines the predictions of multiple open-vocabulary multimodal models trained on different data to produce more robust and accurate predictions. By combining ensemble learning with multimodal machine learning, we achieve higher prediction accuracy without any additional training or fine-tuning; the method is therefore completely zero-shot.
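The abstract's core idea of combining the predictions of multiple open-vocabulary models without further training can be sketched as equal-weight soft voting over each model's image-text similarity scores. The sketch below is illustrative only, under the assumption that each pre-trained model (e.g. a CLIP-style encoder) yields a matrix of image-vs-class-prompt similarity logits; the function names and example logits are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_zero_shot(similarity_logits):
    """Equal-weight soft-voting ensemble (an assumed combination rule).

    similarity_logits: list of (n_images, n_classes) arrays, one per
    pre-trained multimodal model, holding image-text similarity logits.
    Returns averaged per-class probabilities; no training is involved,
    so the ensemble remains zero-shot.
    """
    per_model_probs = [softmax(s) for s in similarity_logits]
    return np.mean(per_model_probs, axis=0)

# Hypothetical logits from two models, one image, three candidate classes.
model_a = np.array([[2.0, 0.5, 0.1]])
model_b = np.array([[1.5, 1.4, 0.2]])
probs = ensemble_zero_shot([model_a, model_b])
predicted_class = probs.argmax(axis=1)  # → array([0])
```

Averaging probabilities rather than raw logits keeps models with differently scaled similarity scores from dominating the vote; whether the paper uses this exact rule or a weighted variant is not stated in the abstract.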

Published

2024-03-24

How to Cite

Hickmon, J. (2024). Multimodal Ensembling for Zero-Shot Image Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23747-23749. https://doi.org/10.1609/aaai.v38i21.30551