Controllable Image Captioning via Prompting
DOI:
https://doi.org/10.1609/aaai.v37i2.25360
Keywords:
CV: Language and Vision, CV: Multi-modal Vision
Abstract
Despite the remarkable progress of image captioning, existing captioners typically lack the ability to generate desired image captions in a controllable way, e.g., describing the image in a rough or detailed manner, from a factual or emotional perspective, etc. In this paper, we show that a unified model can perform well across diverse domains and freely switch among multiple styles. This controllable capability is achieved by embedding prompt learning into the image captioning framework. Specifically, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in any single domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding heuristic prompt engineering while exhibiting superior performance. At inference time, our model generates the desired stylized captions by selecting the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks, the COCO Karpathy split and TextCaps, using a unified model.
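The prompt design described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch, not the authors' released code, of how style-specific learnable prompt vectors in the continuous word embedding space might be prepended to caption token embeddings; the class name, shapes, and initialization are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class StylePromptBank(nn.Module):
    """Illustrative sketch: one learnable soft prompt per caption style."""

    def __init__(self, num_styles: int, prompt_len: int, embed_dim: int):
        super().__init__()
        # Learnable continuous vectors in the decoder's word embedding
        # space, one (prompt_len, embed_dim) block per style.
        self.prompts = nn.Parameter(
            torch.randn(num_styles, prompt_len, embed_dim) * 0.02
        )

    def forward(self, style_ids: torch.Tensor, token_embeds: torch.Tensor) -> torch.Tensor:
        # style_ids:    (batch,) index of the desired style per example
        # token_embeds: (batch, seq_len, embed_dim) caption word embeddings
        prompt = self.prompts[style_ids]                 # (batch, prompt_len, embed_dim)
        return torch.cat([prompt, token_embeds], dim=1)  # prepend prompt to the sequence

# Selecting a different style id at inference switches the caption style.
bank = StylePromptBank(num_styles=4, prompt_len=8, embed_dim=512)
tokens = torch.randn(2, 20, 512)             # stand-in caption word embeddings
styled = bank(torch.tensor([0, 3]), tokens)  # -> (2, 28, 512)
```

Under this framing, fine-tuning updates the prompt vectors (alongside or instead of the captioner weights), which is what would let a single model absorb multiple stylized datasets and switch styles by index at inference.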
Published
2023-06-26
How to Cite
Wang, N., Xie, J., Wu, J., Jia, M., & Li, L. (2023). Controllable Image Captioning via Prompting. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2617-2625. https://doi.org/10.1609/aaai.v37i2.25360
Section
AAAI Technical Track on Computer Vision II