Discovering and Distinguishing Multiple Visual Senses for Polysemous Words

Authors

  • Yazhou Yao, University of Technology Sydney
  • Jian Zhang, University of Technology Sydney
  • Fumin Shen, University of Electronic Science and Technology of China
  • Wankou Yang, Southeast University
  • Pu Huang, Nanjing University of Posts and Telecommunications
  • Zhenmin Tang, Nanjing University of Science and Technology

Keywords:

Visual Polysemy, Multiple Visual Senses, Polysemous Words

Abstract

To reduce the dependence on labeled data, there have been increasing research efforts on learning visual classifiers by exploiting web images. One issue that limits their performance is visual polysemy: a single query word can refer to several distinct visual concepts. In this work, we present a novel framework that addresses polysemy by allowing sense-specific diversity in search results. Specifically, we first discover a list of possible semantic senses and retrieve sense-specific images for each. We then merge visually similar semantic senses and prune noise using the retrieved images. Finally, we train a visual classifier for each selected semantic sense and use the learned sense-specific classifiers to distinguish multiple visual senses. Extensive experiments on classifying images into sense-specific categories and on re-ranking search results demonstrate the superiority of our proposed approach.
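The three-step pipeline in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' actual method: senses are represented by centroids of toy image feature vectors, visually similar senses are merged by cosine similarity against an assumed threshold, and classification is nearest-centroid.

```python
# Hypothetical sketch of the pipeline: discover senses, merge visually
# similar ones, then classify images by nearest sense centroid.
# Feature vectors, sense names, and the threshold are illustrative only.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def merge_senses(sense_images, threshold=0.95):
    """Step 2: merge senses whose image centroids are visually similar."""
    merged = {}
    for sense, images in sense_images.items():
        c = centroid(images)
        for kept in merged:
            if cosine(c, merged[kept]["centroid"]) > threshold:
                merged[kept]["images"].extend(images)
                merged[kept]["centroid"] = centroid(merged[kept]["images"])
                break
        else:
            merged[sense] = {"images": list(images), "centroid": c}
    return merged

def classify(image, senses):
    """Step 3: assign an image to the most similar sense centroid."""
    return max(senses, key=lambda s: cosine(image, senses[s]["centroid"]))

# Toy features for the polysemous word "apple": fruit vs. company senses.
sense_images = {
    "apple (fruit)":   [[1.0, 0.1], [0.9, 0.2]],
    "apple (orchard)": [[0.95, 0.15], [1.0, 0.2]],  # visually close to fruit
    "apple (company)": [[0.1, 1.0], [0.2, 0.9]],
}
senses = merge_senses(sense_images)
print(len(senses))                      # near-duplicate senses merged -> 2
print(classify([0.9, 0.1], senses))     # -> apple (fruit)
```

In practice the paper operates on retrieved web images and learned visual features rather than hand-set vectors; the sketch only mirrors the discover-merge-classify structure.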

Published

2018-04-25

How to Cite

Yao, Y., Zhang, J., Shen, F., Yang, W., Huang, P., & Tang, Z. (2018). Discovering and Distinguishing Multiple Visual Senses for Polysemous Words. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11255