Multi-Modal Learning over User-Contributed Content from Cross-Domain Social Media

Authors

  • Wen-Yu Lee, National Taiwan University

DOI:

https://doi.org/10.1609/aaai.v30i1.9813

Keywords:

Cross-Media Mining, Social Media, Multi-Modal Learning

Abstract

The goal of this research is to discover and summarize data from emerging social media into information of interest. Specifically, leveraging user-contributed data from cross-domain social media, the idea is to perform multi-modal learning for a given photo, aiming to present people’s descriptions or comments, geographical information, and events of interest closely related to the photo. This information can then be used for various purposes, such as serving as a real-time guide for tourists to improve the quality of tourism. Accordingly, this research investigates modern challenges in image annotation, image retrieval, and cross-media mining, and presents promising ways to address these challenges.
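The abstract gives no implementation details, but the core idea (linking a photo to related text, locations, and events drawn from cross-domain social media) can be illustrated with a minimal cross-modal retrieval sketch. Everything below is hypothetical: the placeholder encoder `embed_photo`, the toy `items` list, and the embedding dimension `DIM` are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch (not the paper's method): retrieve text annotations,
# geo-tags, and events for a query photo by nearest-neighbour search in a
# shared embedding space. Embeddings here are random placeholders; a real
# system would use learned image/text encoders.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # hypothetical shared embedding dimension

# User-contributed items from different social-media domains, each paired
# with a (placeholder) embedding in the shared space.
items = [
    {"type": "comment", "text": "Sunset over the harbour",  "geo": (25.03, 121.56)},
    {"type": "comment", "text": "Street food night market", "geo": (25.04, 121.51)},
    {"type": "event",   "text": "Lantern Festival",         "geo": (25.05, 121.52)},
]
item_vecs = rng.normal(size=(len(items), DIM))
item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)

def embed_photo(photo_pixels):
    """Placeholder image encoder; stands in for a learned multi-modal model."""
    vec = rng.normal(size=DIM)
    return vec / np.linalg.norm(vec)

def annotate(photo_pixels, k=2):
    """Return the k items whose embeddings are closest (cosine) to the photo."""
    q = embed_photo(photo_pixels)
    scores = item_vecs @ q  # cosine similarity, since all vectors are unit-norm
    top = np.argsort(scores)[::-1][:k]
    return [(items[i], float(scores[i])) for i in top]

if __name__ == "__main__":
    for item, score in annotate(photo_pixels=None):
        print(f"{score:.2f}  {item['type']:7s} {item['text']}  geo={item['geo']}")
```

In a real system the image and text encoders would be trained jointly so that a photo and its matching descriptions land near each other in the shared space; the retrieval step itself would stay the same.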

Published

2016-03-05

How to Cite

Lee, W.-Y. (2016). Multi-Modal Learning over User-Contributed Content from Cross-Domain Social Media. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.9813