Crowdsourcing for Multiple-Choice Question Answering

Authors

  • Bahadir Ismail Aydin, State University of New York at Buffalo
  • Yavuz Selim Yilmaz, State University of New York at Buffalo
  • Yaliang Li, State University of New York at Buffalo
  • Qi Li, State University of New York at Buffalo
  • Jing Gao, State University of New York at Buffalo
  • Murat Demirbas, State University of New York at Buffalo

DOI:

https://doi.org/10.1609/aaai.v28i2.19016

Abstract

We leverage crowd wisdom for multiple-choice question answering, and employ lightweight machine learning techniques to improve the aggregation accuracy of crowdsourced answers to these questions. In order to develop more effective aggregation methods and evaluate them empirically, we developed and deployed a crowdsourced system for playing the “Who wants to be a millionaire?” quiz show. Analyzing our data (which consist of more than 200,000 answers), we find that by simply going with the most selected answer in the aggregation, we can answer over 90% of the questions correctly, but the success rate of this technique plunges to 60% for the later/harder questions in the quiz show. To improve the success rates on these later/harder questions, we investigate novel weighted aggregation schemes for combining the answers obtained from the crowd. By using weights optimized for the reliability of participants (derived from the participants’ confidence), we show that we can raise the accuracy on the harder questions by 15%, reaching an overall average accuracy of 95%. Our results make a good case for the benefits of applying machine learning techniques to build more accurate crowdsourced question answering systems.
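The two aggregation strategies contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: here each participant's weight is approximated directly by a self-reported confidence value in [0, 1], whereas the paper derives optimized reliability weights from participant confidence.

```python
from collections import Counter, defaultdict

def majority_vote(answers):
    """Baseline aggregation: pick the most-selected option."""
    return Counter(answers).most_common(1)[0][0]

def weighted_vote(answers, confidences):
    """Weighted aggregation: each participant's answer counts in
    proportion to a reliability weight (here, a stand-in confidence
    score in [0, 1]); the option with the highest total weight wins."""
    scores = defaultdict(float)
    for option, conf in zip(answers, confidences):
        scores[option] += conf
    return max(scores, key=scores.get)

# On a hard question the plurality answer ("B") may be wrong,
# while the more confident participants favor "C".
answers = ["B", "B", "B", "C", "C"]
confidences = [0.3, 0.4, 0.3, 0.9, 0.95]
print(majority_vote(answers))               # B
print(weighted_vote(answers, confidences))  # C
```

This illustrates why weighting helps on harder questions: when the crowd splits, a small group of reliable participants can outweigh a larger but less certain plurality.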

Published

2014-07-27

How to Cite

Aydin, B. I., Yilmaz, Y. S., Li, Y., Li, Q., Gao, J., & Demirbas, M. (2014). Crowdsourcing for Multiple-Choice Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 28(2), 2946-2953. https://doi.org/10.1609/aaai.v28i2.19016