Rigorously Collecting Commonsense Judgments for Complex Question-Answer Content

Authors

  • Mehrnoosh Sameki Boston University
  • Aditya Barua Google Inc.
  • Praveen Paritosh Google Inc.

DOI:

https://doi.org/10.1609/hcomp.v3i1.13267

Abstract

Community Question Answering (CQA) websites are a popular tool for internet users to fulfill diverse information needs. Posted questions can be multiple sentences long and span diverse domains. They go beyond factoid questions and can be conversational, opinion-seeking, or experiential, and may have multiple, potentially conflicting yet useful answers from different users. In this paper, we describe a large-scale formative study to collect commonsense properties of questions and answers from 18 diverse communities on stackexchange.com. We collected 50,000 human judgments on 500 question-answer pairs. Commonsense properties are features that humans can extract and characterize reliably using their commonsense knowledge and native language skills, with no special domain expertise assumed. We report results and suggestions for designing human computation tasks for collecting commonsense semantic judgments.

Published

2016-03-28

How to Cite

Sameki, M., Barua, A., & Paritosh, P. (2016). Rigorously Collecting Commonsense Judgments for Complex Question-Answer Content. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3(1), 26-33. https://doi.org/10.1609/hcomp.v3i1.13267

Issue

Section

Crowdsourcing Breakthroughs for Language Technology Applications Workshop