Ontology Quality Assurance with the Crowd

Authors

  • Jonathan Mortensen, Stanford University
  • Mark Musen, Stanford University
  • Natalya Noy, Stanford University

DOI:

https://doi.org/10.1609/hcomp.v1i1.13111

Keywords:

biomedicine, ontology, crowdsourcing, human computation

Abstract

The Semantic Web has the potential to change the Web as we know it. However, the community faces a significant challenge in managing, aggregating, and curating the massive amount of data and knowledge. Human computation is only beginning to play an essential role in the curation of these Web-based data. Ontologies, which facilitate data integration and search, serve as a central component of the Semantic Web, but they are large, complex, and typically require extensive expert curation. Furthermore, ontology-engineering tasks require more knowledge than a typical crowdsourcing task does. We have developed ontology-engineering methods that leverage the crowd. In this work, we describe our general crowdsourcing workflow. We then highlight our work on applying this workflow to ontology verification and quality assurance. In a pilot study, this method approaches expert ability, finding the same errors that experts identified with 86% accuracy in a faster and more scalable fashion. The work provides a general framework with which to develop crowdsourcing methods for the Semantic Web. In addition, it highlights opportunities for future research in human computation and crowdsourcing.
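
The abstract does not spell out how verification tasks are posed or how worker answers are combined, so the sketch below is only an illustration: it assumes candidate subclass statements are turned into yes/no microtasks and that worker votes are aggregated by simple majority. The statements, question template, and aggregation rule are hypothetical assumptions, not the authors' published design.

    # Illustrative sketch only: true/false verification microtasks over candidate
    # subclass statements, aggregated by simple majority vote.
    from collections import Counter

    # Hypothetical candidate statements drawn from an ontology's class hierarchy.
    candidate_statements = [
        ("Myocardial infarction", "is a kind of", "Heart disease"),
        ("Heart disease", "is a kind of", "Myocardial infarction"),  # likely an error
    ]

    def to_question(statement):
        # Render a statement as a yes/no question a crowd worker could answer.
        subject, relation, obj = statement
        return f"Is it true that '{subject}' {relation} '{obj}'?"

    def aggregate(votes):
        # Majority vote over worker answers ('yes'/'no'); ties stay unresolved.
        counts = Counter(votes)
        if counts["yes"] == counts["no"]:
            return "unresolved"
        return "verified" if counts["yes"] > counts["no"] else "flagged as possible error"

    # Simulated worker responses; a real deployment would collect these from a
    # crowdsourcing platform.
    responses = {
        candidate_statements[0]: ["yes", "yes", "yes", "no", "yes"],
        candidate_statements[1]: ["no", "no", "yes", "no", "no"],
    }
    for statement, votes in responses.items():
        print(to_question(statement), "->", aggregate(votes))

In practice, simple majority voting could be replaced by per-worker reliability weighting or qualification tests; the abstract does not indicate which aggregation strategy the authors used.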

Published

2013-11-03

How to Cite

Mortensen, J., Musen, M., & Noy, N. (2013). Ontology Quality Assurance with the Crowd. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 1(1), 54-55. https://doi.org/10.1609/hcomp.v1i1.13111