Testing Pre-Annotation to Help Non-Experts Identify Drug-Drug Interactions Mentioned in Drug Product Labeling
In this study, we present a system that combines text mining and crowdsourcing approaches for the detection of drug-drug interactions (DDIs) from drug package inserts. An annotation study was designed to evaluate expert versus non-expert curation performance, and the impact of NLP pre-annotation on the precision and recall of both groups. The design and development of the system and annotation study consisted of three stages. First, our existing NLP pipeline for DDI extraction was improved and used to pre-annotate 208 drug product labels with drug mentions and DDIs. Second, a machine-readable representation scheme for DDIs was created using the Annotation Ontology. This model allowed us to load the NLP pre-annotated drug label sections into a plugin for human curation built on the annotation tool DOMEO. Finally, the annotation study was performed, along with usability questionnaires to collect qualitative feedback. To our knowledge, this is the first study to compare experts and non-experts on pharmacokinetic DDI annotation. Results showed that non-experts performed worse than experts when annotating without NLP assistance, and that non-expert performance improved when assisted by the NER module of the NLP pipeline. Simplification of the NLP-assisted annotation workflow is necessary to scale our approach.