Can Embeddings Adequately Represent Medical Terminology? New Large-Scale Medical Term Similarity Datasets Have the Answer!

Authors

  • Claudia Schulz, Babylon Health
  • Damir Juric, Babylon Health

DOI:

https://doi.org/10.1609/aaai.v34i05.6404

Abstract

A large number of embeddings trained on medical data have emerged, but it remains unclear how well they represent medical terminology, in particular whether the close relationship of semantically similar medical terms is encoded in these embeddings. To date, only small datasets for testing medical term similarity are available, which is insufficient to draw conclusions about the generalisability of embeddings to the enormous number of medical terms used by doctors. We present multiple automatically created large-scale medical term similarity datasets and confirm their high quality in an annotation study with doctors. We evaluate state-of-the-art word and contextual embeddings on our new datasets, comparing multiple vector similarity metrics and word vector aggregation techniques. Our results show that current embeddings are limited in their ability to adequately encode medical terms. The novel datasets thus form a challenging new benchmark for the development of medical embeddings able to accurately represent the whole medical terminology.
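The evaluation setup the abstract describes — aggregating word vectors into a term vector and scoring term pairs with a vector similarity metric — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy vectors are invented, mean-pooling is only one of the aggregation techniques compared, and cosine is only one of the similarity metrics.

```python
import numpy as np

# Toy word vectors (hypothetical values; real experiments would use
# trained medical word or contextual embeddings).
VECTORS = {
    "myocardial": np.array([0.9, 0.1, 0.2]),
    "infarction": np.array([0.8, 0.2, 0.1]),
    "heart":      np.array([0.85, 0.15, 0.25]),
    "attack":     np.array([0.7, 0.3, 0.2]),
    "fracture":   np.array([0.1, 0.9, 0.6]),
}

def term_vector(term: str) -> np.ndarray:
    """Aggregate the word vectors of a (possibly multi-word) medical term.

    Mean-pooling is used here; the paper compares several aggregation
    techniques for multi-word terms.
    """
    return np.mean([VECTORS[w] for w in term.split()], axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, one of several possible vector similarity metrics."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically similar terms should score higher than unrelated ones.
sim_related = cosine(term_vector("myocardial infarction"),
                     term_vector("heart attack"))
sim_unrelated = cosine(term_vector("myocardial infarction"),
                       term_vector("fracture"))
print(sim_related > sim_unrelated)
```

A term similarity dataset then consists of many such term pairs, and an embedding is judged by how well its similarity scores separate (or rank) similar versus dissimilar pairs.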

Published

2020-04-03

How to Cite

Schulz, C., & Juric, D. (2020). Can Embeddings Adequately Represent Medical Terminology? New Large-Scale Medical Term Similarity Datasets Have the Answer!. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8775-8782. https://doi.org/10.1609/aaai.v34i05.6404

Section

AAAI Technical Track: Natural Language Processing