Dynamic Multi-Context Attention Networks for Citation Forecasting of Scientific Publications

Authors

  • Taoran Ji, Virginia Tech
  • Nathan Self, Virginia Tech
  • Kaiqun Fu, Virginia Tech
  • Zhiqian Chen, Mississippi State University
  • Naren Ramakrishnan, Virginia Tech
  • Chang-Tien Lu, Virginia Tech, USA

DOI:

https://doi.org/10.1609/aaai.v35i9.16970

Keywords:

Time-Series/Data Streams

Abstract

Forecasting citations of scientific patents and publications is a crucial task for understanding the evolution and development of technological domains and for foresight into emerging technologies. By construing citations as a time series, the task can be cast into the domain of temporal point processes. Most existing work on forecasting with temporal point processes, both conventional and neural network-based, only performs single-step forecasting. In citation forecasting, however, the more salient goal is n-step forecasting: predicting the arrival time and the technology class of the next n citations. In this paper, we propose Dynamic Multi-Context Attention Networks (DMA-Nets), a novel deep learning sequence-to-sequence (Seq2Seq) model with a hierarchical dynamic attention mechanism for long-term citation forecasting. Extensive experiments on two real-world datasets demonstrate that the proposed model learns better representations of conditional dependencies over historical sequences than state-of-the-art counterparts, and thus achieves significantly better performance on citation prediction. The dataset and code have been made available online.
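
The abstract frames citation forecasting as n-step prediction of the arrival time and technology class of future citations, using a Seq2Seq model with attention over the citation history. The sketch below only illustrates that general setup, assuming a plain encoder-decoder with a single attention context in PyTorch; the class name CitationSeq2Seq, all dimensions, and the decoding loop are hypothetical and do not reproduce the paper's hierarchical dynamic multi-context attention (the authors' actual code is available online, as noted above).

```python
# Illustrative sketch only: a generic Seq2Seq citation forecaster with a single
# attention context. DMA-Nets, per the abstract, use a hierarchical dynamic
# attention mechanism; names and dimensions here are assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CitationSeq2Seq(nn.Module):
    def __init__(self, num_classes: int, d_emb: int = 32, d_hid: int = 64):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, d_emb)
        self.encoder = nn.GRU(d_emb + 1, d_hid, batch_first=True)
        self.decoder = nn.GRUCell(d_emb + 1, d_hid)
        self.attn = nn.Linear(d_hid * 2, 1)           # scores over encoder states
        self.time_head = nn.Linear(d_hid * 2, 1)      # predicts inter-arrival gap
        self.class_head = nn.Linear(d_hid * 2, num_classes)

    def forward(self, gaps, classes, n_steps: int):
        """gaps: (B, T) inter-citation times; classes: (B, T) technology classes."""
        x = torch.cat([gaps.unsqueeze(-1), self.class_emb(classes)], dim=-1)
        enc_out, h = self.encoder(x)                  # enc_out: (B, T, H)
        h = h.squeeze(0)                              # (B, H)
        prev_gap, prev_cls = gaps[:, -1], classes[:, -1]
        pred_gaps, pred_logits = [], []
        for _ in range(n_steps):
            inp = torch.cat([prev_gap.unsqueeze(-1), self.class_emb(prev_cls)], dim=-1)
            h = self.decoder(inp, h)
            # Attend over the encoded citation history with the current decoder state.
            scores = self.attn(torch.cat([enc_out, h.unsqueeze(1).expand_as(enc_out)], dim=-1))
            ctx = (F.softmax(scores, dim=1) * enc_out).sum(dim=1)    # (B, H)
            feat = torch.cat([h, ctx], dim=-1)
            gap = F.softplus(self.time_head(feat)).squeeze(-1)       # positive inter-arrival time
            logits = self.class_head(feat)                           # technology-class logits
            pred_gaps.append(gap)
            pred_logits.append(logits)
            # Feed the model's own prediction back in for the next step.
            prev_gap, prev_cls = gap.detach(), logits.argmax(dim=-1)
        return torch.stack(pred_gaps, dim=1), torch.stack(pred_logits, dim=1)
```

Feeding the decoder's own predictions back as inputs is one straightforward way to obtain n-step rather than single-step forecasts; the paper's contribution, per the abstract, lies in replacing the single fixed attention context above with a hierarchical dynamic attention mechanism over the historical sequence.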

Published

2021-05-18

How to Cite

Ji, T., Self, N., Fu, K., Chen, Z., Ramakrishnan, N., & Lu, C.-T. (2021). Dynamic Multi-Context Attention Networks for Citation Forecasting of Scientific Publications. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7953-7960. https://doi.org/10.1609/aaai.v35i9.16970

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II