Dynamic Determinantal Point Processes

Authors

  • Takayuki Osogami, IBM Research AI
  • Rudy Raymond, IBM Research AI
  • Akshay Goel, Graduate School of Mathematics, Kyushu University
  • Tomoyuki Shirai, Institute of Mathematics for Industry, Kyushu University
  • Takanori Maehara, RIKEN Center for Advanced Intelligence Project

Keywords:

Determinantal point process, Time series, Learning

Abstract

The determinantal point process (DPP) has been receiving increasing attention in machine learning as a generative model of subsets consisting of relevant and diverse items. Recently, there has been significant progress in developing efficient algorithms for learning the kernel matrix that characterizes a DPP. Here, we propose a dynamic DPP, which is a DPP whose kernel can change over time, and develop efficient learning algorithms for the dynamic DPP. In the dynamic DPP, the kernel depends on the subsets selected in the past, but we assume a particular structure in the dependency to allow efficient learning. We also assume that the kernel has a low rank and exploit a recently proposed learning algorithm for the DPP with low-rank factorization, but also show that its bottleneck computation can be reduced from O(M²K) time to O(MK²) time, where M is the number of items under consideration, and K is the rank of the kernel, which can be set smaller than M by orders of magnitude.
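The paper's details are not reproduced on this page, but the kind of O(M²K) → O(MK²) saving the abstract describes is characteristic of low-rank DPP computations. As an illustrative sketch (not the authors' algorithm), the following shows the standard identity det(BBᵀ + I_M) = det(BᵀB + I_K) for a rank-K kernel L = BBᵀ: the DPP normalization constant can be obtained from a K×K determinant after forming BᵀB in O(MK²) time, instead of an M×M determinant. The matrix B here is a hypothetical low-rank factor.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 500, 8  # M items, rank-K factor (K << M)
B = rng.standard_normal((M, K))  # hypothetical low-rank factor: L = B @ B.T

# Naive route: build the M x M kernel and take an M x M log-determinant.
L = B @ B.T
_, logdet_naive = np.linalg.slogdet(L + np.eye(M))

# Low-rank route: det(B B^T + I_M) = det(B^T B + I_K).
# Forming B.T @ B costs O(M K^2); the K x K determinant costs O(K^3).
_, logdet_lowrank = np.linalg.slogdet(B.T @ B + np.eye(K))

print(np.isclose(logdet_naive, logdet_lowrank))  # the two agree
```

This identity (a special case of the Weinstein–Aronszajn identity) is the usual reason low-rank DPP learning can avoid any computation quadratic in M.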

Published

2018-04-29

How to Cite

Osogami, T., Raymond, R., Goel, A., Shirai, T., & Maehara, T. (2018). Dynamic Determinantal Point Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11598