ERMMA: Expected Risk Minimization for Matrix Approximation-based Recommender Systems

Authors

  • Dongsheng Li, IBM Research – China
  • Chao Chen, IBM Research – China
  • Qin Lv, University of Colorado Boulder
  • Li Shang, University of Colorado Boulder
  • Stephen Chu, IBM Research – China
  • Hongyuan Zha, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v31i1.10743

Keywords:

matrix approximation, recommender systems, collaborative filtering

Abstract

Matrix approximation (MA) is one of the most popular techniques in today's recommender systems. In most MA-based recommender systems, model learning is formulated as a risk minimization problem, and achieving minimum expected risk during learning is critical to recommendation accuracy. This paper addresses the expected risk minimization problem, in which the expected risk can be bounded by the sum of optimization error and generalization error. Based on uniform stability theory, we propose an expected risk minimized matrix approximation method (ERMMA), which is designed to achieve a better tradeoff between optimization error and generalization error and thereby reduce the expected risk of the learned MA models. Theoretical analysis shows that ERMMA achieves a lower expected risk bound than existing MA methods. Experimental results on the MovieLens and Netflix datasets demonstrate that ERMMA outperforms six state-of-the-art MA-based recommendation methods on both rating prediction and item ranking tasks.
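For readers unfamiliar with the setup the abstract assumes, the sketch below shows a plain regularized matrix-approximation model fit by SGD on observed ratings. It is background only, not the ERMMA objective: ERMMA modifies how the optimization-error/generalization-error tradeoff is controlled, whereas this baseline uses an ordinary L2 penalty. All function names, hyperparameters, and the toy rating matrix are illustrative.

```python
import numpy as np

def factorize(R, mask, rank=2, lr=0.02, reg=0.05, epochs=500, seed=0):
    """Approximate R ~= U @ V.T on the observed entries (mask == 1).

    Minimizes sum over observed (i, j) of (R[i,j] - U[i] @ V[j])**2
    plus an L2 penalty on the touched factor rows (standard baseline,
    not the ERMMA objective).
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

# Toy ratings: 4 users x 3 items, 0 marks a missing entry.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5],
              [0, 1, 4]], dtype=float)
mask = (R > 0).astype(int)

U, V = factorize(R, mask)
train_rmse = np.sqrt(np.mean((R - U @ V.T)[mask == 1] ** 2))
print(round(train_rmse, 3))
```

Driving the training RMSE down (optimization error) while the penalty term keeps the factors from overfitting the observed entries (generalization error) is exactly the tradeoff the paper's expected-risk bound makes explicit.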

Published

2017-02-12

How to Cite

Li, D., Chen, C., Lv, Q., Shang, L., Chu, S., & Zha, H. (2017). ERMMA: Expected Risk Minimization for Matrix Approximation-based Recommender Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10743

Section

Main Track: Machine Learning Applications