On the ERM Principle With Networked Data

Authors

  • Yuanhong Wang, Beihang University
  • Yuyi Wang, ETH Zurich
  • Xingwu Liu, Institute of Computing Technology, Chinese Academy of Sciences
  • Juhua Pu, Beihang University

DOI:

https://doi.org/10.1609/aaai.v32i1.11643

Keywords:

Generalization error bounds, Non-i.i.d. data, U-statistics, Fully polynomial-time approximation scheme

Abstract

Networked data, in which every training example involves two objects and may share common objects with other examples, is used in many machine learning tasks such as learning to rank and link prediction. A challenge of learning from networked examples is that target values are not known for some pairs of objects. In this case, neither the classical i.i.d. assumption nor techniques based on complete U-statistics can be used. Most existing theoretical results on this problem deal only with the classical empirical risk minimization (ERM) principle, which weights every example equally, but this strategy leads to unsatisfactory bounds. We consider general weighted ERM and show new universal risk bounds for this problem. These new bounds naturally define an optimization problem that leads to appropriate weights for networked examples. Though this optimization problem is not convex in general, we devise a new fully polynomial-time approximation scheme (FPTAS) to solve it.
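
As a rough illustration of the weighting idea (the notation below is a sketch and not necessarily the paper's exact formulation), weighted ERM replaces the uniform average of losses with a weighted one:

$$\hat{R}_{\mathbf{w}}(f) \;=\; \frac{1}{\sum_{i=1}^{n} w_i} \sum_{i=1}^{n} w_i \, \ell(f, z_i),$$

where $z_1, \dots, z_n$ are the networked examples and $w_i \ge 0$. A natural constraint, assumed here only for illustration, is that for every shared object $v$ the weights of the examples involving $v$ sum to at most one, i.e. $\sum_{i : v \in z_i} w_i \le 1$, so that heavily connected examples cannot dominate the risk estimate. Classical ERM corresponds to uniform weights $w_i = 1$ for all $i$; the bounds described in the abstract instead suggest choosing the weights by solving an optimization problem, which is the (generally non-convex) problem addressed by the FPTAS.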

Published

2018-04-29

How to Cite

Wang, Y., Wang, Y., Liu, X., & Pu, J. (2018). On the ERM Principle With Networked Data. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11643