Opposite Online Learning via Sequentially Integrated Stochastic Gradient Descent Estimators

Authors

  • Wenhai Cui, Zhongtai Securities Institute for Financial Studies, Shandong University
  • Xiaoting Ji, Zhongtai Securities Institute for Financial Studies, Shandong University
  • Linglong Kong, Department of Mathematical and Statistical Sciences, University of Alberta
  • Xiaodong Yan, Zhongtai Securities Institute for Financial Studies, Shandong University; Shandong Province Key Laboratory of Financial Risk; Shandong National Center for Applied Mathematics

DOI:

https://doi.org/10.1609/aaai.v37i6.25886

Keywords:

ML: Time-Series/Data Streams, ML: Optimization

Abstract

The stochastic gradient descent (SGD) algorithm is popular across many fields of artificial intelligence and serves as a prototype of online learning algorithms. This article proposes a novel and general SGD-based framework for one-sided testing with streaming data, which determines whether an unknown parameter exceeds a certain positive constant. We construct the online-updated test statistic sequentially by integrating each selected batch-specific estimator or its opposite, a scheme we refer to as opposite online learning. The batch-specific online estimators are chosen strategically according to sequential tactics designed via a two-armed bandit process. Theoretical results establish the advantage of this strategy: it ensures that the distribution of the test statistic is optimal under the null hypothesis, and it supplies theoretical evidence of power enhancement over the classical test statistic. In application, the proposed method is appealing for one-sided statistical inference because it scales to any model. Finally, its superior finite-sample performance is demonstrated in simulation studies.
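To make the abstract's idea concrete, here is a minimal, hypothetical sketch of the general flavor of the approach, not the authors' exact procedure: each batch yields an SGD estimator, a two-armed bandit (epsilon-greedy here, purely for illustration) chooses between that estimator and its sign-flipped "opposite," and the chosen values are summed into a running one-sided test statistic. All names, the loss, the reward definition, and the data-generating setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_sgd_mean(x, lr=0.1, theta0=0.0):
    """One SGD pass over a batch, minimizing 0.5 * (theta - x_i)^2 per point.
    This drives theta toward the batch mean (an exponentially weighted average)."""
    theta = theta0
    for xi in x:
        theta -= lr * (theta - xi)  # gradient step
    return theta

# Hypothetical setup: test H0: theta <= c against H1: theta > c on streaming batches.
c = 1.0
n_batches, batch_size = 50, 100
stat = 0.0
reward = np.zeros(2)  # cumulative reward per arm: 0 = keep estimator, 1 = use its opposite
count = np.zeros(2)   # pulls per arm

for b in range(n_batches):
    x = rng.normal(c + 0.3, 1.0, batch_size)  # data simulated under the alternative
    centered = batch_sgd_mean(x) - c
    # Two-armed epsilon-greedy choice between the estimator and its sign-flipped opposite.
    if count.min() == 0 or rng.random() < 0.1:
        arm = b % 2                            # explore both arms occasionally
    else:
        arm = int(np.argmax(reward / count))   # exploit the better-rewarded arm
    signed = centered if arm == 0 else -centered
    count[arm] += 1
    reward[arm] += signed  # illustrative reward: contribution toward a large statistic
    stat += signed

# Under the alternative, the statistic drifts positive; a large value rejects H0.
print(round(stat, 3))
```

In this toy version, the bandit quickly learns which sign of the batch estimator pushes the statistic upward; the paper's actual tactics and the resulting null distribution guarantees are developed formally in the full text.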

Published

2023-06-26

How to Cite

Cui, W., Ji, X., Kong, L., & Yan, X. (2023). Opposite Online Learning via Sequentially Integrated Stochastic Gradient Descent Estimators. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7270-7278. https://doi.org/10.1609/aaai.v37i6.25886

Section

AAAI Technical Track on Machine Learning I