Robust Optimization over Multiple Domains


  • Qi Qian Alibaba Group
  • Shenghuo Zhu Alibaba Group
  • Jiasheng Tang Alibaba Group
  • Rong Jin Alibaba Group
  • Baigui Sun Alibaba Group
  • Hao Li Alibaba Group



In this work, we study the problem of learning a single model for multiple domains. Unlike the conventional machine learning scenario where each domain can have its own model, multiple domains (i.e., applications/users) may have to share the same machine learning model to limit the maintenance load in cloud computing services. For example, a single digit-recognition model should be applicable to hand-written digits, house numbers, car plates, etc. An ideal model for cloud computing therefore has to perform well in each applicable domain. To address this new challenge from cloud computing, we develop a framework of robust optimization over multiple domains. In lieu of minimizing the empirical risk, we aim to learn a model optimized for the adversarial distribution over the multiple domains. Hence, we propose to learn the model and the adversarial distribution simultaneously with a stochastic algorithm for efficiency. Theoretically, we analyze the convergence rate for convex and non-convex models. To the best of our knowledge, this is the first study of the convergence rate of learning a robust non-convex model with a practical algorithm. Furthermore, we demonstrate that the robustness of the framework and the convergence rate can be further enhanced by appropriate regularizers over the adversarial distribution. The empirical study on real-world fine-grained visual categorization and digit recognition tasks verifies the effectiveness and efficiency of the proposed framework.
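The framework the abstract describes is a min-max problem: minimize over model parameters the worst-case (adversarially weighted) average of per-domain losses, with a regularizer keeping the adversarial distribution away from a single domain. The sketch below is an illustrative reading of that setup, not the paper's algorithm: it uses a toy linear-regression task per domain, plain gradient descent on the model, and an entropy-regularized exponentiated-gradient ascent step on the domain distribution; all step sizes and the entropy weight are made-up values for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): K "domains", each a small linear-regression task
# sharing the same ground-truth weights, so one model can fit all of them.
K, d, n = 3, 5, 40
Xs = [rng.normal(size=(n, d)) for _ in range(K)]
w_true = rng.normal(size=d)
ys = [X @ w_true + 0.1 * rng.normal(size=n) for X in Xs]

def domain_loss_grad(w, k):
    """Mean squared error on domain k and its gradient in w."""
    X, y = Xs[k], ys[k]
    r = X @ w - y
    return (r @ r) / n, (2.0 / n) * (X.T @ r)

# Simultaneous min-max updates: gradient descent on the model w,
# exponentiated-gradient ascent on the adversarial domain distribution p.
w = np.zeros(d)
p = np.full(K, 1.0 / K)
eta_w, eta_p, tau = 0.05, 0.5, 0.1  # illustrative step sizes / entropy weight

for t in range(500):
    losses = np.empty(K)
    grads = np.empty((K, d))
    for k in range(K):
        losses[k], grads[k] = domain_loss_grad(w, k)
    # Descent step on the model under the current adversarial weights p.
    w -= eta_w * (p @ grads)
    # Ascent step on p for the entropy-regularized objective
    # p . losses - tau * sum_k p_k log p_k, then renormalize onto the simplex.
    p *= np.exp(eta_p * (losses - tau * (np.log(p) + 1.0)))
    p /= p.sum()

worst = max(domain_loss_grad(w, k)[0] for k in range(K))
```

The entropy term plays the role of the regularizer over the adversarial distribution mentioned in the abstract: without it, `p` can collapse onto the single hardest domain, which makes the stochastic updates noisier and the learned model less balanced across domains.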




How to Cite

Qian, Q., Zhu, S., Tang, J., Jin, R., Sun, B., & Li, H. (2019). Robust Optimization over Multiple Domains. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4739-4746.



AAAI Technical Track: Machine Learning