Enhance Robustness of Machine Learning with Improved Efficiency
DOI:
https://doi.org/10.1609/aaai.v37i13.26828
Keywords:
New Faculty Highlights
Abstract
Robustness of machine learning, often referring to maintaining performance across different data, remains an active field due to the ubiquitous variety and diversity of data in practice. Many studies in recent years have investigated how to make the learning process robust. To this end, there is usually a trade-off that incurs some extra cost, e.g., more data samples, more complicated objective functions, or more iterations to converge in optimization. The problem then boils down to finding a better trade-off under given conditions. My recent research focuses on robust machine learning with improved efficiency. In particular, efficiency here refers to the learning speed of finding a model and the number of data samples required to secure robustness. In the talk, I will survey three pieces of my recent research, elaborating the algorithmic ideas and theoretical analyses as technical contributions --- (i) epoch stochastic gradient descent ascent for min-max problems, (ii) a stochastic optimization algorithm for non-convex inf-projection problems, and (iii) neighborhood conformal prediction. In the first two pieces of work, the proposed optimization algorithms are general and cover objective functions for robust machine learning. In the third, I will elaborate an efficient conformal prediction algorithm that guarantees the robustness of prediction after a model is trained. Here, the efficiency of conformal prediction is measured by its bandwidth.
Downloads
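To make the bandwidth notion concrete, the following is a minimal sketch of standard split conformal prediction for regression, not the neighborhood conformal prediction algorithm from the talk. All names (`predict`, `predict_interval`) and the synthetic data are illustrative assumptions; the point is that the width of the returned interval is the "bandwidth" being minimized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + noise (illustrative stand-in for a real task)
x_train = rng.uniform(0, 1, 200)
y_train = 2 * x_train + rng.normal(0, 0.1, 200)
x_cal = rng.uniform(0, 1, 100)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 100)

# "Pre-trained model": a least-squares linear fit on the training split
A = np.vstack([x_train, np.ones_like(x_train)]).T
slope, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]
predict = lambda x: slope * x + intercept

# Calibration: absolute-residual conformity scores on a held-out split
scores = np.abs(y_cal - predict(x_cal))

# The (1 - alpha) conformal quantile gives the interval half-width
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def predict_interval(x):
    """Prediction interval with ~(1 - alpha) marginal coverage."""
    y_hat = predict(x)
    return y_hat - q, y_hat + q

lo, hi = predict_interval(0.5)
bandwidth = hi - lo  # interval width; a smaller bandwidth is more efficient
```

The coverage guarantee is distribution-free and holds after the model is trained, which is why the remaining question is efficiency: among all procedures with valid coverage, the one producing narrower intervals (smaller bandwidth) is preferred.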
Published
2023-09-06
How to Cite
Yan, Y. (2023). Enhance Robustness of Machine Learning with Improved Efficiency. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15461-15461. https://doi.org/10.1609/aaai.v37i13.26828
Issue
Section
New Faculty Highlights