TY - JOUR
AU - Liao, Jingxian
AU - Wang, Wei
AU - Xue, Jason
AU - Lei, Anthony
AU - Han, Xue
AU - Lu, Kun
PY - 2022/06/28
Y2 - 2024/03/28
TI - Combating Sampling Bias: A Self-Training Method in Credit Risk Models
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 11
SE - IAAI Technical Track on Emerging Applications of AI
DO - 10.1609/aaai.v36i11.21528
UR - https://ojs.aaai.org/index.php/AAAI/article/view/21528
SP - 12566-12572
AB - A significant challenge in credit risk models for underwriting is the presence of bias in the model training data. Because most credit risk models are built using only applicants who were funded for credit, such non-random sampling, predominantly influenced by credit policymakers and previous loan performance, may introduce sampling bias into the models and thus alter their predictions of default on loan repayment when screening applications from prospective borrowers. In this paper, we propose a novel data augmentation method that aims to identify and pseudo-label parts of the historically declined loan applications to mitigate sampling bias in the training data. We also introduce a new measure to assess performance from the business perspective: loan application approval rates at various loan default rate levels. Our proposed methods were compared to the original supervised learning model and to the industry's traditional remedies for sampling issues. Experimental and early production results from the deployed model show that the self-training method, with calibrated probability as the data augmentation selection criterion, improved the ability of credit scoring to differentiate default loan applications and, more importantly, can increase the loan approval rate by up to 8.8%, while keeping the default rate similar to the baselines. The results demonstrate practical implications for how future underwriting model development processes should be designed.
ER -