Human-Corrected Labels Learning: Enhancing Labels Quality via Human Correction of VLMs Discrepancies
DOI:
https://doi.org/10.1609/aaai.v40i28.39504
Abstract
Vision-Language Models (VLMs), with their powerful content generation capabilities, have been successfully applied to data annotation processes. However, VLM-generated labels exhibit two limitations: low quality (i.e., label noise) and the absence of an error-correction mechanism. To enhance label quality, we propose Human-Corrected Labels (HCLs), a novel setting that enables efficient human correction of VLM-generated noisy labels. As shown in Figure 1(b), HCL strategically deploys human correction only for instances on which VLMs disagree, achieving both higher-quality annotations and reduced labor costs. Specifically, we theoretically derive a risk-consistent estimator that incorporates both human-corrected labels and VLM predictions to train classifiers. In addition, we propose a conditional probability method that estimates the label distribution from a combination of VLM outputs and model predictions. Extensive experiments demonstrate that our approach achieves superior classification performance and is robust to label noise, validating the effectiveness of HCL in practical weak-supervision scenarios.
Published
2026-03-14
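The discrepancy-based routing described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function names, the use of exactly two VLM annotators, and the human oracle interface are all assumptions for the sketch.

```python
# Hypothetical sketch of HCL-style annotation routing: instances where
# multiple VLM annotators disagree are sent for human correction, while
# agreeing labels are accepted as-is, reducing human labeling cost.
# All names here are illustrative, not taken from the paper.

def route_annotations(vlm_labels_a, vlm_labels_b, human_oracle):
    """Return labels, querying the human oracle only on VLM discrepancies."""
    labels, num_queries = [], 0
    for la, lb in zip(vlm_labels_a, vlm_labels_b):
        if la == lb:
            # The two VLMs agree: accept the shared label without human effort.
            labels.append(la)
        else:
            # Discrepancy: ask a human to correct this instance.
            labels.append(human_oracle(len(labels)))
            num_queries += 1
    return labels, num_queries

# Toy usage: two VLMs disagree on one of four instances, so only one
# human query is issued.
vlm_a = ["cat", "dog", "cat", "bird"]
vlm_b = ["cat", "dog", "fish", "bird"]
true_labels = ["cat", "dog", "cat", "bird"]
labels, queries = route_annotations(vlm_a, vlm_b, lambda i: true_labels[i])
```

Under this scheme the human annotation budget scales with the VLM disagreement rate rather than with the dataset size, which is the cost saving the abstract points to.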
How to Cite
Li, Z., Chen, L., Xu, Y., Xu, S., & Xu, X. (2026). Human-Corrected Labels Learning: Enhancing Labels Quality via Human Correction of VLMs Discrepancies. Proceedings of the AAAI Conference on Artificial Intelligence, 40(28), 23346–23354. https://doi.org/10.1609/aaai.v40i28.39504
Issue
Section
AAAI Technical Track on Machine Learning V