Towards Fair and Selectively Privacy-Preserving Models Using Negative Multi-Task Learning (Student Abstract)
Keywords: Multi-task Learning, Gender Bias, Selective Privacy-Preserving
Abstract: Deep learning models have shown strong performance on natural language processing tasks. While much attention has been paid to improving utility, privacy leakage and social bias are two major concerns in trained models. To tackle both problems, we protect individuals' sensitive information and mitigate gender bias simultaneously. First, we propose a selective privacy-preserving method that obscures only individuals' sensitive information. Then we propose a negative multi-task learning framework, containing a main task and a gender prediction task, to mitigate gender bias. We analyze two existing word embeddings and evaluate them on a sentiment analysis task and a medical text classification task. Our experimental results show that our negative multi-task learning framework can mitigate gender bias while preserving model utility.
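The abstract does not spell out the training procedure, but a negative multi-task setup of this kind is commonly realized by letting a gender-prediction head learn normally while its gradient contribution to the shared encoder is negated, so the shared representation is pushed away from encoding gender. The following minimal NumPy sketch illustrates that idea on synthetic data; the two-head linear model, the labels, and the weighting factor `lam` are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 drives the main label, feature 1 the gender label.
n, d, h = 64, 16, 8
X = rng.normal(size=(n, d))
y_main = (X[:, 0] > 0).astype(float)    # hypothetical main-task labels
y_gender = (X[:, 1] > 0).astype(float)  # hypothetical gender labels

W_enc = rng.normal(scale=0.1, size=(d, h))  # shared encoder
w_main = rng.normal(scale=0.1, size=h)      # main-task head
w_gen = rng.normal(scale=0.1, size=h)       # gender-prediction head
lr, lam = 0.1, 0.5                          # lam weights the negated task

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    Z = X @ W_enc                  # shared representation
    g_main = (sigmoid(Z @ w_main) - y_main) / n    # BCE grad w.r.t. logits
    g_gen = (sigmoid(Z @ w_gen) - y_gender) / n
    # Both heads descend normally (the gender head keeps learning to predict).
    w_main -= lr * (Z.T @ g_main)
    w_gen -= lr * (Z.T @ g_gen)
    # Shared encoder: main-task gradient MINUS the gender-task gradient,
    # so the representation is trained to be uninformative about gender.
    grad_enc = X.T @ np.outer(g_main, w_main) - lam * X.T @ np.outer(g_gen, w_gen)
    W_enc -= lr * grad_enc

acc_main = np.mean((sigmoid(X @ W_enc @ w_main) > 0.5) == y_main)
print(acc_main)
```

The sign flip on the gender-task gradient is the "negative" part of the framework: the gender head remains a competent predictor, which keeps its gradient signal meaningful, while the shared encoder is driven in the opposite direction.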
How to Cite
Gao, L., Zhan, H., Chen, A., & Sheng, V. S. (2023). Towards Fair and Selectively Privacy-Preserving Models Using Negative Multi-Task Learning (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16214-16215. https://doi.org/10.1609/aaai.v37i13.26967
AAAI Student Abstract and Poster Program