Fairness in AI-Based Mental Health: Clinician Perspectives and Bias Mitigation
DOI: https://doi.org/10.1609/aies.v7i1.31732

Abstract
There is limited research on fairness in automated decision-making systems in the clinical domain, particularly in mental health. Our study explores clinicians' perceptions of AI fairness through two distinct scenarios: violence risk assessment and depression phenotype recognition using textual clinical notes. We engage with clinicians through semi-structured interviews to understand their fairness perceptions and to identify appropriate quantitative fairness objectives for these scenarios. We then compare a set of bias mitigation strategies developed to improve at least one of the four selected fairness objectives. Our findings underscore the importance of carefully selecting fairness measures, as prioritizing less relevant measures can have a detrimental rather than a beneficial effect on model behavior in real-world clinical use.
Published
2024-10-16
How to Cite
Sogancioglu, G., Mosteiro, P., Salah, A. A., Scheepers, F., & Kaya, H. (2024). Fairness in AI-Based Mental Health: Clinician Perspectives and Bias Mitigation. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1390-1400. https://doi.org/10.1609/aies.v7i1.31732
Section
Full Archival Papers