U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making
DOI: https://doi.org/10.1609/aaai.v38i18.29972
Keywords: PEAI: Safety, Robustness & Trustworthiness; ML: Evaluation and Analysis; ML: Other Foundations of Machine Learning
Abstract
With growing concerns regarding bias and discrimination in predictive models, the AI community has increasingly focused on assessing the trustworthiness of AI systems. Conventionally, the trustworthy AI literature relies on the probabilistic framework and on calibration as prerequisites for trustworthiness. In this work, we depart from this viewpoint by proposing a novel trust framework inspired by the philosophy literature on trust. We present a precise mathematical definition of trustworthiness, termed U-trustworthiness, tailored to a class of tasks aimed at maximizing a utility function. We argue that a model's U-trustworthiness is contingent upon its ability to maximize Bayes utility within this class of tasks. Our first set of results challenges the probabilistic framework by demonstrating that it can favor less trustworthy models and yield misleading trustworthiness assessments. Within the context of U-trustworthiness, we prove that properly-ranked models are inherently U-trustworthy. Furthermore, we advocate the AUC metric as the preferred measure of trustworthiness. Backed by both theoretical guarantees and experimental validation, AUC enables a robust evaluation of trustworthiness, thereby improving model selection and hyperparameter tuning toward more trustworthy outcomes.
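The abstract's central claim, that ranking (as measured by AUC) rather than calibration should ground trustworthiness assessment, can be illustrated with a small sketch. The code below is not from the paper; it simply shows that AUC is invariant under any strictly monotone rescaling of a model's scores, so a badly miscalibrated model with the same ranking receives the same AUC, whereas calibration-based criteria would distinguish the two.

```python
# Hedged illustration (not the authors' code): AUC depends only on the
# ordering of scores, not on their calibration.
import random

def auc(labels, scores):
    """Probability that a random positive outranks a random negative
    (ties count half) -- the Mann-Whitney formulation of ROC AUC."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
labels = [int(random.random() < 0.5) for _ in range(500)]
# A "calibrated-looking" score: a noisy probability near the true label.
scores = [min(1.0, max(0.0, y * 0.7 + random.gauss(0.15, 0.2))) for y in labels]
# A miscalibrated but order-preserving transform of the same scores.
squashed = [s ** 4 for s in scores]

print(auc(labels, scores), auc(labels, squashed))  # identical AUC values
```

Because s ** 4 is strictly monotone on [0, 1], both score vectors induce the same ranking of instances and therefore the same AUC, even though the squashed scores are no longer well-calibrated probabilities.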
Published: 2024-03-24
How to Cite
Vashistha, R., & Farahi, A. (2024). U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 19956-19964. https://doi.org/10.1609/aaai.v38i18.29972
Section: AAAI Technical Track on Philosophy and Ethics of AI