Amortized Generation of Sequential Algorithmic Recourses for Black-Box Models
Keywords: Machine Learning (ML), Humans And AI (HAI), Philosophy And Ethics Of AI (PEAI)
Abstract
Explainable machine learning (ML) has gained traction in recent years due to the increasing adoption of ML-based systems in many sectors. Algorithmic Recourses (ARs) provide "what if" feedback of the form "if an input datapoint were x' instead of x, then an ML-based system's output would be y' instead of y." Recourses are attractive due to their actionable feedback, amenability to existing legal frameworks, and fidelity to the underlying ML model. Yet, current recourse approaches are single-shot; that is, they assume x can change to x' in a single time period. We propose a novel stochastic-control-based approach that generates sequential recourses, that is, recourses that allow x to move stochastically and sequentially across intermediate states to a final state x'. Our approach is model-agnostic and black-box. Furthermore, the calculation of recourses is amortized: once trained, it applies to multiple datapoints without the need for re-optimization. In addition to these primary characteristics, our approach admits optional desiderata such as adherence to the data manifold, respect for causal relations, and sparsity, which past research has identified as desirable properties of recourses. We evaluate our approach using three real-world datasets and show successful generation of sequential recourses that respect these other recourse desiderata.
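The core idea of a sequential recourse can be illustrated with a minimal sketch. Here a toy black-box classifier and a hand-written policy stand in for the paper's actual components: in the paper, the policy would be a trained, amortized model (reusable across datapoints), and the black box an arbitrary ML system. The names `black_box`, `policy`, and `sequential_recourse` are illustrative assumptions, not the authors' API.

```python
# Toy black-box classifier: approves (returns 1) when a weighted score
# of the two features exceeds a threshold. In practice this is any
# opaque ML model we can only query.
def black_box(x):
    return 1 if 0.6 * x[0] + 0.4 * x[1] > 1.0 else 0

# Hypothetical amortized "policy": in the paper this is learned once via
# stochastic control and reused for new datapoints; here a fixed small
# step toward a higher score stands in for it.
def policy(x):
    return (0.06, 0.04)

def sequential_recourse(x, max_steps=50):
    """Move x through intermediate states until the black-box output flips."""
    path = [tuple(x)]
    for _ in range(max_steps):
        if black_box(x) == 1:  # desired outcome reached
            break
        step = policy(x)
        x = (x[0] + step[0], x[1] + step[1])
        path.append(tuple(x))
    return path

# A rejected datapoint is moved sequentially to an accepted state x'.
path = sequential_recourse((0.5, 0.5))
print(black_box(path[-1]))  # -> 1 once the recourse succeeds
```

The returned `path` is the sequence of intermediate states x, ..., x' that distinguishes a sequential recourse from a single-shot one; a single-shot method would output only `path[-1]`.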
How to Cite
Verma, S., Hines, K., & Dickerson, J. P. (2022). Amortized Generation of Sequential Algorithmic Recourses for Black-Box Models. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8512-8519. https://doi.org/10.1609/aaai.v36i8.20828
AAAI Technical Track on Machine Learning III