Interpretable Low-Resource Legal Decision Making
Keywords: AI For Social Impact (AISI Track Papers Only)
Abstract
Over the past several years, legal applications of deep learning have been on the rise. However, as in other high-stakes decision-making areas, interpretability is of crucial importance. Models currently used by legal practitioners are largely conventional machine learning models, which are inherently interpretable yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we apply deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, we introduce a model-agnostic interpretable intermediate layer, a technique which proves effective for legal documents. Furthermore, we employ weakly supervised learning by means of a curriculum learning strategy, demonstrating the improved performance of a deep learning model over conventional models, which can only make use of the limited number of samples manually annotated, at considerable expense, by legal experts. Although the methods presented in this work tackle the task of risk of confusion for trademarks, it is straightforward to extend them to other fields of law or, more generally, to other similar high-stakes application scenarios.
How to Cite
Bhambhoria, R., Liu, H., Dahan, S., & Zhu, X. (2022). Interpretable Low-Resource Legal Decision Making. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 11819-11827. https://doi.org/10.1609/aaai.v36i11.21438
AAAI Special Track on AI for Social Impact