GLUECons: A Generic Benchmark for Learning under Constraints
DOI:
https://doi.org/10.1609/aaai.v37i8.26143
Keywords:
ML: Evaluation and Analysis (Machine Learning), CV: Language and Vision, SNLP: Learning & Optimization for SNLP, CSO: Solvers and Tools, KRR: Logic Programming, KRR: Ontologies and Semantic Web, ML: Multi-Class/Multi-Label Learning & Extreme Classification, ML: Optimization, ML: Semi-Supervised Learning
Abstract
Recent research has shown that integrating domain knowledge into deep learning architectures is effective; it reduces the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of the models. However, the research community lacks a common benchmark for systematically evaluating knowledge integration methods. In this work, we create a benchmark that is a collection of nine tasks in the domains of natural language processing and computer vision. In all cases, we model external knowledge as constraints, specify the sources of the constraints for each task, and implement various models that use these constraints. We report the results of these models using a new set of extended evaluation criteria in addition to task performance, enabling a more in-depth analysis. This effort provides a framework for a more comprehensive and systematic comparison of constraint integration techniques and for identifying related research challenges. It will facilitate further research on alleviating shortcomings of state-of-the-art neural models.
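As a rough illustration of what "learning under constraints" can look like in practice (this is a generic sketch, not the benchmark's actual code), one common approach is to add a differentiable penalty for constraint violations to the ordinary task loss. The class indices, the mutual-exclusivity constraint, and the weight below are hypothetical and only serve to make the idea concrete.

```python
# Minimal sketch: a soft constraint expressed as a penalty term on the loss.
# Hypothetical example; not taken from the GLUECons implementation.
import torch
import torch.nn.functional as F

def constrained_loss(logits, labels, constraint_weight=0.1):
    # Standard supervised task loss.
    task_loss = F.cross_entropy(logits, labels)

    # Example domain constraint: the (hypothetical) classes 0 and 1 are
    # mutually exclusive, so both should not receive high probability.
    probs = torch.softmax(logits, dim=-1)
    violation = (probs[:, 0] * probs[:, 1]).mean()

    # The penalty softly enforces the constraint during training.
    return task_loss + constraint_weight * violation
```

Other integration strategies evaluated in work of this kind include enforcing constraints only at inference time (e.g., via constrained decoding or ILP) rather than through the training loss.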
Published
2023-06-26
How to Cite
Rajaby Faghihi, H., Nafar, A., Zheng, C., Mirzaee, R., Zhang, Y., Uszok, A., Wan, A., Premsri, T., Roth, D., & Kordjamshidi, P. (2023). GLUECons: A Generic Benchmark for Learning under Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9552-9561. https://doi.org/10.1609/aaai.v37i8.26143
Section
AAAI Technical Track on Machine Learning III