Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation

Authors

  • Ieva Staliūnaitė, Huawei Noah's Ark Lab
  • Philip John Gorinski, Huawei Noah's Ark Lab
  • Ignacio Iacobacci, Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v35i15.17630

Keywords:

Discourse, Pragmatics & Argument Mining

Abstract

Determining the plausibility of causal relations between clauses is a commonsense reasoning task that requires complex inference ability. The general approach to this task is to train a large pretrained language model on a specific dataset. However, training data for the task is often scarce, which leads to unstable model training or reliance on shallow dataset features. This paper presents a number of techniques for making models more robust in the domain of causal reasoning. First, we perform adversarial training by generating perturbed inputs through synonym substitution. Second, based on a linguistic theory of discourse connectives, we perform data augmentation, using a discourse parser to detect causally linked clauses in large text and a generative language model to generate distractors. Both methods boost model performance on the Choice of Plausible Alternatives (COPA) dataset, as well as on Balanced COPA, a modified version of the original data developed to avoid superficial cues, yielding a more challenging benchmark. We show a statistically significant improvement in performance and robustness on both datasets, even with only a small number of additionally generated data points.
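The adversarial-training idea in the abstract rests on label-preserving input perturbation via synonym substitution. The following is a minimal illustrative sketch of that perturbation step only; the toy synonym table, function name, and substitution rate are assumptions for illustration, not the authors' actual lexicon or attack procedure.

```python
import random

# Toy synonym table -- a stand-in for whatever lexical resource the
# actual system uses; entries here are purely illustrative.
SYNONYMS = {
    "man": ["guy", "gentleman"],
    "broke": ["shattered", "smashed"],
    "fell": ["tumbled", "dropped"],
}

def perturb(sentence: str, rate: float = 0.5, seed: int = 0) -> str:
    """Replace some words with synonyms, producing a perturbed input
    that should keep the same causal-plausibility label."""
    rng = random.Random(seed)
    out = []
    for token in sentence.split():
        key = token.lower()
        if key in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[key]))
        else:
            out.append(token)
    return " ".join(out)

print(perturb("The man fell because the ladder broke"))
```

In an adversarial-training loop, such perturbed sentences would be fed to the model alongside the originals with unchanged labels, so the model is penalized for relying on surface word forms.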

Published

2021-05-18

How to Cite

Staliūnaitė, I., Gorinski, P. J., & Iacobacci, I. (2021). Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13834-13842. https://doi.org/10.1609/aaai.v35i15.17630

Section

AAAI Technical Track on Speech and Natural Language Processing II