*-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task

Authors

  • Dmitry Tsarkov Google
  • Tibor Tihon Google
  • Nathan Scales Google
  • Nikola Momchev Google
  • Danila Sinopalnikov Google
  • Nathanael Schärli Google

DOI:

https://doi.org/10.1609/aaai.v35i11.17195

Keywords:

Scalability of ML Systems, Evaluation and Analysis (Machine Learning), Interpretability & Analysis of NLP Models, Lexical & Frame Semantics, Semantic Parsing

Abstract

We present *-CFQ ("star-CFQ"): a suite of large-scale datasets of varying scope based on the CFQ semantic parsing benchmark, designed for principled investigation of the scalability of machine learning systems in a realistic compositional task setting. Using this suite, we conduct a series of experiments investigating the ability of Transformers to benefit from increased training data size under conditions of fixed computational cost. We show that compositional generalization remains a challenge at all training sizes, and we show that increasing the scope of natural language leads to consistently higher error rates, which are only partially offset by increased training data. We further show that while additional training data from a related domain improves the accuracy in data-starved situations, this improvement is limited and diminishes as the distance from the related domain to the target domain increases.

Published

2021-05-18

How to Cite

Tsarkov, D., Tihon, T., Scales, N., Momchev, N., Sinopalnikov, D., & Schärli, N. (2021). *-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9949-9957. https://doi.org/10.1609/aaai.v35i11.17195

Section

AAAI Technical Track on Machine Learning IV