Generalizing Math Word Problem Solvers via Solution Diversification
DOI:
https://doi.org/10.1609/aaai.v37i11.26548
Keywords:
SNLP: Question Answering, KRR: Argumentation, ML: Applications, ML: Probabilistic Methods, SNLP: Applications
Abstract
Current math word problem (MWP) solvers are usually Seq2Seq models trained on (one-problem; one-solution) pairs, each consisting of a problem description and a solution equation that shows the reasoning flow leading to the correct answer. However, an MWP naturally admits multiple solution equations. Training an MWP solver on (one-problem; one-solution) pairs excludes the other correct solutions and thus limits the solver's generalizability. One feasible remedy is to augment the training data with multiple solutions per problem, but collecting diverse and accurate augmented solutions through human annotation is difficult. In this paper, we design a new training framework for MWP solvers by introducing a solution buffer and a solution discriminator. The buffer stores solutions generated by the solver itself to diversify the training data, while the discriminator controls which buffered solutions are of sufficient quality to participate in training. Our framework is flexibly applicable to fully, semi-weakly, and weakly supervised training of any Seq2Seq MWP solver. We conduct extensive experiments on the benchmark dataset Math23k and a new dataset named Weak12k, and show that our framework improves the performance of various MWP solvers under different settings by generating correct and diverse solutions.
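To make the framework outlined in the abstract more concrete, below is a minimal Python sketch of a buffer-and-discriminator training loop. The solver interface (generate/train_step), the dataset format, and the answer-consistency check standing in for the solution discriminator are illustrative assumptions for this sketch, not the paper's actual implementation.

# Minimal sketch of the buffer-and-discriminator training loop described above.
# The solver interface, the equation evaluator, and the acceptance criterion
# are illustrative assumptions, not the paper's implementation.

import random
from typing import Dict, List, Optional, Set


class SolutionBuffer:
    """Stores deduplicated alternative solution equations per problem."""

    def __init__(self) -> None:
        self._store: Dict[str, Set[str]] = {}

    def add(self, problem_id: str, equation: str) -> None:
        self._store.setdefault(problem_id, set()).add(equation)

    def sample(self, problem_id: str) -> Optional[str]:
        pool = list(self._store.get(problem_id, ()))
        return random.choice(pool) if pool else None


def discriminator_accepts(equation: str, gold_answer: float) -> bool:
    """Accept a generated equation only if it evaluates to the gold answer.
    Here the 'discriminator' is an answer-consistency check on arithmetic
    expressions; the paper's discriminator may differ."""
    try:
        value = eval(equation, {"__builtins__": {}})  # numeric expressions only
    except Exception:
        return False
    return isinstance(value, (int, float)) and abs(value - gold_answer) < 1e-4


def train_epoch(solver, dataset: List[dict], buffer: SolutionBuffer) -> None:
    """One pass over examples with keys "id", "text", "equation", "answer".
    `solver` is assumed to expose .generate(text) and .train_step(text, eq)."""
    for example in dataset:
        pid, text = example["id"], example["text"]
        # 1) Let the current solver propose a candidate solution equation.
        candidate = solver.generate(text)
        # 2) The discriminator decides whether it may enter the buffer.
        if discriminator_accepts(candidate, example["answer"]):
            buffer.add(pid, candidate)
        # 3) Train on the annotated equation or a buffered alternative, so the
        #    solver is exposed to diverse correct solutions of the same problem.
        target = buffer.sample(pid) or example["equation"]
        solver.train_step(text, target)

Sampling a buffered alternative in step 3 is one simple way to expose the solver to multiple correct solutions of the same problem; how the paper actually schedules between annotated and buffered solutions may differ.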
Published
2023-06-26
How to Cite
Liang, Z., Zhang, J., Wang, L., Wang, Y., Shao, J., & Zhang, X. (2023). Generalizing Math Word Problem Solvers via Solution Diversification. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13183-13191. https://doi.org/10.1609/aaai.v37i11.26548
Issue
Vol. 37 No. 11
Section
AAAI Technical Track on Speech & Natural Language Processing