Improving Open-Domain Dialogue Response Generation with Multi-Source Multilingual Commonsense Knowledge
DOI:
https://doi.org/10.1609/aaai.v38i17.29894
Keywords:
NLP: Conversational AI/Dialog Systems, NLP: Generation
Abstract
Knowledge-Grounded Dialogue Response Generation (KRG) uses external knowledge to produce informative and faithful dialogue responses. Prior monolingual work can only draw on knowledge in the corresponding native language. Because collecting and constructing external knowledge bases is prohibitively expensive, the limited scale of accessible external knowledge constrains KRG, especially in low-resource language scenarios. To this end, we propose a new task, Multi-Source Multilingual Knowledge-Grounded Response Generation (MMKRG), which uses multiple knowledge sources in different languages simultaneously. We observe that simply combining knowledge across languages is inefficient due to the Cross-Conflict and Cross-Repetition issues. We therefore propose a novel approach, MMK-BART, which uses a simple but elegant Estimate-Cluster-Penalize mechanism to overcome these issues and adopts the multilingual language model mBART as its backbone. Meanwhile, based on the recent multilingual corpus XDailyDialog, we propose an MMKRG dataset, MMK-DailyDialog, which is aligned to the large-scale multilingual commonsense knowledge base ConceptNet and supports four languages (English, Chinese, German, and Italian). Extensive experiments verify the effectiveness of our dataset and approach in monolingual, cross-lingual, and multilingual scenarios.
Published
2024-03-24
How to Cite
Wu, S., Yu, J., Chen, J., Deng, X., & Zhou, W. (2024). Improving Open-Domain Dialogue Response Generation with Multi-Source Multilingual Commonsense Knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19252-19260. https://doi.org/10.1609/aaai.v38i17.29894
Section
AAAI Technical Track on Natural Language Processing II