TY - JOUR
AU - Zadeh, Amir
AU - Liang, Paul Pu
AU - Mazumder, Navonil
AU - Poria, Soujanya
AU - Cambria, Erik
AU - Morency, Louis-Philippe
PY - 2018/04/27
Y2 - 2024/03/28
TI - Memory Fusion Network for Multi-view Sequential Learning
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 32
IS - 1
SE - Main Track: NLP and Machine Learning
DO - 10.1609/aaai.v32i1.12021
UR - https://ojs.aaai.org/index.php/AAAI/article/view/12021
AB - Multi-view sequential learning is a fundamental problem in machine learning dealing with multi-view sequences. In a multi-view sequence, there exist two forms of interactions between different views: view-specific interactions and cross-view interactions. In this paper, we present a new neural architecture for multi-view sequential learning called the Memory Fusion Network (MFN) that explicitly accounts for both interactions in a neural architecture and continuously models them through time. The first component of the MFN is called the System of LSTMs, where view-specific interactions are learned in isolation by assigning an LSTM function to each view. The cross-view interactions are then identified using a special attention mechanism called the Delta-memory Attention Network (DMAN) and summarized through time with a Multi-view Gated Memory. Through extensive experimentation, MFN is compared to various proposed approaches for multi-view sequential learning on multiple publicly available benchmark datasets. MFN outperforms all the multi-view approaches. Furthermore, MFN outperforms all current state-of-the-art models, setting new state-of-the-art results for all three multi-view datasets.
ER -