Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency

Authors

  • Yuhong Chen College of Computer and Data Science, Fuzhou University; Guangdong Institute of Intelligence Science and Technology
  • Ailin Song Guangdong Institute of Intelligence Science and Technology; University of Electronic Science and Technology of China
  • Huifeng Yin Center for Brain Inspired Computing Research, Department of Precision Instrument, Tsinghua University
  • Shuai Zhong Guangdong Institute of Intelligence Science and Technology
  • Fuhai Chen College of Computer and Data Science, Fuzhou University
  • Qi Xu School of Computer Science and Technology, Dalian University of Technology
  • Shiping Wang College of Computer and Data Science, Fuzhou University
  • Mingkun Xu Guangdong Institute of Intelligence Science and Technology; Center for Brain Inspired Computing Research, Department of Precision Instrument, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v39i2.32115

Abstract

The rapid evolution of multimedia technology has revolutionized human perception, paving the way for multi-view learning. However, traditional multi-view learning approaches are tailored for scenarios with fixed data views, falling short of emulating the intricate cognitive procedures of the human brain, which processes signals sequentially. Our cerebral architecture seamlessly integrates sequential data through intricate feed-forward and feedback mechanisms. In stark contrast, traditional methods struggle to generalize effectively when confronted with data spanning diverse domains, highlighting the need for innovative strategies that can mimic the brain's adaptability and dynamic integration capabilities. In this paper, we propose a bio-neurologically inspired multi-view incremental framework named MVIL, aimed at emulating the brain's fine-grained fusion of sequentially arriving views. MVIL rests on two fundamental modules: structured Hebbian plasticity and synaptic partition learning. The structured Hebbian plasticity reshapes the structure of weights to express the high correlation between view representations, facilitating a fine-grained fusion of view representations. Moreover, synaptic partition learning alleviates drastic changes in weights and retains old knowledge by inhibiting a subset of synapses. These brain-inspired modules play a central role in reinforcing crucial associations between newly acquired information and existing knowledge repositories, thereby enhancing the network's capacity for generalization. Experimental results on six benchmark datasets show MVIL's effectiveness over state-of-the-art methods.
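To make the two ideas in the abstract concrete, here is a minimal, illustrative sketch of a Hebbian-style weight update combined with a synaptic "partition" mask that freezes part of the weights. This is not the paper's actual formulation; the function `hebbian_update`, the magnitude-based mask heuristic, and all dimensions are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(W, x, y, lr=0.01, mask=None):
    """One Hebbian step: strengthen weights where pre- and post-synaptic
    activity co-occur (delta_W proportional to y x^T). An optional binary
    mask inhibits a partition of synapses so that weights presumed to
    encode old knowledge are left unchanged."""
    delta = lr * np.outer(y, x)   # classic Hebbian outer-product update
    if mask is not None:
        delta *= mask             # update only the unfrozen partition
    return W + delta

# Representations of an incoming view and the current fused state
# (dimensions are purely illustrative).
x_view = rng.normal(size=8)
y_fused = rng.normal(size=4)

W = np.zeros((4, 8))
W = hebbian_update(W, x_view, y_fused)

# Synaptic partition: freeze the synapses with above-median magnitude,
# a crude stand-in for "important for previously learned views".
thresh = np.median(np.abs(W))
mask = (np.abs(W) <= thresh).astype(float)
W2 = hebbian_update(W, x_view, y_fused, mask=mask)
```

In this toy version, the second update leaves the frozen (high-magnitude) synapses untouched, which is the intuition behind retaining old knowledge while still adapting the remaining synapses to a newly arriving view.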

Published

2025-04-11

How to Cite

Chen, Y., Song, A., Yin, H., Zhong, S., Chen, F., Xu, Q., … Xu, M. (2025). Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency. Proceedings of the AAAI Conference on Artificial Intelligence, 39(2), 1265–1273. https://doi.org/10.1609/aaai.v39i2.32115

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems