Manhattan Self-Attention Diffusion Residual Networks with Dynamic Bias Rectification for BCI-based Few-Shot Learning
DOI: https://doi.org/10.1609/aaai.v39i13.33580

Abstract
The distribution biases and scarcity of samples in multi-source data present significant challenges for few-shot learning (FSL) tasks based on brain-computer interfaces (BCI). Recent efforts have explored diffusion mechanisms in FSL, typically using labeled data to augment the support set. However, this approach neither exploits unlabeled data effectively nor addresses distribution biases. Inspired by the latest advances in FSL, we propose Manhattan self-attention diffusion residual networks (MSADiff-Resnet) with dynamic bias rectification. The model explicitly adds a Manhattan self-attention diffusion layer to ResNet, using attention mechanisms and a Manhattan distance-based decay function to control local diffusion intensity, and adjusts the global diffusion strength through a scalar parameter. This diffusion mechanism bridges labeled and unlabeled data, addressing the limitations imposed by sample scarcity. Additionally, we tackle the distribution biases of multi-source data through inter-class bias rectification and dynamic intra-class bias rectification. Moreover, this study presents, for the first time, a universal deep learning framework specifically designed for BCI-based FSL tasks. Extensive experiments on multi-source BCI task datasets validate the effectiveness of the proposed method.
Published
2025-04-11
How to Cite
Wang, H., Xu, L., Yu, Y., Ding, W., & Xu, Y. (2025). Manhattan Self-Attention Diffusion Residual Networks with Dynamic Bias Rectification for BCI-based Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(13), 14423-14431. https://doi.org/10.1609/aaai.v39i13.33580
Section
AAAI Technical Track on Humans and AI