Wills Aligner: Multi-Subject Collaborative Brain Visual Decoding
DOI:
https://doi.org/10.1609/aaai.v39i13.33554
Abstract
Decoding visual information from human brain activity has seen remarkable advancements in recent research. However, the diversity in cortical parcellation and fMRI patterns across individuals has prompted the development of deep learning models tailored to each subject. This personalization limits the broader applicability of brain visual decoding in real-world scenarios. To address this issue, we introduce Wills Aligner, a novel approach designed to achieve multi-subject collaborative brain visual decoding. Wills Aligner begins by aligning the fMRI data from different subjects at the anatomical level. It then employs carefully designed mixture-of-brain-expert adapters and a meta-learning strategy to account for individual differences in fMRI patterns. Additionally, Wills Aligner leverages the semantic relations among visual stimuli to guide the learning of inter-subject commonality, enabling visual decoding for each subject to draw on other subjects' data. We rigorously evaluate Wills Aligner across various visual decoding tasks, including classification, cross-modal retrieval, and image reconstruction. The experimental results demonstrate that Wills Aligner achieves promising performance.
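The abstract's mixture-of-brain-expert idea can be illustrated with a minimal sketch: a shared projection trained on all subjects, plus one lightweight per-subject expert that absorbs individual fMRI pattern differences, with routing by subject identity. All names, shapes, and the additive routing rule below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_OUT, N_SUBJECTS = 8, 4, 3  # hypothetical feature sizes

# Shared backbone projection, conceptually learned from all subjects' data.
W_shared = rng.standard_normal((D_IN, D_OUT)) * 0.1

# One small expert per subject to capture individual fMRI pattern differences.
experts = [rng.standard_normal((D_IN, D_OUT)) * 0.1 for _ in range(N_SUBJECTS)]

def adapt(fmri, subject_id):
    """Route anatomically aligned fMRI features through the shared
    backbone plus the expert matching the given subject."""
    return fmri @ W_shared + fmri @ experts[subject_id]

x = rng.standard_normal(D_IN)   # one aligned fMRI feature vector
out = adapt(x, subject_id=1)
print(out.shape)  # (4,)
```

In this toy setup the shared weights carry inter-subject commonality while each expert stays small, which is the general motivation for adapter-style mixtures over fully separate per-subject models.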
Published
2025-04-11
How to Cite
Bao, G., Zhang, Q., Gong, Z., Zhou, J., Fan, W., Yi, K., Naseem, U., Hu, L., & Miao, D. (2025). Wills Aligner: Multi-Subject Collaborative Brain Visual Decoding. Proceedings of the AAAI Conference on Artificial Intelligence, 39(13), 14194-14202. https://doi.org/10.1609/aaai.v39i13.33554
Section
AAAI Technical Track on Humans and AI