Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain

Authors

  • Xu Liu, School of Automation, Northwestern Polytechnical University
  • Mengyue Zhou, School of Automation, Northwestern Polytechnical University
  • Gaosheng Shi, School of Automation, Northwestern Polytechnical University
  • Yu Du, School of Automation, Northwestern Polytechnical University
  • Lin Zhao, School of Computing, University of Georgia
  • Zihao Wu, School of Computing, University of Georgia
  • David Liu, Athens Academy
  • Tianming Liu, School of Computing, University of Georgia
  • Xintao Hu, School of Automation, Northwestern Polytechnical University

DOI:

https://doi.org/10.1609/aaai.v37i7.26068

Keywords:

ML: Bio-Inspired Learning, SNLP: Interpretability & Analysis of NLP Models

Abstract

Linking computational natural language processing (NLP) models and neural responses to language in the human brain, on the one hand, facilitates the effort to disentangle the neural representations underpinning language perception; on the other hand, it provides neurolinguistic evidence for evaluating and improving NLP models. Mappings between an NLP model’s representations of linguistic input and the brain activities evoked by that input are typically deployed to reveal this symbiosis. However, two critical problems limit its advancement: 1) the model’s representations (artificial neurons, ANs) rely on layer-level embeddings and thus lack fine granularity; 2) the brain activities (biological neurons, BNs) are limited to neural recordings of isolated cortical units (i.e., voxels/regions) and thus lack integration and interaction among brain functions. To address these problems, in this study we 1) define fine-grained ANs in transformer-based NLP models (BERT in this study) and measure their temporal activations in response to input text sequences; 2) define BNs as functional brain networks (FBNs) extracted from functional magnetic resonance imaging (fMRI) data to capture functional interactions in the brain; 3) couple ANs and BNs by maximizing the synchronization of their temporal activations. Our experimental results demonstrate that 1) the activations of ANs and BNs are significantly synchronized; 2) the ANs carry meaningful linguistic/semantic information and anchor to their BN signatures; 3) the anchored BNs are interpretable in a neurolinguistic context. Overall, our study introduces a novel, general, and effective framework for linking transformer-based NLP models and neural activities in response to language, and it may provide novel insights for future studies such as brain-inspired evaluation and development of NLP models.
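
To make the coupling idea concrete, the sketch below illustrates (under stated assumptions, not as the authors' released pipeline) how fine-grained AN activations could be extracted from BERT's hidden states for a text sequence and paired with FBN time series by maximal temporal synchronization. Pearson correlation is used here as a stand-in synchronization measure, and `fbn_timeseries` is a simulated placeholder for FBN signals derived from fMRI.

```python
# Minimal sketch, assuming Pearson correlation as the synchronization measure
# and simulated FBN signals; this is not the authors' exact method.
import numpy as np
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

text = "the quick brown fox jumps over the lazy dog"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Stack hidden states into (layers+1, tokens, hidden_dim); each (layer, unit)
# pair is treated as one fine-grained AN with a temporal activation over tokens.
hidden = torch.stack(outputs.hidden_states).squeeze(1)                # (L+1, T, D)
T = hidden.shape[1]
an_activations = hidden.permute(0, 2, 1).reshape(-1, T).numpy()       # (num_ANs, T)

# Hypothetical BN time series: one per functional brain network, assumed to be
# resampled to the same temporal length as the token sequence (simulated here).
num_fbns = 10
fbn_timeseries = np.random.randn(num_fbns, T)

def zscore(x):
    # Standardize each time series so the dot product yields Pearson r.
    return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)

# Couple ANs and BNs by maximal temporal synchronization (Pearson r proxy).
corr = zscore(an_activations) @ zscore(fbn_timeseries).T / T          # (num_ANs, num_FBNs)
anchored_bn = corr.argmax(axis=1)   # index of the BN signature anchored to each AN
```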

Published

2023-06-26

How to Cite

Liu, X., Zhou, M., Shi, G., Du, Y., Zhao, L., Wu, Z., Liu, D., Liu, T., & Hu, X. (2023). Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8888-8896. https://doi.org/10.1609/aaai.v37i7.26068

Section

AAAI Technical Track on Machine Learning II