Self-Attention Mechanisms as Representations for Gene Interaction Networks in Hypothesis-Driven Gene-based Transformer Genomics AI Models

Authors

  • Hong Qin, Old Dominion University

DOI:

https://doi.org/10.1609/aaaiss.v4i1.31813

Abstract

In this position paper, we propose a framework for hypothesis-driven genomic AI that uses the self-attention mechanisms of gene-based transformer models to represent gene interaction networks. With genes treated as tokens, a hypothesis can be introduced into these transformer models as an attention mask. This approach can bridge the gap between genotypic data and phenotypic observations by encoding prior knowledge as masks in the transformer models. Because each attention mask serves as a hypothesis that guides model fitting, the proposed framework can potentially assess competing hypotheses to determine which best explains the experimental observations. The proposed framework can enhance the interpretability and predictive power of genomic AI, advancing personalized medicine and promoting healthcare equity.
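The core idea of the abstract — a hypothesized gene interaction network supplied as an attention mask over gene tokens — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, embedding dimensions, and the toy interaction network are all hypothetical, and the mask is applied additively before the softmax so that non-interacting gene pairs receive (near-)zero attention weight.

```python
import numpy as np

def masked_self_attention(X, W_q, W_k, W_v, interaction_mask):
    """Scaled dot-product self-attention over gene tokens, restricted
    by a hypothesized gene interaction network (binary adjacency matrix).

    X: (n_genes, d) gene-token embeddings
    interaction_mask: (n_genes, n_genes) 1 = pair may attend, 0 = blocked
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d)
    # Forbid attention between gene pairs absent from the hypothesis network
    scores = np.where(interaction_mask.astype(bool), scores, -1e9)
    # Row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Hypothetical example: 4 genes, embedding dimension 8
rng = np.random.default_rng(0)
n_genes, d = 4, 8
X = rng.normal(size=(n_genes, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

# Hypothesis: gene 0 interacts with gene 1, gene 2 with gene 3;
# self-attention (diagonal) is always allowed.
A = np.eye(n_genes)
A[0, 1] = A[1, 0] = 1.0
A[2, 3] = A[3, 2] = 1.0

out, attn = masked_self_attention(X, W_q, W_k, W_v, A)
# attn[0, 2] and attn[0, 3] are (near) zero: the hypothesis blocks them
```

Fitting the same model under different masks, and comparing the resulting fits to experimental observations, is one way the mask-as-hypothesis comparison described in the abstract could be realized.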

Published

2024-11-08

How to Cite

Qin, H. (2024). Self-Attention Mechanisms as Representations for Gene Interaction Networks in Hypothesis-Driven Gene-based Transformer Genomics AI Models. Proceedings of the AAAI Symposium Series, 4(1), 334-336. https://doi.org/10.1609/aaaiss.v4i1.31813

Section

Machine Intelligence for Equitable Global Health (MI4EGH) - Extended Abstracts