Sparse Maximum Margin Learning from Multimodal Human Behavioral Patterns

Authors

  • Ervine Zheng, Rochester Institute of Technology
  • Qi Yu, Rochester Institute of Technology
  • Zhi Zheng, Rochester Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v37i4.25676

Keywords:

APP: Healthcare, Medicine & Wellness, ML: Bayesian Learning

Abstract

We propose a multimodal data fusion framework to systematically analyze human behavioral data from specialized domains that are inherently dynamic, sparse, and heterogeneous. We develop a two-tier architecture of probabilistic mixtures: the lower tier leverages parametric distributions from the exponential family to extract significant behavioral patterns from each data modality, and the higher tier organizes these patterns into a dynamic latent state space that fuses patterns across modalities. In addition, the framework jointly performs pattern discovery and maximum-margin learning for downstream classification by placing a group-wise sparse prior on the coefficients of the maximum-margin classifier. The discovered patterns are therefore both interpretable and discriminative. Experiments on real-world behavioral data from medical and psychological domains demonstrate that our framework discovers meaningful multimodal behavioral patterns with improved interpretability and prediction performance.
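To make the group-wise sparse maximum-margin idea concrete, below is a minimal sketch of a linear max-margin (hinge-loss) classifier whose weights are shrunk block-wise, one coefficient block per modality's discovered patterns, via a group-lasso penalty solved by proximal gradient descent. This is an illustrative analogue, not the paper's method: the group layout, hyperparameters, and the explicit penalty (in place of the paper's Bayesian group-wise sparse prior) are all assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# hinge loss + group-lasso penalty, one coefficient group per modality.
import numpy as np

def hinge_loss_grad(w, X, y):
    """Subgradient of the average hinge loss max(0, 1 - y * Xw)."""
    margins = y * (X @ w)
    active = margins < 1.0  # samples violating the margin
    return -(X[active] * y[active, None]).sum(axis=0) / len(y)

def group_prox(w, groups, step, lam):
    """Prox of the group-lasso penalty: block soft-thresholding.

    Shrinks each modality's coefficient block toward zero, zeroing out
    entire blocks whose patterns do not help classification.
    """
    w = w.copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        w[idx] = 0.0 if norm == 0 else max(0.0, 1 - step * lam / norm) * w[idx]
    return w

def fit(X, y, groups, lam=0.1, step=0.1, iters=500):
    """Proximal (sub)gradient descent on hinge loss + group-lasso penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = group_prox(w - step * hinge_loss_grad(w, X, y), groups, step, lam)
    return w

# Toy usage: 3 hypothetical modalities contributing 4 pattern features each;
# only the first modality carries signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = np.sign(X[:, :4] @ rng.normal(size=4))
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
w = fit(X, y, groups)
print([round(float(np.linalg.norm(w[g])), 3) for g in groups])
```

In this toy run, the block soft-thresholding drives the two uninformative coefficient blocks toward zero while retaining the informative one, mirroring how a group-wise sparse prior keeps only the patterns that support the downstream classification task.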

Published

2023-06-26

How to Cite

Zheng, E., Yu, Q., & Zheng, Z. (2023). Sparse Maximum Margin Learning from Multimodal Human Behavioral Patterns. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 5437-5445. https://doi.org/10.1609/aaai.v37i4.25676

Section

AAAI Technical Track on Domain(s) of Application