Sleep-Like Unsupervised Replay Improves Performance When Data Are Limited or Unbalanced (Student Abstract)

Authors

  • Anthony Bazhenov, Del Norte High School, San Diego, CA
  • Pahan Dewasurendra, Del Norte High School, San Diego, CA
  • Giri Krishnan, Department of Medicine, University of California San Diego, La Jolla, CA
  • Jean Erik Delanois, Department of Computer Science & Engineering, University of California San Diego, La Jolla, CA; Department of Medicine, University of California San Diego, La Jolla, CA

DOI:

https://doi.org/10.1609/aaai.v38i21.30420

Keywords:

Sleep, Unsupervised Learning, Limited Data, Low Data, Data Imbalance

Abstract

The performance of artificial neural networks (ANNs) degrades when training data are limited or imbalanced. In contrast, the human brain can learn quickly from just a few examples. Here, we investigated the role of sleep in improving the performance of ANNs trained with limited data on the MNIST and Fashion MNIST datasets. Sleep was implemented as an unsupervised phase with local Hebbian-type learning rules. We found a significant boost in accuracy after the sleep phase for models trained with limited data in the range of 0.5–10% of the total MNIST or Fashion MNIST datasets. When more than 10% of the total data was used, sleep alone had a slight negative impact on performance, but this was remedied by fine-tuning on the original data. This study sheds light on a potential synaptic weight dynamics strategy employed by the brain during sleep to enhance memory performance when training data are limited or imbalanced.
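The sleep phase described in the abstract can be sketched, in spirit, as label-free replay driven by noisy input, with weights updated by a local Hebbian rule. The toy network, the winner-take-most activation, and all parameters below are illustrative assumptions for a minimal single-layer example, not the authors' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer network: weights from 784 pixel inputs to 10 units.
W = rng.normal(0.0, 0.1, size=(10, 784))

def sleep_replay(W, n_steps=100, lr=1e-3, spike_prob=0.05):
    """Illustrative unsupervised 'sleep' phase: drive the network with
    random spike-like input and apply a local Hebbian-type update.
    No labels are used at any point."""
    W = W.copy()
    for _ in range(n_steps):
        # Sparse random binary input stands in for spontaneous replay activity.
        x = (rng.random(784) < spike_prob).astype(float)
        a = W @ x
        # Sparse activation: only units above the 90th percentile fire.
        h = (a > np.percentile(a, 90)).astype(float)
        # Hebbian potentiation for co-active input/unit pairs...
        W += lr * np.outer(h, x)
        # ...and mild depression where the unit fired but the input did not.
        W -= 0.5 * lr * np.outer(h, 1.0 - x)
    return W

W_after = sleep_replay(W)
```

In this sketch the update depends only on locally available pre- and post-synaptic activity, which is the defining property of the Hebbian-type rules the abstract refers to; the specific potentiation/depression balance here is an arbitrary choice for illustration.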

Published

2024-03-24

How to Cite

Bazhenov, A., Dewasurendra, P., Krishnan, G., & Delanois, J. E. (2024). Sleep-Like Unsupervised Replay Improves Performance When Data Are Limited or Unbalanced (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23441–23442. https://doi.org/10.1609/aaai.v38i21.30420