Associative Memory Using Dictionary Learning and Expander Decoding

Authors

  • Arya Mazumdar, University of Massachusetts Amherst
  • Ankit Singh Rawat, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v31i1.10515

Abstract

An associative memory is a framework of content-addressable memory that stores a collection of message vectors (or a dataset) over a neural network while enabling a neurally feasible mechanism to recover any message in the dataset from its noisy version. Designing an associative memory requires addressing two main tasks: 1) learning phase: given a dataset, learn a concise representation of the dataset in the form of a graphical model (or a neural network); 2) recall phase: given a noisy version of a message vector from the dataset, output the correct message vector via a neurally feasible algorithm over the network learnt during the learning phase. This paper studies the problem of designing a class of neural associative memories that learn a network representation for a large dataset and ensure correction of a large number of adversarial errors during the recall phase. Specifically, the associative memories designed in this paper can store a dataset containing exp(n) n-length message vectors over a network with O(n) nodes and can tolerate Ω(n/polylog(n)) adversarial errors. The memory design is carried out by mapping the learning phase and the recall phase to the tasks of dictionary learning with a square dictionary and iterative error correction in an expander code, respectively.
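To make the recall step concrete, below is a minimal sketch (not the authors' implementation) of a parallel bit-flipping decoder in the style of Sipser-Spielman expander decoding. It assumes the network learnt in the learning phase can be summarized by a sparse binary parity-check matrix H whose bipartite graph is a good expander; the function name bit_flip_decode and the NumPy representation are illustrative choices, not taken from the paper.

```python
# Minimal sketch of iterative (parallel) bit-flipping decoding on an
# expander code, assuming the learnt network is given as a sparse
# binary parity-check matrix H. Illustrative only.
import numpy as np

def bit_flip_decode(H, y, max_iters=100):
    """Iteratively correct errors in the noisy word y.

    H : (m, n) binary parity-check matrix; its bipartite graph between
        bits (columns) and checks (rows) is assumed to be a good expander.
    y : length-n noisy message vector over GF(2), as a 0/1 int array.
    """
    x = y.copy()
    degrees = H.sum(axis=0)                   # number of checks each bit touches
    for _ in range(max_iters):
        unsatisfied = (H @ x) % 2             # parity of each check's neighbourhood
        votes = H.T @ unsatisfied             # unsatisfied-check count per bit
        flips = votes > degrees / 2           # bits whose checks mostly complain
        if not flips.any():
            break                             # all checks satisfied or no progress
        x[flips] ^= 1                         # flip those bits in parallel
    return x
```

Each iteration flips every bit for which a majority of its adjacent parity checks are unsatisfied, a local update rule that is neurally plausible in the sense used in the paper; when the underlying bipartite graph expands sufficiently, this style of decoding is known to correct a constant fraction of adversarial errors, which is what the recall-phase guarantee in the abstract builds on.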

Published

2017-02-10

How to Cite

Mazumdar, A., & Rawat, A. S. (2017). Associative Memory Using Dictionary Learning and Expander Decoding. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10515

Section

Cognitive Modeling and Cognitive Systems