Friendly Attacks to Improve Channel Coding Reliability

Authors

  • Anastasiia Kurmukova, Imperial College London
  • Deniz Gunduz, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v38i12.29230

Keywords:

ML: Applications, SO: Adversarial Search

Abstract

This paper introduces a novel approach called "friendly attack" aimed at enhancing the performance of error correction channel codes. Inspired by the concept of adversarial attacks, our method leverages the idea of introducing slight perturbations to the neural network input, resulting in a substantial impact on the network's performance. By introducing small perturbations to fixed-point modulated codewords before transmission, we effectively improve the decoder's performance without violating the input power constraint. The perturbation design is accomplished by a modified iterative fast gradient method. This study investigates various decoder architectures suitable for computing gradients to obtain the desired perturbations. Specifically, we consider belief propagation (BP) for LDPC codes; the error correcting code transformer, BP, and neural BP (NBP) for polar codes; and the neural BCJR decoder for convolutional codes. We demonstrate that the proposed friendly attack method can improve reliability across different channels, modulations, codes, and decoders. This method allows us to increase the reliability of communication with a legacy receiver by simply modifying the transmitted codeword appropriately.
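The core idea — iteratively nudging the modulated codeword along a gradient that lowers the decoder's loss, then projecting back onto the power constraint — can be illustrated with a minimal sketch. Everything here is a toy stand-in, not the paper's setup: the "decoder" is a soft-decision repetition decoder over BPSK/AWGN, the gradient is taken by finite differences rather than backpropagation through a neural decoder, and the step size, repetition factor, and noise variance are illustrative assumptions.

```python
import numpy as np

REP = 3  # assumed repetition factor for this toy code

def soft_repetition_decoder(y, sigma2):
    """Soft decoder for a rate-1/REP repetition code over BPSK + AWGN:
    sum per-symbol LLRs within each group, return bit posteriors."""
    llr = 2.0 * y / sigma2
    return 1.0 / (1.0 + np.exp(-llr.reshape(-1, REP).sum(axis=1)))

def decoder_loss(x, bits, sigma2):
    """Cross-entropy surrogate: decoder confidence on the (clean) codeword x."""
    p = soft_repetition_decoder(x, sigma2)
    eps = 1e-9
    return -np.mean(bits * np.log(p + eps) + (1.0 - bits) * np.log(1.0 - p + eps))

def friendly_attack(x, bits, sigma2, steps=10, alpha=0.05):
    """Iterative fast-gradient-style perturbation that *reduces* decoder loss,
    re-projected onto the power constraint ||x||^2 = n after each step."""
    x_adv = x.copy()
    n = x_adv.size
    h = 1e-5
    for _ in range(steps):
        base = decoder_loss(x_adv, bits, sigma2)
        g = np.zeros_like(x_adv)
        for i in range(n):  # finite-difference gradient (fine at toy scale)
            xp = x_adv.copy()
            xp[i] += h
            g[i] = (decoder_loss(xp, bits, sigma2) - base) / h
        x_adv -= alpha * np.sign(g)                    # signed descent step
        x_adv *= np.sqrt(n) / np.linalg.norm(x_adv)    # power-constraint projection
    return x_adv

# Usage: perturb a BPSK-modulated codeword for bits [1, 0, 1]
bits = np.array([1.0, 0.0, 1.0])
x = np.repeat(2.0 * bits - 1.0, REP)       # BPSK modulation, unit power per symbol
x_adv = friendly_attack(x, bits, sigma2=1.0)
```

In the paper, the gradient would instead come from differentiating through the actual decoder (BP, NBP, transformer, or neural BCJR), but the loop structure — signed gradient step followed by re-normalization so the transmitted power budget is respected — is the same shape.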

Published

2024-03-24

How to Cite

Kurmukova, A., & Gunduz, D. (2024). Friendly Attacks to Improve Channel Coding Reliability. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13292-13300. https://doi.org/10.1609/aaai.v38i12.29230

Section

AAAI Technical Track on Machine Learning III