SecDD: Efficient and Secure Method for Remotely Training Neural Networks (Student Abstract)

Authors

  • Ilia Sucholutsky University of Waterloo
  • Matthias Schonlau University of Waterloo

Keywords

Deep Learning, Dataset Distillation, Privacy Preservation, Synthetic Data

Abstract

We leverage what are typically considered the worst qualities of deep learning algorithms (high computational cost, large data requirements, lack of explainability, strong dependence on hyper-parameter choices, overfitting, and vulnerability to adversarial perturbations) to create a method for the secure and efficient training of remotely deployed neural networks over insecure channels.

Published

2021-05-18

How to Cite

Sucholutsky, I., & Schonlau, M. (2021). SecDD: Efficient and Secure Method for Remotely Training Neural Networks (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15897-15898. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17945

Section

AAAI Student Abstract and Poster Program