Interpretable Privacy Preservation of Text Representations Using Vector Steganography
DOI:
https://doi.org/10.1609/aaai.v36i11.21573
Keywords:
Privacy Preserving NLP, Language Models, Text Representations, Interpretable NLP, Lexical Semantics
Abstract
Contextual word representations generated by language models learn spurious associations present in the training corpora. Adversaries can exploit these associations to reverse-engineer the private attributes of entities mentioned in those corpora. These findings have motivated efforts to minimize the privacy risks of language models. However, existing approaches lack interpretability, compromise data utility, and fail to provide privacy guarantees. The goal of my doctoral research is therefore to develop interpretable approaches to privacy preservation of text representations that maximize data utility retention and guarantee privacy. To this end, I aim to study and develop methods that incorporate steganographic modifications within the vector geometry to obfuscate underlying spurious associations while retaining the distributional semantic properties learnt during training.
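The abstract does not specify how steganographic modifications to the vector geometry would be realized. As a purely illustrative sketch (not the author's method), the following assumes a private attribute is encoded linearly along some direction in embedding space, and shows one simple way such an association could be obfuscated: projecting the embedding onto the orthogonal complement of that direction, so a linear probe can no longer recover the attribute. The names `obfuscate`, `sensitive_dir`, and the toy vectors are all hypothetical.

```python
import numpy as np

def obfuscate(embedding: np.ndarray, sensitive_dir: np.ndarray) -> np.ndarray:
    """Remove the component of `embedding` along a hypothetical
    sensitive-attribute direction (orthogonal projection)."""
    u = sensitive_dir / np.linalg.norm(sensitive_dir)
    return embedding - np.dot(embedding, u) * u

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
sensitive = rng.normal(size=16)   # toy direction encoding a private attribute
emb = rng.normal(size=16)         # toy contextual word representation

clean = obfuscate(emb, sensitive)

# The sensitive association is removed (cosine with the attribute
# direction is ~0), while the rest of the vector is untouched.
print(abs(cos(clean, sensitive)))
```

Real steganographic approaches would go further, e.g. hiding or encrypting information within the representation rather than merely deleting it, but the projection above conveys the core idea of editing vector geometry to break an attribute association while preserving the remaining semantic structure.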
Published
2022-06-28
How to Cite
Bihani, G. (2022). Interpretable Privacy Preservation of Text Representations Using Vector Steganography. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12872-12873. https://doi.org/10.1609/aaai.v36i11.21573
Section
The Twenty-Seventh AAAI/SIGAI Doctoral Consortium