Privacy-Preserving Adversarial Networks
We propose a data-driven framework for optimizing privacy-preserving data release mechanisms to attain the information-theoretically optimal tradeoff between minimizing distortion of useful data and concealing specific sensitive information. Our approach employs adversarially trained neural networks to implement randomized mechanisms and to perform a variational approximation of mutual information privacy. We validate our Privacy-Preserving Adversarial Networks (PPAN) framework via proof-of-concept experiments on discrete and continuous synthetic data, as well as the MNIST handwritten digits dataset. For synthetic data, our model-agnostic PPAN approach achieves tradeoff points very close to the optimal tradeoffs that are analytically derived from model knowledge. In experiments with the MNIST data, we visually demonstrate a learned tradeoff between minimizing pixel-level distortion and concealing the written digit.
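To illustrate the core idea behind the variational approximation of mutual information privacy, the following is a minimal numpy-only sketch (not the paper's neural-network implementation). A toy sensitive bit S is correlated with useful data X, a hypothetical additive-noise release mechanism produces Z, and a logistic-regression adversary q(S|Z) is trained so that its cross-entropy gives a variational bound I(S;Z) >= H(S) - CE: higher adversary cross-entropy certifies more privacy, at the cost of higher distortion. All model choices (noise levels, adversary form) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Toy model (assumption): sensitive bit S, useful data X correlated with S.
S = rng.integers(0, 2, n).astype(float)
X = S + 0.5 * rng.normal(size=n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def adversary_cross_entropy(Z, S, steps=500, lr=0.5):
    """Train a logistic-regression adversary q(S|Z) by gradient descent and
    return its average cross-entropy in nats. By the variational bound,
    I(S;Z) >= H(S) - CE, so larger CE implies stronger concealment of S."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * Z + b)
        g = p - S                      # gradient of the logistic loss w.r.t. logits
        w -= lr * np.mean(g * Z)
        b -= lr * np.mean(g)
    p = np.clip(sigmoid(w * Z + b), 1e-9, 1 - 1e-9)
    return -np.mean(S * np.log(p) + (1.0 - S) * np.log(1.0 - p))

# A simple additive-noise release mechanism, swept over two noise levels
# to trace out the distortion-vs-privacy tradeoff.
results = {}
for sigma in (0.1, 2.0):
    Z = X + sigma * rng.normal(size=n)
    distortion = np.mean((Z - X) ** 2)           # approximately sigma^2
    ce = adversary_cross_entropy(Z, S)           # near ln(2) when S is hidden
    results[sigma] = (distortion, ce)
    print(f"sigma={sigma}: distortion={distortion:.2f}, adversary CE={ce:.3f}")
```

In the full PPAN framework both the mechanism and the adversary are neural networks trained adversarially; here the sweep over fixed noise levels stands in for optimizing the mechanism, but the same bound drives the privacy term of the objective.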
A. S. Tripathy et al., "Privacy-Preserving Adversarial Networks," in Proc. 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), IEEE, Sep. 2019, pp. 495–505.
The definitive version is available at https://doi.org/10.1109/ALLERTON.2019.8919758
© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.