A Generative Model for Evasion Attacks in Smart Grid
Adversarial machine learning (AML) studies how malicious inputs can fool a machine learning (ML) model and degrade its performance. Within AML, evasion attacks are a category of attack in which input data are manipulated during the testing phase to induce the ML model to misclassify them. Such manipulated inputs are called adversarial examples. In this paper, we propose a generative approach for crafting evasion attacks against three ML-based security classifiers. The proof-of-concept application for the ML-based security classifier is the classification of compromised smart meters launching false data injection attacks. Our proposed solution is validated on a real smart metering dataset. We found that compromised-meter detection performance degrades under our proposed generative evasion attack.
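To illustrate the evasion-attack setting the abstract describes (this is a generic signed-gradient sketch, not the paper's generative model), the toy example below trains a logistic-regression detector on hypothetical meter-consumption features and then perturbs a correctly flagged compromised sample, step by step against the decision-function gradient, until the detector labels it benign. The dataset, feature count, and step size are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical daily-consumption features: benign meters report ~10 kWh,
# compromised meters under-report (~6 kWh) as a false-data-injection attack.
benign = rng.normal(loc=10.0, scale=1.0, size=(200, 4))
compromised = rng.normal(loc=6.0, scale=1.0, size=(200, 4))
X = np.vstack([benign, compromised])
y = np.array([0] * 200 + [1] * 200)  # 1 = compromised

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Take a compromised sample the detector currently flags (class 1).
x = compromised[0]

# Evasion: step against the decision-function gradient. For logistic
# regression, that gradient w.r.t. the input is simply the weight vector,
# so repeated small signed steps steadily lower the "compromised" score.
w = clf.coef_[0]
x_adv = x.copy()
for _ in range(100):
    if clf.predict(x_adv.reshape(1, -1))[0] == 0:
        break  # the detector now labels the adversarial copy benign
    x_adv -= 0.2 * np.sign(w)

print("original:", clf.predict(x.reshape(1, -1))[0],
      "adversarial:", clf.predict(x_adv.reshape(1, -1))[0])
```

The perturbed readings remain numerically close to the original sample, which is the defining property of an adversarial example: a small, targeted change to the input that flips the classifier's decision.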
V. P. Madhavarapu et al., "A Generative Model for Evasion Attacks in Smart Grid," INFOCOM WKSHPS 2022 - IEEE Conference on Computer Communications Workshops, Institute of Electrical and Electronics Engineers (IEEE), Jan 2022.
The definitive version is available at https://doi.org/10.1109/INFOCOMWKSHPS54753.2022.9798325
Keywords and Phrases
Adversarial Machine Learning; AMI; Smart Grid
Article - Conference proceedings
© 2022 Institute of Electrical and Electronics Engineers, All rights reserved.
This research was supported by NSF grants: SATC-2030611, SATC-2030624, and OAC-2017289.